Project detail

Dolování infoRmAcí z řeči Pořízené vzdÁlenými miKrofony (DRAPAK: Mining information from speech acquired by distant microphones)

Duration: 1.10.2015 — 30.9.2020

Funding resources

Ministerstvo vnitra ČR - Bezpečnostní výzkum České republiky 2015-2020 (Ministry of the Interior of the Czech Republic - Security Research of the Czech Republic 2015-2020)

On the project

Dolování informací z řeči se stává nepostradatelným pro složky bojující proti kriminalitě a terorismu. Současné verze dovolují úspěšné nasazení na signálech získaných pomocí „close-talk“ mikrofonů. Cílem projektu DRAPÁK je zvýšit úspěšnost dolování v řeči pořízené vzdálenými mikrofony v reálném prostředí a generovat relevantní informace v odpovídajících operačních scénářích. Výstupem je sada softwarových nástrojů, které budou k dispozici pro testování Policií ČR a dalšími státními složkami.

Description in English
Speech data mining is becoming indispensable for units fighting crime and terrorism. Current versions of these technologies allow for successful deployment on data acquired from close-talk microphones. The goal of DRAPAK is to increase the performance of speech data mining from distant microphones in real environments and to generate relevant information in the corresponding operational scenarios. The output is a set of software tools to be tested by the Police of the Czech Republic and other state agencies.

Keywords
dolování informací z řeči, rozpoznávání řeči, rozpoznávání mluvčího, identifikace jazyka, detekce klíčových slov, vzdálené mikrofony

Keywords in English
speech data mining, speech recognition, speaker recognition, language identification, keyword spotting, distant microphones

Project code

VI20152020025

Default language

Czech

People responsible

Černocký Jan, prof. Dr. Ing. - principal investigator
Kesiraju Santosh, Ph.D. - researcher
Ondel Lucas Antoine Francois, Mgr., Ph.D. - researcher

Units

Department of Computer Graphics and Multimedia
- responsible department (12.1.2015 - not assigned)
Speech Data Mining Research Group BUT Speech@FIT
- internal (12.1.2015 - 30.9.2020)
Department of Computer Graphics and Multimedia
- beneficiary (12.1.2015 - 30.9.2020)

Results

BURGET, L.; GLEMBEK, O.; LOZANO DÍEZ, A.; MATĚJKA, P.; NOVOTNÝ, O.; PLCHOT, O.; PULUGUNDLA, B.; ROHDIN, J.; SILNOVA, A.; VESELÝ, K. BUT System Description to SdSV Challenge 2020. Proceedings of Short-duration Speaker Verification Challenge 2020 Workshop. Shanghai (online event of Interspeech 2020): 2020. p. 1-5.

BRUMMER, J.; SWART, A.; PRIETO, J.; GARCIA PERERA, L.; MATĚJKA, P.; PLCHOT, O.; DIEZ SÁNCHEZ, M.; SILNOVA, A.; JIANG, X.; NOVOTNÝ, O.; ROHDIN, J.; GLEMBEK, O.; GRÉZL, F.; BURGET, L.; ONDEL YANG, L.; PEŠÁN, J.; ČERNOCKÝ, J.; KENNY, P.; ALAM, J.; BHATTACHARYA, G.; ZEINALI, H. ABC NIST SRE 2016 SYSTEM DESCRIPTION. San Diego: National Institute of Standards and Technology, 2016. p. 1-8.

ZEINALI, H.; WANG, S.; SILNOVA, A.; MATĚJKA, P.; PLCHOT, O. BUT System Description to VoxCeleb Speaker Recognition Challenge 2019. Proceedings of The VoxCeleb Challenge Workshop 2019. Graz: 2019. p. 1-4.

ALAM, J.; BHATTACHARYA, G.; BRUMMER, J.; BURGET, L.; DIEZ SÁNCHEZ, M.; GLEMBEK, O.; KENNY, P.; KLČO, M.; LANDINI, F.; LOZANO DÍEZ, A.; MATĚJKA, P.; MONTEIRO, J.; MOŠNER, L.; NOVOTNÝ, O.; PLCHOT, O.; PROFANT, J.; ROHDIN, J.; SILNOVA, A.; SLAVÍČEK, J.; STAFYLAKIS, T.; ZEINALI, H. ABC NIST SRE 2018 SYSTEM DESCRIPTION. Proceedings of 2018 NIST SRE Workshop. Athens: National Institute of Standards and Technology, 2018. p. 1-10.

MATĚJKA, P.; PLCHOT, O.; NOVOTNÝ, O.; CUMANI, S.; LOZANO DÍEZ, A.; SLAVÍČEK, J.; DIEZ SÁNCHEZ, M.; GRÉZL, F.; GLEMBEK, O.; KAMSALI VEERA, M.; SILNOVA, A.; BURGET, L.; ONDEL YANG, L.; KESIRAJU, S.; ROHDIN, J. BUT- PT System Description for NIST LRE 2017. Proceedings of NIST Language Recognition Workshop 2017. Orlando, Florida: National Institute of Standards and Technology, 2017. p. 1-6.

LOZANO DÍEZ, A.; SILNOVA, A.; PULUGUNDLA, B.; ROHDIN, J.; VESELÝ, K.; BURGET, L.; PLCHOT, O.; GLEMBEK, O.; NOVOTNÝ, O.; MATĚJKA, P. BUT Text-Dependent Speaker Verification System for SdSV Challenge 2020. In Proceedings of Interspeech. Shanghai: International Speech Communication Association, 2020. p. 761-765. ISSN: 1990-9772.

ALAM, J.; BOULIANNE, G.; BURGET, L.; DAHMANE, M.; DIEZ SÁNCHEZ, M.; GLEMBEK, O.; LALONDE, M.; LOZANO DÍEZ, A.; MATĚJKA, P.; MIZERA, P.; MOŠNER, L.; NOISEUX, C.; MONTEIRO, J.; NOVOTNÝ, O.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; SLAVÍČEK, J.; STAFYLAKIS, T.; ST-CHARLES, P.; WANG, S.; ZEINALI, H. Analysis of ABC Submission to NIST SRE 2019 CMN and VAST Challenge. Proceedings of Odyssey 2020 The Speaker and Language Recognition Workshop. Tokyo: International Speech Communication Association, 2020. p. 289-295. ISSN: 2312-2846.

MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; ČERNOCKÝ, J. Utilizing VOiCES dataset for multichannel speaker verification with beamforming. Proceedings of Odyssey 2020 The Speaker and Language Recognition Workshop. Tokyo: International Speech Communication Association, 2020. p. 187-193. ISSN: 2312-2846.

WANG, S.; ROHDIN, J.; PLCHOT, O.; BURGET, L.; YU, K.; ČERNOCKÝ, J. Investigation of Specaugment for Deep Speaker Embedding Learning. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Barcelona: IEEE Signal Processing Society, 2020. p. 7139-7143. ISBN: 978-1-5090-6631-5.

SAGHA, H.; MATĚJKA, P.; GAVRYUOKOVA, M.; POVOLNÝ, F.; MARCHI, E.; SCHULLER, B. Enhancing multilingual recognition of emotion in speech by language identification. In Proceedings of Interspeech. San Francisco: International Speech Communication Association, 2016. p. 2949-2953. ISSN: 1990-9772.

MATĚJKA, P.; PLCHOT, O.; GLEMBEK, O.; BURGET, L.; ROHDIN, J.; ZEINALI, H.; MOŠNER, L.; SILNOVA, A.; NOVOTNÝ, O.; DIEZ SÁNCHEZ, M.; ČERNOCKÝ, J. 13 years of speaker recognition research at BUT, with longitudinal analysis of NIST SRE. COMPUTER SPEECH AND LANGUAGE, 2020, vol. 63, p. 1-15. ISSN: 0885-2308.

NOVOTNÝ, O.; MATĚJKA, P.; PLCHOT, O.; GLEMBEK, O. On the use of DNN Autoencoder for Robust Speaker Recognition. Brno: Faculty of Information Technology BUT, 2018. p. 1-5.

ALAM, J.; BOULIANNE, G.; BURGET, L.; GLEMBEK, O.; LOZANO DÍEZ, A.; MATĚJKA, P.; MIZERA, P.; MOŠNER, L.; NOVOTNÝ, O.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; SLAVÍČEK, J.; STAFYLAKIS, T.; WANG, S.; ZEINALI, H.; DAHMANE, M.; ST-CHARLES, P.; LALONDE, M.; NOISEUX, C.; MONTEIRO, J. ABC System Description for NIST Multimedia Speaker Recognition Evaluation 2019. Proceedings of NIST 2019 SRE Workshop. Sentosa, Singapore: National Institute of Standards and Technology, 2019. p. 1-7.

ALAM, J.; BOULIANNE, G.; GLEMBEK, O.; LOZANO DÍEZ, A.; MATĚJKA, P.; MIZERA, P.; MONTEIRO, J.; MOŠNER, L.; NOVOTNÝ, O.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; SLAVÍČEK, J.; STAFYLAKIS, T.; WANG, S.; ZEINALI, H. ABC NIST SRE 2019 CTS System Description. Proceedings of NIST. Sentosa, Singapore: National Institute of Standards and Technology, 2019. p. 1-6.

ZEINALI, H.; BURGET, L.; ROHDIN, J.; STAFYLAKIS, T.; ČERNOCKÝ, J.: x-vector-kaldi-tf; TensorFlow implementation of speaker recognition with x-vector topology. URL: https://github.com/BUTSpeechFIT/x-vector-kaldi-tf. (software)
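
The software result above provides a TensorFlow implementation of the x-vector topology. For orientation, the following is a minimal Keras sketch of that topology (frame-level TDNN layers, statistics pooling, segment-level embedding) as published by Snyder et al.; layer sizes, the 23-dimensional feature input, and all names here are illustrative assumptions, not code taken from the BUTSpeechFIT repository.

```python
# Minimal sketch of the x-vector topology: frame-level TDNN, statistics
# pooling, and segment-level layers. Illustrative only; sizes follow the
# commonly published recipe, not the x-vector-kaldi-tf repository itself.
import tensorflow as tf

def build_xvector_model(num_speakers: int, feat_dim: int = 23) -> tf.keras.Model:
    inp = tf.keras.Input(shape=(None, feat_dim))  # variable-length frame sequence

    # Frame-level TDNN layers, realised as dilated 1-D convolutions.
    x = tf.keras.layers.Conv1D(512, 5, dilation_rate=1, activation="relu")(inp)
    x = tf.keras.layers.Conv1D(512, 3, dilation_rate=2, activation="relu")(x)
    x = tf.keras.layers.Conv1D(512, 3, dilation_rate=3, activation="relu")(x)
    x = tf.keras.layers.Conv1D(512, 1, activation="relu")(x)
    x = tf.keras.layers.Conv1D(1500, 1, activation="relu")(x)

    # Statistics pooling: mean and standard deviation over the time axis.
    mean = tf.keras.layers.GlobalAveragePooling1D()(x)
    std = tf.keras.layers.Lambda(lambda t: tf.math.reduce_std(t, axis=1))(x)
    stats = tf.keras.layers.Concatenate()([mean, std])

    # Segment-level layers; the x-vector is taken from the first affine layer.
    emb = tf.keras.layers.Dense(512, name="xvector")(stats)
    x = tf.keras.layers.Activation("relu")(emb)
    x = tf.keras.layers.Dense(512, activation="relu")(x)
    out = tf.keras.layers.Dense(num_speakers, activation="softmax")(x)

    return tf.keras.Model(inp, out)

# During training the network classifies speakers; at test time the "xvector"
# layer output serves as the fixed-length speaker embedding.
model = build_xvector_model(num_speakers=1000)
model.summary()
```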

MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; BURGET, L.; ČERNOCKÝ, J. Speaker Verification with Application-Aware Beamforming. In IEEE Automatic Speech Recognition and Understanding Workshop - Proceedings (ASRU). Sentosa, Singapore: IEEE Signal Processing Society, 2019. p. 411-418. ISBN: 978-1-7281-0306-8.

DIEZ SÁNCHEZ, M.; BURGET, L.; LANDINI, F.; ČERNOCKÝ, J. Analysis of Speaker Diarization based on Bayesian HMM with Eigenvoice Priors. IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING, 2020, vol. 28, no. 1, p. 355-368. ISSN: 2329-9290.

NOVOTNÝ, O.; PLCHOT, O.; GLEMBEK, O.; BURGET, L.; MATĚJKA, P. Discriminatively Re-trained i-Vector Extractor For Speaker Recognition. In Proceedings of 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP). Brighton: IEEE Signal Processing Society, 2019. p. 6031-6035. ISBN: 978-1-5386-4658-8.

STAFYLAKIS, T.; ROHDIN, J.; PLCHOT, O.; MIZERA, P.; BURGET, L. Self-supervised speaker embeddings. In Proceedings of Interspeech. Graz: International Speech Communication Association, 2019. p. 2863-2867. ISSN: 1990-9772.

NOVOTNÝ, O.; PLCHOT, O.; GLEMBEK, O.; BURGET, L. Factorization of Discriminatively Trained i-Vector Extractor for Speaker Recognition. In Proceedings of Interspeech. Graz: International Speech Communication Association, 2019. p. 4330-4334. ISSN: 1990-9772.