Project detail

Robustní zpracování nahrávek pro operativu a bezpečnost

Duration: 1.10.2020 — 30.9.2025

Funding resources

Ministerstvo vnitra ČR - PROGRAM STRATEGICKÁ PODPORA ROZVOJE BEZPEČNOSTNÍHO VÝZKUMU ČR 2019-2025 (IMPAKT 1) PODPROGRAMU 1 SPOLEČNÉ VÝZKUMNÉ PROJEKTY (BV IMP1/1VS)

- sole funder (1. 10. 2020 - 30. 9. 2025)

On the project

Cílem projektu je zvýšení kompetencí, sjednocení a větší koordinace dvou předních českých výzkumných pracovišť v oboru dolování informací z řeči z reálných nahrávek pro oblast bezpečnosti a úzká spolupráce s bezpečnostními sbory na uvádění výsledků výzkumu do praxe vyšetřování a zpravodajství. Tento cíl zahrnuje posun v robustním automatickém rozpoznávání řeči (ASR), trénování/adaptaci ASR pro různá prostředí, určení, kdy kdo mluví v nahrávce (diarizace), a výzkum prohledávání nahrávek pomocí akustických dotazů (Query by Example).

Description in English
The aim of the project is to increase the competencies of two leading Czech research institutes and to unify and better coordinate them in the field of mining information from speech in real-world recordings for the security domain, together with close cooperation with security forces on bringing research results into the practice of investigation and intelligence. This goal includes advances in robust automatic speech recognition (ASR), training and adaptation of ASR for various environments, determining who speaks when in a recording (diarization), and research on searching recordings using acoustic queries (Query by Example).

Keywords
rozpoznávání řeči, robustní, nahrávky, operativa, bezpečnost

Key words in English
speech recognition, robust, recordings, operations, security

Mark

VJ01010108

Default language

Czech

People responsible

Karafiát Martin, Ing., Ph.D. - principal person responsible
Malenovský Vladimír, Ing., Ph.D. - fellow researcher

Units

Department of Computer Graphics and Multimedia
- responsible department (23.3.2020 - 30.9.2025)
Department of Computer Graphics and Multimedia
- beneficiary (23.3.2020 - 30.9.2025)

Results

ALAM, J.; BARAHONA QUIRÓS, S.; BOBOŠ, D.; BURGET, L.; CUMANI, S.; DAHMANE, M.; HAN, J.; HLAVÁČEK, M.; KODOVSKÝ, M.; LANDINI, F.; MOŠNER, L.; PÁLKA, P.; PAVLÍČEK, T.; PENG, J.; PLCHOT, O.; RAJASEKHAR, P.; ROHDIN, J.; SILNOVA, A.; STAFYLAKIS, T.; ZHANG, L. ABC System Description for NIST SRE 2024. Proceedings of NIST SRE 2024. San Juan: National Institute of Standards and Technology, 2024. p. 1-9.

YUSUF, B.; KARAFIÁT, M.; ŠVEC, J.; ŠMÍDL, L.: VJ01010108-V5; SW4 Detektor akustických vzorů (acoustic pattern detector). For download, contact: https://www.fit.vut.cz/person/karafiat/ or http://www.kky.zcu.cz/en/people/smidl-lubos. URL: https://www.fit.vut.cz/research/product/835/. (software)

MOŠNER, L.; SERIZEL, R.; BURGET, L.; PLCHOT, O.; VINCENT, E.; PENG, J.; ČERNOCKÝ, J. Multi-Channel Extension of Pre-trained Models for Speaker Verification. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Kos: International Speech Communication Association, 2024. p. 2135-2139. ISSN: 1990-9772.

ZHANG, L.; WANG, X.; COOPER, E.; DIEZ SÁNCHEZ, M.; LANDINI, F.; EVANS, N.; YAMAGISHI, J. Spoof Diarization: "What Spoofed When" in Partially Spoofed Audio. In Proceedings of Interspeech 2024. Proceedings of Interspeech. Kos: International Speech Communication Association, 2024. p. 502-506. ISSN: 1990-9772.

YUSUF, B.; ČERNOCKÝ, J.; SARAÇLAR, M. Pretraining End-to-End Keyword Search with Automatically Discovered Acoustic Units. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Kos: International Speech Communication Association, 2024. p. 5068-5072. ISSN: 1990-9772.

KUNEŠOVÁ, M.; ZAJÍC, Z.; ŠMÍDL, L.; KARAFIÁT, M. Comparison of wav2vec 2.0 models on three speech processing tasks. International Journal of Speech Technology, 2024, vol. 27, no. 4, p. 847-859. ISSN: 1572-8110.

PEŠÁN, J.; JUŘÍK, V.; RŮŽIČKOVÁ, A.; SVOBODA, V.; JANOUŠEK, O.; NĚMCOVÁ, A.; BOJANOVSKÁ, H.; ALDABAGHOVÁ, J.; KYSLÍK, F.; VODIČKOVÁ, K.; SODOMOVÁ, A.; BARTYS, P.; CHUDÝ, P.; ČERNOCKÝ, J. Speech production under stress for machine learning: multimodal dataset of 79 cases and 8 signals. Scientific Data, 2024, vol. 11, no. 1, p. 1-9. ISSN: 2052-4463.

ZHANG, L.; STAFYLAKIS, T.; LANDINI, F.; DIEZ SÁNCHEZ, M.; SILNOVA, A.; BURGET, L. Do End-to-End Neural Diarization Attractors Need to Encode Speaker Characteristic Information? Proceedings of Odyssey 2024: The Speaker and Language Recognition Workshop. Québec City: International Speech Communication Association, 2024. p. 123-130.

YUSUF, B.; SARAÇLAR, M. Written Term Detection Improves Spoken Term Detection. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024, vol. 32, no. 06, p. 3213-3223. ISSN: 2329-9290.

LANDINI, F.; DIEZ SÁNCHEZ, M.; STAFYLAKIS, T.; BURGET, L. DiaPer: End-to-End Neural Diarization With Perceiver-Based Attractors. IEEE Transactions on Audio, Speech, and Language Processing, 2024, vol. 32, no. 7, p. 3450-3465. ISSN: 1558-7916.

KLEMENT, D.; DIEZ SÁNCHEZ, M.; LANDINI, F.; BURGET, L.; SILNOVA, A.; DELCROIX, M.; TAWARA, N. Discriminative Training of VBx Diarization. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Seoul: IEEE Signal Processing Society, 2024. p. 11871-11875. ISBN: 979-8-3503-4485-1.

HAN, J.; LANDINI, F.; ROHDIN, J.; DIEZ SÁNCHEZ, M.; BURGET, L.; CAO, Y.; LU, H.; ČERNOCKÝ, J. Diacorrect: Error Correction Back-End for Speaker Diarization. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Seoul: IEEE Signal Processing Society, 2024. p. 11181-11185. ISBN: 979-8-3503-4485-1.

MATĚJKA, P.; SILNOVA, A.; SLAVÍČEK, J.; MOŠNER, L.; PLCHOT, O.; KLČO, M.; PENG, J.; STAFYLAKIS, T.; BURGET, L. Description and Analysis of ABC Submission to NIST LRE 2022. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Dublin: International Speech Communication Association, 2023. p. 511-515. ISSN: 1990-9772.

MOŠNER, L.; PLCHOT, O.; PENG, J.; BURGET, L.; ČERNOCKÝ, J. Multi-Channel Speech Separation with Cross-Attention and Beamforming. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Proceedings of Interspeech. Dublin: International Speech Communication Association, 2023. p. 1693-1697. ISSN: 1990-9772.

KAKOUROS, S.; STAFYLAKIS, T.; MOŠNER, L.; BURGET, L. Speech-Based Emotion Recognition with Self-Supervised Models Using Attentive Channel-Wise Correlations and Label Smoothing. In Proceedings of ICASSP 2023. Rhodes Island: IEEE Signal Processing Society, 2023. p. 1-5. ISBN: 978-1-7281-6327-7.

PENG, J.; STAFYLAKIS, T.; GU, R.; PLCHOT, O.; MOŠNER, L.; BURGET, L.; ČERNOCKÝ, J. Parameter-Efficient Transfer Learning of Pre-Trained Transformer Models for Speaker Verification Using Adapters. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Rhodes Island: IEEE Signal Processing Society, 2023. p. 1-5. ISBN: 978-1-7281-6327-7.

LANDINI, F.; DIEZ SÁNCHEZ, M.; LOZANO DÍEZ, A.; BURGET, L. Multi-Speaker and Wide-Band Simulated Conversations as Training Data for End-to-End Neural Diarization. In Proceedings of ICASSP 2023. Rhodes Island: IEEE Signal Processing Society, 2023. p. 1-5. ISBN: 978-1-7281-6327-7.

SILNOVA, A.; SLAVÍČEK, J.; MOŠNER, L.; KLČO, M.; PLCHOT, O.; MATĚJKA, P.; PENG, J.; STAFYLAKIS, T.; BURGET, L. ABC System Description for NIST LRE 2022. Proceedings of NIST LRE 2022 Workshop. Washington DC: National Institute of Standards and Technology, 2023. p. 1-5.

STAFYLAKIS, T.; MOŠNER, L.; KAKOUROS, S.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Extracting speaker and emotion information from self-supervised speech models via channel-wise correlations. In 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings. Doha: IEEE Signal Processing Society, 2023. p. 1136-1143. ISBN: 978-1-6654-7189-3.

ŠMÍDL, L.; KARAFIÁT, M.; ŠVEC, J.; LEHEČKA, J.; MOŠNER, L.; BRUKNER, J.: VJ01010108-V4; SW3 ASR pro akusticky náročná prostředí (ASR for acoustically challenging environments). For download, contact: https://www.fit.vut.cz/person/karafiat/ or http://www.kky.zcu.cz/en/people/smidl-lubos. URL: https://www.fit.vut.cz/research/product/795/. (software)