Project details

Detection and Identification of Rare Audio-visual Cues - DIRAC

Project period: 01.01.2006 — 31.12.2010

Funding sources

Ministry of Education, Youth and Sports of the Czech Republic - Sixth Framework Programme of the European Community for research, technological development and demonstration activities

- fully funding (2006-01-01 - 2010-12-31)

About the project

Today’s computers can do many amazing things, but there are still many "trivial" yet important tasks they cannot do well. In particular, current information extraction techniques perform well when event types are well represented in the training data, but they often fail when encountering information-rich, unexpected rare events. The DIRAC project addresses this crucial machine weakness and aims to design and develop an environment-adaptive, autonomous, artificial cognitive system that detects, identifies, and classifies potentially threatening rare events from information derived by multiple active, information-seeking audio-visual sensors.

Biological organisms rely for their survival on detecting and identifying new events. DIRAC therefore combines expertise in the physiology of the mammalian auditory and visual cortex with expertise in audio-visual recognition engineering, aiming to move audio-visual machine recognition from the classical signal processing/pattern classification paradigm to human-like information extraction. Among other things, this means moving from interpreting all incoming data to reliably rejecting non-informative inputs, from passively acquiring a single incoming stream to actively searching for the most relevant information in multiple streams, and from a system optimized for one static environment to one that autonomously adapts to new, changing environments, thus laying the foundation for a new generation of efficient cognitive information processing technologies.
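The idea of rejecting non-informative inputs instead of interpreting every input can be illustrated with a minimal confidence-thresholding sketch. This is illustrative only, not part of the project's actual system; the function names and the threshold value are invented for the example:

```python
import math

def softmax(scores):
    """Turn raw classifier scores into a posterior distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_or_reject(scores, threshold=0.7):
    """Return the winning class index, or None when the posterior is
    too flat, i.e. the input is judged non-informative or novel."""
    post = softmax(scores)
    best = max(range(len(post)), key=post.__getitem__)
    return best if post[best] >= threshold else None

# A peaked score vector is accepted; a flat one is rejected.
print(classify_or_reject([4.0, 0.5, 0.2]))  # clear winner: class 0
print(classify_or_reject([1.0, 0.9, 1.1]))  # ambiguous: None
```

Real systems in this area use far richer evidence (e.g. disagreement between strongly and weakly constrained recognizers, as in the OOV-detection results below), but the accept-or-reject decision boundary is the common core.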

DIRAC is an EU Integrated Project (IST) of the 6th Framework Programme. It runs for five years, from January 2006 until December 2010.

The project partners come from around the world: Idiap Research Institute (coordinator), Eidgenössische Technische Hochschule Zürich (CH), The Hebrew University of Jerusalem (IL), Czech Technical University (CZ), Carl von Ossietzky Universität Oldenburg (DE), Leibniz Institute for Neurobiology (DE), Katholieke Universiteit Leuven (BE), and Oregon Health and Science University, OGI School of Science and Engineering (USA).

Description (originally in Czech)
The project deals with the detection and identification of rare audio-visual cues

Keywords
audio, video, detection

Identifier

027787

Original language

English

Investigators

Departments

Department of Computer Graphics and Multimedia
- co-beneficiary (01.01.2006 - 31.12.2010)

Results

BURGET, L.; SCHWARZ, P.; MATĚJKA, P.; HANNEMANN, M.; RASTROW, A.; WHITE, C.; KHUDANPUR, S.; HEŘMANSKÝ, H.; ČERNOCKÝ, J. Combination of strongly and weakly constrained recognizers for reliable detection of OOVs. Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Las Vegas: IEEE Signal Processing Society, 2008. p. 1-4. ISBN: 1-4244-1484-9.

KOMBRINK, S.; BURGET, L.; MATĚJKA, P.; KARAFIÁT, M.; HEŘMANSKÝ, H. Posterior-based Out of Vocabulary Word Detection in Telephone Speech. Proc. Interspeech 2009. Proceedings of Interspeech. Brighton: International Speech Communication Association, 2009. p. 80-83. ISSN: 1990-9772.

BRÜMMER, N.; STRASHEIM, A.; HUBEIKA, V.; MATĚJKA, P.; BURGET, L.; GLEMBEK, O. Discriminative Acoustic Language Recognition via Channel-Compensated GMM Statistics. Proc. Interspeech 2009. Proceedings of Interspeech. Brighton: International Speech Communication Association, 2009. p. 2187-2190. ISBN: 978-1-61567-692-7. ISSN: 1990-9772.

HANNEMANN, M.; KOMBRINK, S.; KARAFIÁT, M.; BURGET, L. Similarity Scoring for Recognizing Repeated Out-of-Vocabulary Words. Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010). Proceedings of Interspeech. Makuhari, Chiba: International Speech Communication Association, 2010. p. 897-900. ISBN: 978-1-61782-123-3. ISSN: 1990-9772.

KOMBRINK, S.; HANNEMANN, M.; BURGET, L.; HEŘMANSKÝ, H. Recovery of Rare Words in Lecture Speech. Proc. Text, Speech and Dialogue 2010. Lecture Notes in Computer Science. Brno: Springer Verlag, 2010. p. 330-337. ISBN: 978-3-642-15759-2. ISSN: 0302-9743.

KOMBRINK, S.; MIKOLOV, T. Recurrent Neural Network Language Modeling Applied to the Brno AMI/AMIDA 2009 Meeting Recognizer Setup. Proceedings of the 17th Conference STUDENT EEICT 2011. Volume 3. Brno: Brno University of Technology, 2011. p. 527-531. ISBN: 978-80-214-4273-3.

ČERNOCKÝ, J.; SZŐKE, I.; HANNEMANN, M.; KOMBRINK, S. Word-subword based keyword spotting with implications in OOV detection. Pacific Grove: Institute of Electrical and Electronics Engineers, 2010. p. 0-0.

MIKOLOV, T.; KARAFIÁT, M.; BURGET, L.; ČERNOCKÝ, J.; KHUDANPUR, S. Recurrent neural network based language model. Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010). Proceedings of Interspeech. Makuhari, Chiba: International Speech Communication Association, 2010. p. 1045-1048. ISBN: 978-1-61782-123-3. ISSN: 1990-9772.

DEORAS, A.; MIKOLOV, T.; KOMBRINK, S.; KARAFIÁT, M.; KHUDANPUR, S. Variational Approximation of Long-span Language Models for LVCSR. Proceedings of the 2011 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011. Praha: IEEE Signal Processing Society, 2011. p. 5532-5535. ISBN: 978-1-4577-0537-3.

MIKOLOV, T.; KOMBRINK, S.; BURGET, L.; ČERNOCKÝ, J.; KHUDANPUR, S. Extensions of Recurrent Neural Network Language Model. Proceedings of the 2011 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011. Praha: IEEE Signal Processing Society, 2011. p. 5528-5531. ISBN: 978-1-4577-0537-3.

BURGET, L.; BRÜMMER, N.; REYNOLDS, D.; KENNY, P.; PELECANOS, J.; VOGT, R.; CASTALDO, F.; DEHAK, N.; DEHAK, R.; GLEMBEK, O.; KARAM, Z.; NOECKER, J.; NA, H.; COSTIN, C.; HUBEIKA, V.; KAJAREKAR, S.; SCHEFFER, N.; ČERNOCKÝ, J. Robust Speaker Recognition Over Varying Channels. Baltimore: Johns Hopkins University, 2008. p. 0-0.

ŽIŽKA, J.; FAPŠO, M.; SZŐKE, I.: VUT-SW-Search; Web-based lecture browser. http://www.superlectures.com/ (http://www.superlectures.com/odyssey/) http://www.prednasky.com/. URL: https://www.fit.vut.cz/research/product/193/. (software)
