Publication detail
MIKOLOV, T.; DEORAS, A.; POVEY, D.; BURGET, L.; ČERNOCKÝ, J.
Original Title
Strategies for Training Large Scale Neural Network Language Models
Type
article in a collection out of WoS and Scopus
Language
English
Original Abstract
Techniques for effective training of recurrent neural network based language models are described, and new state-of-the-art results on a standard speech recognition task are reported.
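The abstract refers to recurrent neural network language models. As context, a minimal sketch of an Elman-style RNN language model follows; the architecture, sizes, and toy data here are illustrative assumptions, not the training strategies described in the paper itself.

```python
import numpy as np

# Illustrative Elman-style RNN language model (assumed toy setup,
# not the paper's configuration): predict the next token id from
# the current one via a recurrent hidden state.
rng = np.random.default_rng(0)
V, H = 5, 8                          # vocabulary size, hidden size (assumptions)
Wxh = rng.normal(0, 0.1, (H, V))     # input-to-hidden weights
Whh = rng.normal(0, 0.1, (H, H))     # recurrent weights
Why = rng.normal(0, 0.1, (V, H))     # hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(h, x_id):
    """One time step: update hidden state, return next-word distribution."""
    x = np.zeros(V)
    x[x_id] = 1.0
    h = np.tanh(Wxh @ x + Whh @ h)
    return h, softmax(Why @ h)

# Perplexity of a toy token sequence under the untrained model;
# with small random weights it should sit near the uniform value V.
tokens = [0, 2, 1, 3, 4]
h, logp = np.zeros(H), 0.0
for cur, nxt in zip(tokens[:-1], tokens[1:]):
    h, p = step(h, cur)
    logp += np.log(p[nxt])
ppl = np.exp(-logp / (len(tokens) - 1))
```

Training such a model means minimizing this cross-entropy (equivalently, perplexity) over a corpus, typically via backpropagation through time; scaling that process up is the subject of the paper.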
Keywords
recurrent neural network, language model, speech recognition, maximum entropy
Authors
MIKOLOV, T.; DEORAS, A.; POVEY, D.; BURGET, L.; ČERNOCKÝ, J.
RIV year
2011
Released
11 December 2011
Publisher
IEEE Signal Processing Society
Location
Hilton Waikoloa Village, Big Island, Hawaii
ISBN
978-1-4673-0366-8
Book
Proceedings of ASRU 2011
Pages from
196
Pages to
201
Pages count
6
URL
http://www.fit.vutbr.cz/research/groups/speech/publi/2011/mikolov_asru2011_00196.pdf
BibTex
@inproceedings{BUT76453,
  author    = "Tomáš {Mikolov} and Anoop {Deoras} and Daniel {Povey} and Lukáš {Burget} and Jan {Černocký}",
  title     = "Strategies for Training Large Scale Neural Network Language Models",
  booktitle = "Proceedings of ASRU 2011",
  year      = "2011",
  pages     = "196--201",
  publisher = "IEEE Signal Processing Society",
  address   = "Hilton Waikoloa Village, Big Island, Hawaii",
  isbn      = "978-1-4673-0366-8",
  url       = "http://www.fit.vutbr.cz/research/groups/speech/publi/2011/mikolov_asru2011_00196.pdf"
}