Publication detail

Eat: Enhanced ASR-TTS for Self-Supervised Speech Recognition

BASKAR, M. BURGET, L. WATANABE, S. ASTUDILLO, R. ČERNOCKÝ, J.

Original Title

Eat: Enhanced ASR-TTS for Self-Supervised Speech Recognition

Type

conference paper

Language

English

Original Abstract

Self-supervised ASR-TTS models suffer in out-of-domain data conditions. Here we propose an enhanced ASR-TTS (EAT) model that incorporates two main features: 1) The ASR→TTS direction is equipped with a language model reward to penalize the ASR hypotheses before forwarding them to TTS. 2) In the TTS→ASR direction, a hyper-parameter is introduced to scale the attention context from synthesized speech before sending it to ASR to handle out-of-domain data. Training strategies and the effectiveness of the EAT model are explored under out-of-domain data conditions. The results show that EAT significantly reduces the performance gap between supervised and self-supervised training, by an absolute 2.6% and 2.7% on Librispeech and BABEL respectively.

Keywords

cycle-consistency, self-supervision, sequence-to-sequence, speech recognition

Authors

BASKAR, M.; BURGET, L.; WATANABE, S.; ASTUDILLO, R.; ČERNOCKÝ, J.

Released

6. 6. 2021

Publisher

IEEE Signal Processing Society

Location

Toronto, Ontario

ISBN

978-1-7281-7605-5

Book

ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Pages from

6753

Pages to

6757

Pages count

5

BibTex

@inproceedings{BUT175793,
  author="BASKAR, M. and BURGET, L. and WATANABE, S. and ASTUDILLO, R. and ČERNOCKÝ, J.",
  title="Eat: Enhanced ASR-TTS for Self-Supervised Speech Recognition",
  booktitle="ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
  year="2021",
  pages="6753--6757",
  publisher="IEEE Signal Processing Society",
  address="Toronto, Ontario",
  doi="10.1109/ICASSP39728.2021.9413375",
  isbn="978-1-7281-7605-5",
  url="https://ieeexplore.ieee.org/document/9413375"
}