Publication detail

Self-supervised Pre-training of Text Recognizers

KIŠŠ, M.; HRADIŠ, M.

Original Title

Self-supervised Pre-training of Text Recognizers

Type

conference paper

Language

English

Original Abstract

In this paper, we investigate self-supervised pre-training methods for document text recognition. Large unlabeled datasets can nowadays be collected for many research tasks, including text recognition, but annotating them is costly, which motivates methods that exploit unlabeled data. We study self-supervised pre-training methods based on masked label prediction using three different approaches: Feature Quantization, VQ-VAE, and Post-Quantized AE. We also investigate joint-embedding approaches with the VICReg and NT-Xent objectives, for which we propose an image shifting technique that prevents a model collapse in which the model relies solely on positional encoding and ignores the input image entirely. We perform our experiments on a historical handwritten dataset (Bentham) and a historical printed dataset, mainly to investigate the benefits of the self-supervised pre-training techniques under different amounts of annotated target-domain data. We use transfer learning as a strong baseline. The evaluation shows that self-supervised pre-training on data from the target domain is very effective, but it struggles to outperform transfer learning from closely related domains. This paper is among the first studies to explore self-supervised pre-training in document text recognition, and we believe it will become a cornerstone for future research in this area. We made our implementation of the investigated methods publicly available at https://github.com/DCGM/pero-pretraining.
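For readers unfamiliar with the NT-Xent objective mentioned in the abstract, the following is a minimal PyTorch sketch of that contrastive loss as defined in SimCLR. It is an illustration only, not the authors' implementation (which is available in the repository linked above); the function name, batch size, and temperature value below are our own illustrative choices.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.1):
        """NT-Xent (normalized temperature-scaled cross-entropy) loss.

        z1 and z2 are (n, d) batches of embeddings; row i of z1 and
        row i of z2 come from two views of the same input (a positive pair).
        """
        n = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d), unit norm
        sim = z @ z.t() / temperature                       # scaled cosine similarities
        sim.fill_diagonal_(float("-inf"))                   # a sample is never its own positive
        # The positive of row i (from z1) sits at row n + i (from z2) and vice versa.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Example: embeddings of two augmented views of the same batch of line images.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    loss = nt_xent_loss(z1, z2)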

Keywords

Self-supervised learning, Text Recognition, Pre-training, OCR, HTR

Authors

KIŠŠ, M.; HRADIŠ, M.

Released

30 August 2024

Publisher

Springer Nature Switzerland AG

Location

Athens

ISBN

978-3-031-70545-8

Book

Barney Smith, E.H., Liwicki, M., Peng, L. (eds) Document Analysis and Recognition - ICDAR 2024

Edition

Lecture Notes in Computer Science

Pages from

218

Pages to

235

Pages count

18

URL

https://link.springer.com/chapter/10.1007/978-3-031-70546-5_13

BibTeX

@inproceedings{BUT193312,
  author="Martin {Kišš} and Michal {Hradiš}",
  title="Self-supervised Pre-training of Text Recognizers",
  booktitle="Barney Smith, E.H., Liwicki, M., Peng, L. (eds) Document Analysis and Recognition - ICDAR 2024",
  year="2024",
  series="Lecture Notes in Computer Science",
  volume="14807",
  pages="218--235",
  publisher="Springer Nature Switzerland AG",
  address="Atény",
  doi="10.1007/978-3-031-70546-5\{_}13",
  isbn="978-3-031-70545-8",
  url="https://link.springer.com/chapter/10.1007/978-3-031-70546-5_13"
}