Course detail

Classification and recognition

FIT-KRD, Acad. year: 2017/2018

Estimation of parameters by Maximum Likelihood (ML) and Expectation-Maximization (EM), formulation of the objective function for discriminative training, the Maximum Mutual Information (MMI) criterion, adaptation of GMM models,
transforms of features for recognition, modeling of the feature space using discriminative sub-spaces, factor analysis, kernel techniques, calibration and fusion of classifiers, and applications in the recognition of speech, video and text.
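As a small illustration of the first technique listed above: the ML estimates of a Gaussian's parameters are simply the sample mean and the biased (1/N) sample variance. A minimal NumPy sketch; the function name is illustrative and not part of the course material:

```python
import numpy as np

def gaussian_ml_estimate(x):
    """ML estimates for a 1-D Gaussian: sample mean and biased variance."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    var = ((x - mu) ** 2).mean()  # ML uses 1/N, not the unbiased 1/(N-1)
    return mu, var

mu, var = gaussian_ml_estimate([1.0, 2.0, 3.0, 4.0])
# mu = 2.5, var = 1.25
```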

Language of instruction

Czech

Mode of study

Not applicable.

Learning outcomes of the course unit

The students will get acquainted with advanced classification and recognition techniques and learn how to apply basic methods in the fields of speech recognition, computer graphics and natural language processing.

The students will learn to solve general problems of classification and recognition.

Prerequisites

Basic knowledge of statistics, probability theory, mathematical analysis and algebra.

Co-requisites

Not applicable.

Planned learning activities and teaching methods

Not applicable.

Assessment methods and criteria linked to learning outcomes

Study evaluation is based on marks obtained for specified items. The minimum number of marks required to pass is 50.

Course curriculum

    Syllabus of lectures:
    1. Estimation of parameters of Gaussian probability distribution by Maximum Likelihood (ML)
    2. Estimation of parameters of a Gaussian Mixture Model (GMM) by Expectation-Maximization (EM)
    3. Discriminative training, introduction, formulation of the objective function
    4. Discriminative training with the Maximum Mutual Information (MMI) criterion
    5. Adaptation of GMM models - Maximum A-Posteriori (MAP), Maximum Likelihood Linear Regression (MLLR)
    6. Transforms of features for recognition - basics, Principal Component Analysis (PCA)
    7. Discriminative transforms of features - Linear Discriminant Analysis (LDA) and Heteroscedastic Linear Discriminant Analysis (HLDA)
    8. Modeling of feature space using discriminative sub-spaces - factor analysis
    9. Kernel techniques, SVM
    10. Calibration and fusion of classifiers
    11. Applications in recognition of speech, video and text
    12. Student presentations I
    13. Student presentations II
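Lectures 1-2 above can be sketched in a few lines: one possible EM loop for a one-dimensional GMM. This is a minimal sketch under simplifying assumptions (1-D data, a deterministic quantile-based initialisation chosen here for convenience); all names are illustrative:

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=50):
    """Fit a 1-D GMM with k components by Expectation-Maximization."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # Initialisation: means at data quantiles, global variance, uniform weights.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities gamma[n, k] = P(component k | x_n),
        # computed from log N(x_n | mu_k, var_k) + log w_k.
        log_p = (-0.5 * np.log(2 * np.pi * var)
                 - 0.5 * (x[:, None] - mu) ** 2 / var
                 + np.log(w))
        log_p -= log_p.max(axis=1, keepdims=True)  # stabilise before exp
        gamma = np.exp(log_p)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from soft counts.
        nk = gamma.sum(axis=0)  # may get small for a starved component
        w = nk / n
        mu = (gamma * x[:, None]).sum(axis=0) / nk
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

On two well-separated clusters of points, the re-estimated means converge to the cluster centres within a few iterations.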


    Syllabus - others, projects and individual work of students:
    • Individually assigned projects

Work placements

Not applicable.

Aims

To understand advanced classification and recognition techniques and to learn how to apply the algorithms and methods to problems in speech recognition, computer graphics and natural language processing. To get acquainted with discriminative training and building hybrid systems.

Specification of controlled education, way of implementation and compensation for absences

The evaluation includes the individual project.

Recommended optional programme components

Not applicable.

Prerequisites and corequisites

Not applicable.

Basic literature

Bishop, C. M.: Pattern Recognition and Machine Learning, Springer Science+Business Media, LLC, 2006, ISBN 0-387-31073-8.
Fukunaga, K.: Statistical Pattern Recognition, Morgan Kaufmann, 1990, ISBN 0-122-69851-7.

Recommended reading

Mařík,V., Štěpánková,O., Lažanský, J. a kol.: Umělá inteligence (1-4), ACADEMIA Praha, 1998-2003, ISBN 80-200-1044-0.

Classification of course in study plans

  • Programme CSE-PHD-4 Doctoral

    branch DVI4, year of study 0, summer semester, elective

Type of course unit


Lecture

39 hrs, optional
