Course detail

Optimal Control and Identification

FIT-ORIDA
Acad. year: 2020/2021

The "Optimal Control and Identification" is suitable for students of IT and related fields and its goal is to explain the principles of automatic control in a suitable way. The course does not intend to train specialists in controller design but rather to exlpain to the graduates of the course what control means and how to approach various tasks of automatic control.

Doctoral state exam - topics:

  1. Tasks of optimal control; static and dynamic optimization of deterministic, stochastic and adaptive control.
  2. Dynamic optimization, forms of the loss function, boundary conditions, the Euler-Lagrange equation.
  3. Constraints on the control in the form of inequalities and Pontryagin's minimum principle.
  4. Dynamic programming, design of loss functions, the Hamilton-Jacobi-Bellman equation.
  5. Linear controller, design of the loss function, the Riccati equation (the key equations of topics 2-5 are sketched after this list).
  6. Review of the characteristics of random processes: mean values, variance, correlation, covariance, the Wiener-Khinchin relations, Parseval's theorem, white and "colored" noise, transformation of random signals by a linear system.
  7. Overview of Bayesian estimation, loss and risk functions, the general principle of dynamic filtering.
  8. The linear dynamic (Kalman) filter, its design, conversion to a discrete filter, generalizations of the dynamic filter, the Wiener filter.
  9. Parallel identification of the system and the trajectory using a generalized state vector, the linearized Kalman filter, construction of selected non-linear filters.
  10. Stochastic control, the linear-quadratic-Gaussian (LQG) problem, continuous and discrete stochastic state regulators and servomechanisms.
  11. Adaptive systems, parallel identification of the state, parameters and control, the most frequent structures of adaptive systems.
  12. Classical control methods.
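
As a brief illustration of topics 2-5, the key equations are sketched below in standard textbook notation; the symbols (state x, control u, costate λ, value function V, cost matrices Q, R) are common conventions and are not taken from the course materials.

    % Euler-Lagrange equation for a functional J = \int L(x, \dot{x}, t)\, dt:
    \frac{\partial L}{\partial x} - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}} = 0

    % Pontryagin's minimum principle: the optimal control minimizes the Hamiltonian
    H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\top} f(x, u, t), \qquad
    u^{*}(t) = \arg\min_{u \in U} H(x^{*}(t), u, \lambda(t), t)

    % Hamilton-Jacobi-Bellman equation for the value function V(x, t):
    -\frac{\partial V}{\partial t} = \min_{u \in U}\Big[ L(x, u, t) + \frac{\partial V}{\partial x}^{\top} f(x, u, t) \Big]

    % Continuous-time algebraic Riccati equation for the LQR problem
    % \dot{x} = A x + B u with cost \int (x^{\top} Q x + u^{\top} R u)\, dt; optimal gain K = R^{-1} B^{\top} P:
    A^{\top} P + P A - P B R^{-1} B^{\top} P + Q = 0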

Language of instruction

Czech

Mode of study

Not applicable.

Learning outcomes of the course unit

Knowledge of the tasks of optimal control and of static and dynamic optimization of deterministic, stochastic and adaptive control. Introductory knowledge of dynamic optimization, forms of the loss function, boundary conditions and the Euler-Lagrange equation. Knowledge of constraints on the control in the form of inequalities and of Pontryagin's minimum principle. Understanding of dynamic programming, the design of loss functions and the Hamilton-Jacobi-Bellman equation. Introductory knowledge of the linear controller, the design of the loss function and the Riccati equation. Review of the characteristics of random processes: mean values, variance, correlation, covariance, the Wiener-Khinchin relations, Parseval's theorem, white and "colored" noise, and the transformation of random signals by a linear system. Overview of Bayesian estimation, loss and risk functions, and the general principle of dynamic filtering. Knowledge of the linear dynamic (Kalman) filter, its design, conversion to a discrete filter, generalizations of the dynamic filter, and the Wiener filter. Knowledge of the parallel identification of the system and the trajectory using a generalized state vector, the linearized Kalman filter, and the construction of selected non-linear filters. Overview of stochastic control, the linear-quadratic-Gaussian (LQG) problem, and continuous and discrete stochastic state regulators and servomechanisms. Knowledge of adaptive systems, parallel identification of the state, parameters and control, and the most frequent structures of adaptive systems.
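
To make the filtering part of these outcomes concrete, the sketch below shows one predict/update step of a discrete linear (Kalman) filter. It is a minimal illustration only; the matrix names (F, H, Q, R) and the function kalman_step are illustrative, standard-notation assumptions, not part of the course materials.

    # Minimal sketch of one step of a discrete linear (Kalman) filter for the model
    # x_{k+1} = F x_k + w_k,  z_k = H x_k + v_k,  with w ~ N(0, Q) and v ~ N(0, R).
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict + update step; returns the new state estimate and covariance."""
        # Predict: propagate the estimate and its covariance through the model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: correct the prediction with the new measurement z.
        y = z - H @ x_pred                   # innovation
        S = H @ P_pred @ H.T + R             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new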

Prerequisites

Basic knowledge of signal processing and mathematical statistics.

Co-requisites

Not applicable.

Planned learning activities and teaching methods

Not applicable.

Assessment methods and criteria linked to learning outcomes

Presentations in the form of a seminar.

Course curriculum

Not applicable.

Work placements

Not applicable.

Aims

The goal of the course is to explain the principles of automatic control in a suitable form. The tasks of optimal control will be formulated in a general way as optimization problems, and stochastic methods of control and identification are explained in a similar manner. Classical control methods will be treated as particular cases of these general tasks, solved with the contemporary mathematical apparatus and tools.

Specification of controlled education, way of implementation and compensation for absences

Oral exam.

Recommended optional programme components

Not applicable.

Prerequisites and corequisites

Not applicable.

Basic literature

Not applicable.

Recommended reading

Åström, K. J., Wittenmark, B.: Computer Controlled Systems. Prentice-Hall, 1990.
Bertsekas, D. P.: Dynamic Programming and Optimal Control. Athena Scientific, 4th ed., 2017.
Bertsekas, D. P.: Reinforcement Learning and Optimal Control. Athena Scientific, 1st ed., 2019.
Lee, E. B., Markus, L.: Foundations of Optimal Control Theory. Wiley, New York, 1967.
Fleming, W. H., Rishel, R. W.: Deterministic and Stochastic Optimal Control. Springer, New York, 1975; 2nd ed., 2001.
Lewis, F. L., Vrabie, D., Syrmos, V. L.: Optimal Control. Wiley, 3rd ed., 2012.
Sage, A. P.: Estimation Theory with Application to Communication and Control. New York, 1972.
Sage, A. P.: Optimum Systems Control. New Jersey, 1982.

Classification of course in study plans

  • Programme CSE-PHD-4 Doctoral

    branch DVI4, 0 year of study, winter semester, elective

Type of course unit

 

Lecture

26 hrs, optional

Teacher / Lecturer

Syllabus


An indicative outline of the course is given below. The topics of the lectures will be adjusted according to the initial knowledge of the students. The course is expected to conclude with seminars and individual presentations.

  1. Tasks of optimal control; static and dynamic optimization of deterministic, stochastic and adaptive control.
  2. Dynamic optimization, forms of the loss function, boundary conditions, the Euler-Lagrange equation.
  3. Constraints on the control in the form of inequalities and Pontryagin's minimum principle.
  4. Dynamic programming, design of loss functions, the Hamilton-Jacobi-Bellman equation.
  5. Linear controller, design of the loss function, the Riccati equation (a minimal numerical sketch of topics 4 and 5 follows after this list).
  6. Review of the characteristics of random processes: mean values, variance, correlation, covariance, the Wiener-Khinchin relations, Parseval's theorem, white and "colored" noise, transformation of random signals by a linear system.
  7. Overview of Bayesian estimation, loss and risk functions, the general principle of dynamic filtering.
  8. The linear dynamic (Kalman) filter, its design, conversion to a discrete filter, generalizations of the dynamic filter, the Wiener filter.
  9. Parallel identification of the system and the trajectory using a generalized state vector, the linearized Kalman filter, construction of selected non-linear filters.
  10. Stochastic control, the linear-quadratic-Gaussian (LQG) problem, continuous and discrete stochastic state regulators and servomechanisms.
  11. Adaptive systems, parallel identification of the state, parameters and control, the most frequent structures of adaptive systems.
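
As a small numerical complement to topics 4 and 5, the sketch below computes finite-horizon discrete LQR feedback gains by the backward Riccati recursion of dynamic programming. The matrices A, B, Q, R and the function lqr_gains are illustrative, standard-notation assumptions, not taken from the course materials.

    # Minimal sketch: finite-horizon discrete LQR solved by dynamic programming
    # (backward Riccati recursion) for x_{k+1} = A x_k + B u_k and cost
    # sum_k (x_k' Q x_k + u_k' R u_k) with terminal cost x_N' Q x_N.
    import numpy as np

    def lqr_gains(A, B, Q, R, N):
        """Return feedback gains K_0..K_{N-1}; the control law is u_k = -K_k x_k."""
        P = Q.copy()   # value-function matrix at the terminal step
        gains = []
        for _ in range(N):
            # One backward step: K = (R + B'PB)^{-1} B'PA,  P = Q + A'P(A - BK).
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
            gains.append(K)
        gains.reverse()   # gains[k] now corresponds to time step k
        return gains

Under the standard stabilizability and detectability assumptions, the gains converge for long horizons to the stationary solution of the Riccati equation mentioned in topic 5.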

Project

13 hrs, compulsory

Teacher / Lecturer

Syllabus

Individual projects whose results will be presented in the form of a seminar at the end of the course.

Guided consultation in combined form of studies

26 hrs, optional

Teacher / Lecturer