Publication detail
VĚCHET, S., KREJSA, J., BŘEZINA, T.
Original title
Using Modified Q-learning With LWR for Inverted Pendulum Control
Type
conference paper indexed in WoS or Scopus
Language
English
Original abstract
Locally Weighted Regression (LWR) is a class of approximation methods based on local models. In this paper we demonstrate the use of LWR together with Q-learning for control tasks. Q-learning is one of the most popular and effective algorithms in the reinforcement learning family; it learns from rewards and penalties. The most common representation of the Q-function is a table, which must be replaced by a suitable approximator if continuous states are to be used. LWR is one such approximator. To get a first impression of applying LWR together with modified Q-learning to a control task, a simple model of an inverted pendulum was created and the proposed method was applied to it.
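The abstract describes replacing the tabular Q-function with an LWR approximator so that continuous pendulum states can be handled. The Python sketch below illustrates that general idea only; it is not the paper's implementation, and the pendulum dynamics, reward, torque levels, kernel bandwidth, memory size and learning constants are all assumed for illustration.

# Hypothetical sketch: Q-learning with an LWR-approximated Q-function on a
# simple inverted pendulum. All constants below are illustrative assumptions,
# not values taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = [-2.0, 0.0, 2.0]                 # assumed torque levels
GAMMA, BANDWIDTH, MAX_MEM = 0.95, 0.3, 800

def lwr_predict(memory, query):
    """Locally weighted linear regression: fit a local line around `query`."""
    if len(memory) < 5:
        return 0.0
    X = np.array([m[0] for m in memory])   # stored states (theta, theta_dot)
    y = np.array([m[1] for m in memory])   # stored Q targets
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * BANDWIDTH ** 2))
    A = np.hstack([X, np.ones((len(X), 1))]) * np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return float(np.append(query, 1.0) @ beta)

def pendulum_step(state, torque, dt=0.05):
    """Assumed pendulum dynamics (unit mass and length, light damping)."""
    theta, theta_dot = state
    theta_ddot = 9.81 * np.sin(theta) - 0.1 * theta_dot + torque
    theta_dot += theta_ddot * dt
    theta = ((theta + theta_dot * dt + np.pi) % (2 * np.pi)) - np.pi
    reward = np.cos(theta)                 # +1 upright, -1 hanging down
    return np.array([theta, theta_dot]), reward

memories = {a: [] for a in range(len(ACTIONS))}   # one LWR memory per action

for episode in range(30):
    state = np.array([np.pi, 0.0])         # start hanging down
    for step in range(150):
        # epsilon-greedy selection over the LWR-approximated Q values
        if rng.random() < 0.2:
            a = rng.integers(len(ACTIONS))
        else:
            a = int(np.argmax([lwr_predict(memories[i], state)
                               for i in range(len(ACTIONS))]))
        next_state, reward = pendulum_step(state, ACTIONS[a])
        # TD target uses the LWR prediction in place of a table lookup
        target = reward + GAMMA * max(lwr_predict(memories[i], next_state)
                                      for i in range(len(ACTIONS)))
        memories[a].append((state.copy(), float(target)))
        if len(memories[a]) > MAX_MEM:
            memories[a].pop(0)
        state = next_state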
Keywords
Q-Learning, LWR, Continuous Space
Authors
Stanislav Věchet, Jiří Krejsa, Tomáš Březina
RIV year
2003
Published
24 March 2003
Publisher
Institute of Mechanics of Solids, Brno University of Technology
Place
Brno
ISBN
80-214-2312-9
Book
Mechatronics, Robotics and Biomechanics 2003
Pages from
91
Pages to
92
Page count
2
BibTeX
@inproceedings{BUT9715,
  author="Stanislav {Věchet} and Jiří {Krejsa} and Tomáš {Březina}",
  title="Using Modified Q-learning With LWR for Inverted Pendulum Control",
  booktitle="Mechatronics, Robotics and Biomechanics 2003",
  year="2003",
  pages="91--92",
  publisher="Institute of Mechanics of Solids, Brno University of Technology",
  address="Brno",
  isbn="80-214-2312-9"
}