Publication detail
VĚCHET, S., KREJSA, J., BŘEZINA, T.
Original Title
Using Modified Q-learning With LWR for Inverted Pendulum Control
Type
conference paper
Language
English
Original Abstract
Locally weighted regression (LWR) is a class of approximation methods based on local models. In this paper we demonstrate the use of LWR together with Q-learning for control tasks. Q-learning is among the most effective and popular algorithms in the reinforcement learning family; it works with rewards and penalties. The most common representation of the Q-function is a table, which must be replaced by a suitable approximator if continuous states are to be used, and LWR is one possible approximator. To get a first impression of applying LWR together with modified Q-learning to a control task, a simple model of an inverted pendulum was created and the proposed method was applied to it.
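The abstract only outlines the scheme, and no source code is part of this record. The sketch below is an illustrative reconstruction under assumed details (the pendulum dynamics in pendulum_step, the Gaussian kernel bandwidth, the three discrete torques in ACTIONS, the reward, and the learning parameters are all assumptions, and lwr_predict, q_value and memory are hypothetical helper names): the tabular Q-function is replaced by locally weighted regression over stored (state, Q-target) samples, one sample set per action, and the standard Q-learning update writes new targets into that memory.

import numpy as np

# Illustrative sketch only (not the authors' code): Q-learning in which the
# tabular Q-function is replaced by locally weighted regression (LWR) over
# stored (state, Q-target) samples, applied to a simple inverted pendulum.

def lwr_predict(query, X, y, bandwidth=0.5):
    # Fit a weighted linear model around the query point (Gaussian kernel weights).
    if len(X) == 0:
        return 0.0
    X, y = np.asarray(X), np.asarray(y)
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    Xb = np.hstack([X, np.ones((len(X), 1))])   # add bias column
    XtW = Xb.T * w                              # equivalent to Xb^T diag(w)
    beta, *_ = np.linalg.lstsq(XtW @ Xb, XtW @ y, rcond=None)
    return float(np.append(query, 1.0) @ beta)

def pendulum_step(theta, omega, torque, dt=0.02, g=9.81, l=1.0, m=1.0):
    # One Euler step of a simple inverted-pendulum model (assumed dynamics).
    alpha = (g / l) * np.sin(theta) + torque / (m * l ** 2)
    omega = omega + alpha * dt
    return theta + omega * dt, omega

ACTIONS = (-2.0, 0.0, 2.0)                # discrete torques; states stay continuous
memory = {a: ([], []) for a in ACTIONS}   # per-action (states, Q-targets) for LWR

def q_value(s, a):
    X, y = memory[a]
    return lwr_predict(s, X, y)

lr, gamma, eps = 0.5, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(20):
    theta, omega = rng.uniform(-0.1, 0.1), 0.0
    for step in range(100):
        s = np.array([theta, omega])
        # epsilon-greedy action selection over the LWR-approximated Q-function
        if rng.random() < eps:
            a = ACTIONS[rng.integers(len(ACTIONS))]
        else:
            a = max(ACTIONS, key=lambda act: q_value(s, act))
        theta, omega = pendulum_step(theta, omega, a)
        s_next = np.array([theta, omega])
        reward = -abs(theta)              # penalty grows as the pole leaves upright
        target = reward + gamma * max(q_value(s_next, act) for act in ACTIONS)
        # standard Q-learning update, stored as a new training sample for LWR
        memory[a][0].append(s)
        memory[a][1].append(q_value(s, a) + lr * (target - q_value(s, a)))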
Keywords
Q-Learning, LWR, Continuous Space
Authors
Stanislav Věchet, Jiří Krejsa, Tomáš Březina
RIV year
2003
Released
24. 3. 2003
Publisher
Institute of Mechanics of Solids, Brno University of Technology
Location
Brno
ISBN
80-214-2312-9
Book
Mechatronics, Robotics and Biomechanics 2003
Pages from
91
Pages to
92
Pages count
2
BibTex
@inproceedings{BUT9715,
  author    = "Stanislav {Věchet} and Jiří {Krejsa} and Tomáš {Březina}",
  title     = "Using Modified Q-learning With LWR for Inverted Pendulum Control",
  booktitle = "Mechatronics, Robotics and Biomechanics 2003",
  year      = "2003",
  pages     = "91--92",
  publisher = "Institute of Mechanics of Solids, Brno University of Technology",
  address   = "Brno",
  isbn      = "80-214-2312-9"
}