PROBLEM SOLVING WITH REINFORCEMENT LEARNING
Gavin A Rummery
July 1995
This thesis is concerned with practical issues surrounding the application of reinforcement learning techniques to tasks that take place in high-dimensional continuous state-space environments. In particular, the extension of on-line updating methods is considered, where the term refers to systems that learn as each experience arrives, rather than storing experiences for use in a separate off-line learning phase. Firstly, the use of alternative update rules in place of standard Q-learning [Watkins] is examined to provide faster convergence rates. Secondly, the use of multi-layer perceptron (MLP) neural networks [Rumelhart] is investigated to provide suitable generalising function approximators. Finally, consideration is given to the combination of Adaptive Heuristic Critic (AHC) methods and Q-learning to produce systems combining the benefits of real-valued actions and discrete switching.
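For reference, the standard one-step Q-learning update of [Watkins] takes the familiar form below, written in conventional notation with learning rate \alpha and discount factor \gamma; the thesis's own notation and connectionist formulation may differ in detail:

    Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \bigl[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \bigr]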
The different update rules examined are based on Q-learning combined with the TD(λ) algorithm [Sutton]. Several new algorithms, including Modified Q-Learning and Summation Q-Learning, are examined, as well as alternatives such as Q(λ) [Peng]. In addition, algorithms are presented for applying these Q-learning updates to train MLPs on-line during trials, as opposed to the backward-replay method used by [Lin], which requires waiting until the end of each trial before updating can occur.
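As a rough illustration of the kind of update rules involved, the sketch below contrasts standard Q-learning with a SARSA-style rule and an eligibility-trace variant, in plain tabular Python. This is an assumption-laden simplification: it treats Modified Q-Learning as the rule now commonly known as SARSA (bootstrapping from the action actually taken rather than the greedy one), uses tables where the thesis uses MLPs, and all names and parameter values are illustrative rather than taken from the thesis.

    import numpy as np

    def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
        # Standard Q-learning: bootstrap from the greedy action in the next state.
        target = r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])

    def modified_q_step(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
        # SARSA-style rule: bootstrap from the action actually taken next.
        target = r + gamma * Q[s_next, a_next]
        Q[s, a] += alpha * (target - Q[s, a])

    def traced_step(Q, E, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95, lam=0.9):
        # SARSA-style rule combined with TD(lambda) via eligibility traces,
        # so every recently visited state-action pair is updated on-line.
        delta = r + gamma * Q[s_next, a_next] - Q[s, a]
        E[s, a] += 1.0          # accumulate eligibility for the visited pair
        Q += alpha * delta * E  # apply the TD error to all eligible pairs
        E *= gamma * lam        # decay traces towards zero

Here Q and E would be NumPy arrays of shape (num_states, num_actions), with the trace array E reset to zero at the start of each trial.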
The performance of the connectionist algorithms is compared on a larger and more complex robot navigation problem. Here a simulated mobile robot is trained to guide itself to a goal position in the presence of obstacles. The robot must rely on limited sensory feedback from its surroundings and make decisions that can be generalised to arbitrary layouts of obstacles. These simulations show that the performance of the on-line learning algorithms is less sensitive to the choice of training parameters than backward-replay, and that the alternative Q-learning rules of Modified Q-Learning and Q(λ) are more robust than standard Q-learning updates.
Finally, a combination of real-valued AHC and Q-learning, called Q-AHC learning, is presented, and the performance of various architectures is compared on the robot navigation problem. The resulting reinforcement learning system combines on-line training, parallel computation, generalising function approximation, and continuous vector actions.
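The abstract does not specify the internal structure of Q-AHC beyond the phrase "real-valued actions and discrete switching". Purely as a hypothetical illustration of how those two ingredients could be combined, the sketch below has Q-learning select, at each step, one of a small bank of modules that each emit a continuous action vector; every name and design choice here is invented for illustration and should not be read as the architecture used in the thesis.

    import numpy as np

    class ActionModule:
        # Stand-in for an AHC-style actor that outputs a real-valued action vector.
        def __init__(self, action_dim):
            self.mean = np.zeros(action_dim)

        def act(self):
            # Exploration by Gaussian perturbation around the module's current action.
            return self.mean + 0.1 * np.random.randn(self.mean.size)

    def choose_module(q_values, epsilon=0.1):
        # Discrete switching: epsilon-greedy selection over the modules' Q-values.
        if np.random.rand() < epsilon:
            return np.random.randint(len(q_values))
        return int(np.argmax(q_values))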
Watkins, C.J.C.H. (1989). Learning from Delayed Rewards, PhD thesis, King's College, Cambridge University, UK.
Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986). Parallel Distributed Processing, Vol.1, MIT Press.
Sutton, R.S. (1988). Learning to predict by the methods of temporal differences, Machine Learning 3:9--44.
Peng, J. and Williams, R.J. (1994). Incremental multi-step Q-learning, in W. Cohen and H. Hirsh (eds), Machine Learning: Proceedings of the Eleventh International Conference (ML94), Morgan Kaufmann, New Brunswick, NJ, USA, pp. 226--232.
Lin, L. (1993b). Reinforcement Learning for Robots Using Neural Networks, PhD thesis, Carnegie Mellon University, Pittsburgh, Pennsylvania.
Barto, A.G., Bradtke, S.J. and Singh, S.P. (1993). Learning to act using real-time dynamic programming, Technical Report CMPSCI 93-02, Department of Computer Science, University of Massachusetts, Amherst MA 01003.