REINFORCEMENT LEARNING FOR MULTI-LINKED MANIPULATOR CONTROL
Chen K. Tham & Richard W. Prager
June 1992
We present a trajectory planning and obstacle avoidance method which uses Reinforcement Learning to learn the appropriate real-valued torques to apply at each joint of a simulated two-link manipulator in order to move the end-effector to a desired destination in the workspace. The inputs to the controller are the joint positions and velocities, which are fed directly into a Cerebellar Model Arithmetic Computer (CMAC) (Albus, 1975). In each state, the expected reward and the appropriate torque for each joint are learnt through self-experimentation, using a combination of the Temporal Difference (TD) technique (Sutton, 1987) and stochastic hill-climbing (Williams, 1988). Actions which cause the manipulator to reach the desired destination are rewarded, whereas actions which lead to collisions with either joint limits or obstacles are punished by an amount proportional to the velocity before the collision. After training, the manipulator is able to move along smooth, collision-free paths from different start positions in the workspace to the destination.
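
To make the learning scheme concrete, the sketch below shows one plausible reading of the method described in the abstract: a CMAC coarse-codes the joint positions and velocities into a set of active tiles, a TD(0) critic learns the expected reward over those tiles, and a REINFORCE-style stochastic hill-climb adapts a real-valued mean torque per joint by perturbing it with Gaussian noise and reinforcing perturbations that improve on the predicted reward. This is not the authors' code; the tiling parameters, learning rates, noise scale, and the environment interface are all assumptions made for illustration.

    # Minimal sketch of a CMAC actor-critic with TD(0) and stochastic
    # hill-climbing, assuming joint positions/velocities scaled to [0, 1).
    import numpy as np

    class CMACActorCritic:
        def __init__(self, n_joints=2, n_tilings=8, bins=10, memory=4096,
                     alpha_v=0.1, alpha_u=0.01, gamma=0.95, sigma=0.5, seed=0):
            self.rng = np.random.default_rng(seed)
            self.n_tilings, self.bins, self.memory = n_tilings, bins, memory
            self.alpha_v = alpha_v / n_tilings   # critic step size per tiling
            self.alpha_u = alpha_u / n_tilings   # actor step size per tiling
            self.gamma, self.sigma = gamma, sigma
            self.v = np.zeros(memory)                # expected-reward weights
            self.u = np.zeros((memory, n_joints))    # mean-torque weights

        def _tiles(self, state):
            # Hash each offset tiling of the state into the CMAC memory.
            idx = []
            for t in range(self.n_tilings):
                q = np.floor((np.asarray(state) + t / self.n_tilings) * self.bins)
                idx.append(hash((t,) + tuple(q.astype(int))) % self.memory)
            return np.array(idx)

        def value(self, state):
            return self.v[self._tiles(state)].sum()

        def act(self, state):
            # Mean torque from the CMAC plus exploratory Gaussian noise.
            tiles = self._tiles(state)
            mean = self.u[tiles].sum(axis=0)
            noise = self.rng.normal(0.0, self.sigma, size=mean.shape)
            return mean + noise, tiles, noise

        def update(self, tiles, noise, reward, state_next, done):
            # TD(0) error: r + gamma * V(s') - V(s).
            v_next = 0.0 if done else self.value(state_next)
            delta = reward + self.gamma * v_next - self.v[tiles].sum()
            # np.add.at handles any duplicate tile indices correctly.
            np.add.at(self.v, tiles, self.alpha_v * delta)
            # Stochastic hill-climb: shift the mean torque toward the noise
            # direction when the outcome beat the prediction, away otherwise.
            np.add.at(self.u, tiles, self.alpha_u * delta * noise)

    # One training step might look like this (env is a hypothetical simulator):
    #   torque, tiles, noise = agent.act(state)
    #   state_next, reward, done = env.step(torque)
    #   agent.update(tiles, noise, reward, state_next, done)

In this reading, the environment would supply the reward signal described in the abstract: a positive reward on reaching the destination, and a penalty proportional to the joint velocities on collision with an obstacle or a joint limit.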