Visually Guided Robots
Visual servoing
This project takes advantage of the geometric structure of the
Lie algebra of the group of affine transformations. A novel approach to
visual servoing exploits a single robot-motion-to-image-deformation
Jacobian, computed once near the target location, to guide the robot
over a large range of perturbations. This framework has recently been
extended to produce a robust 3D model-based tracking system which
can track articulated objects live from video, even in the presence
of occlusion.
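The underlying control law is that of classical Jacobian-based visual
servoing. The following Python/NumPy sketch illustrates the idea of
reusing a single Jacobian, estimated once near the target, to drive the
image error to zero (all names here are illustrative, not from this
project):

import numpy as np

def servo_step(J, s, s_star, gain=0.1):
    """One step of Jacobian-based visual servoing.

    J      -- m x n Jacobian mapping robot motion to image changes,
              estimated once near the target location
    s      -- current image measurement (length m)
    s_star -- stored target measurement (length m)
    Returns a robot velocity command (length n).
    """
    error = s - s_star
    # The least-squares inverse of the fixed Jacobian drives the
    # image error towards zero for small perturbations about the target.
    return -gain * np.linalg.pinv(J) @ error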
The following MPEG videos show a robot operating under visual guidance
(visual servoing) from a real-time tracking system.
Closed loop control (~6MB)
Following a trajectory (~15MB)
This work contributes towards an EPSRC project
on invariant signatures for visual servoing and an
ESPRIT project on the visual guidance of robots.
2½D Visual Servoing from Planar Contours
The aim of this research is to design a complete system for segmenting,
matching and tracking planar contours for use in visual servoing. Our
system can be used with contours of arbitrary shape and without
any prior knowledge of their models. The system is first shown the
target view. A selected contour is automatically extracted and its
image shape is stored. The robot and object are then moved and the
system automatically identifies the target. Matching is performed
jointly with estimation of the homography between the two views of
the contour. A 2½D visual servoing technique is then
used to reposition the end-effector of a robot at the target position
relative to the planar contour. The system has been successfully
tested on several contours with very complex shapes such as leaves,
keys and the coastal outlines of islands.
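For illustration, a generic robust homography fit between two
point-sampled views of a contour might look as follows (a minimal
OpenCV sketch with illustrative names; the joint
matching-and-estimation scheme described above is not shown):

import numpy as np
import cv2

def contour_homography(pts_target, pts_current):
    """Robustly fit the homography mapping the stored target view of
    a planar contour onto its current view, given sampled point
    correspondences along the contour (N x 2 arrays)."""
    H, inlier_mask = cv2.findHomography(
        np.asarray(pts_target, dtype=np.float32),
        np.asarray(pts_current, dtype=np.float32),
        cv2.RANSAC, 3.0)
    return H, inlier_mask

In 2½D visual servoing, the rotation recovered by decomposing this
homography, combined with the image position of a reference point,
provides the error signal for the control law.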
Image Divergence from Closed Curves
Visual motion, as perceived by a camera mounted on a robot moving
relative to a scene, can be used to aid in navigation. Simple
cues such as time to contact can in principle be
estimated from the divergence of the image velocity field.
In practice, however, methods based on spatio-temporal derivatives of
image velocity have proved too sensitive to image noise to be useful.
This project instead considers the temporal evolution of the apparent
area of a closed contour, using an extension of Green's theorem in the
plane, to recover time to contact and surface orientation reliably.
This is exploited in real-time visual docking and obstacle
avoidance.
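The key quantities are easy to sketch. Assuming a polygonal sampling
of the tracked contour, the fragment below (illustrative names, not
the project's code) computes the apparent area via the shoelace
formula, a discrete form of Green's theorem, and the time to contact
for approach along the optical axis to a fronto-parallel surface:

import numpy as np

def apparent_area(contour):
    """Apparent area of a closed image contour, given as an N x 2
    array of sampled points, via the shoelace formula (a discrete
    form of Green's theorem in the plane)."""
    x, y = np.asarray(contour, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def time_to_contact(area_prev, area_now, dt):
    """Time to contact from the rate of change of apparent area.

    The average divergence of the image velocity field over the
    contour equals (dA/dt)/A; for approach along the optical axis to
    a fronto-parallel surface this divergence is 2/(time to contact).
    """
    dA_dt = (area_now - area_prev) / dt
    return 2.0 * area_now / dA_dt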
Uncalibrated Stereo Hand-Eye Coordination
In this project, a simple and robust approximation to stereo
using only the cues available under orthographic projection
is used to build a system which exploits relative disparity
(and its gradient) in uncalibrated stereo to guide a robot
manipulator to pick up unfamiliar objects in an unstructured
scene. The system must be able to cope not only with uncertainty
in the shape of the object, but also with uncertainty in the positions
and orientations of the camera, the robot and the object.
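The idea can be sketched as follows (a hypothetical Python fragment,
not the project's implementation, assuming roughly rectified views):
the gripper is servoed until its image position and relative disparity
match those of the object, placing both at the same depth on the same
line of sight without any camera calibration:

import numpy as np

def disparity(x_left, x_right):
    """Horizontal disparity of a tracked feature between the left
    and right views of an uncalibrated, roughly rectified rig."""
    return x_left[0] - x_right[0]

def align_step(gripper_l, gripper_r, object_l, object_r, gain=0.2):
    """One hypothetical alignment step: the error combines the image
    position difference with the relative disparity between gripper
    and object, both measured directly in the images."""
    rel_disparity = (disparity(object_l, object_r)
                     - disparity(gripper_l, gripper_r))
    image_error = np.asarray(object_l) - np.asarray(gripper_l)
    # A coarse image-to-motion mapping (e.g. learned online) would
    # convert this image-space correction into a robot motion.
    return gain * np.append(image_error, rel_disparity)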
Man-Machine Interfaces Using Visual Gestures and Pointing
By detecting and tracking a human hand, the system is
extended so that the user can point at an object of
interest and guide the robotic manipulator to pick it
up. The project uses uncalibrated stereo vision and
visual tracking of the hand. This makes the system robust
to movement of the cameras and of the user. This is just
one example of novel man-machine interfaces using computer
vision to provide more natural ways of interacting with
computers and machines. Some of the earliest examples in this
field include a wireless, passive alternative to a 3D mouse,
which exploits motion parallax cues, and an algorithm for detecting
and tracking face gaze, which exploits symmetry.