ENGINEERING TRIPOS PART IIB
2003-2004

Module 4F12 - Computer Vision and Robotics

Leader: Prof. R. Cipolla
Timing: Michaelmas term, 16 lectures (including examples)

Aims

The module aims to introduce the principles, models and applications of computer vision. The course will cover image structure, projection, stereo vision, and the interpretation of visual motion. It will be illustrated with case studies of industrial (robotic) applications of computer vision, including visual navigation for autonomous robots, robot hand-eye coordination and novel man-machine interfaces.


Lecture Syllabus

Introduction
Computer vision: what is it, why study it and how? The eye and the camera, vision as an information processing task. A geometrical framework for vision. 3D interpretation of 2D images. Applications.

Image structure
Image intensities and structure: edges and corners. Edge detection, the aperture problem. Corner detection. Contour extraction using B-spline snakes. Case study: tracking edges and corners for robot hand-eye coordination and man-machine interfaces.
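
As a concrete illustration of corner detection, the sketch below computes a Harris-style corner response from the structure tensor of image gradients. It is a minimal NumPy sketch, not the detector used in the case study: the 3x3 box window, the constant k = 0.04 and the simple central-difference gradients are illustrative assumptions.

import numpy as np

def box_filter3(a):
    # 3x3 box sum built from shifted, edge-padded copies of the array.
    p = np.pad(a, 1, mode='edge')
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    # Harris-style corner response R = det(M) - k * trace(M)^2, where M is
    # the 2x2 structure tensor of smoothed gradient products at each pixel.
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = box_filter3(Ix * Ix)
    Syy = box_filter3(Iy * Iy)
    Sxy = box_filter3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Corners are local maxima of R above a threshold; along an edge only one
# eigenvalue of M is large (R < 0), which is the aperture problem: motion
# along the edge direction cannot be recovered from local measurements.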

Projection
Orthographic projection. Pin-hole camera model. Planar perspective projection. Vanishing points and lines. Projection matrix, homogeneous coordinates. Camera calibration, recovery of world position. Weak perspective, the affine camera. Projective invariants. Case study: 2D object recognition.
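
The projection pipeline can be written down directly with homogeneous coordinates. The sketch below assembles a 3x4 projection matrix P = K [R | t] for an ideal pin-hole camera and projects world points into pixels; the square-pixel intrinsic matrix and the example focal length and principal point are illustrative assumptions, not calibrated values.

import numpy as np

def projection_matrix(f, cx, cy, R, t):
    # 3x4 camera matrix P = K [R | t] for an ideal pin-hole camera with
    # square pixels: f is the focal length in pixels, (cx, cy) the
    # principal point, R a 3x3 rotation and t a 3-vector translation.
    K = np.array([[f, 0, cx],
                  [0, f, cy],
                  [0, 0, 1.0]])
    return K @ np.hstack([R, np.asarray(t, float).reshape(3, 1)])

def project(P, X_world):
    # Project N x 3 world points to N x 2 pixel coordinates: append a
    # homogeneous 1, multiply by P, then divide by the third component.
    Xh = np.hstack([X_world, np.ones((len(X_world), 1))])
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:3]

# Illustrative camera at the world origin looking along the Z axis.
P = projection_matrix(f=800, cx=320, cy=240, R=np.eye(3), t=np.zeros(3))
print(project(P, np.array([[0.1, 0.0, 2.0]])))   # approximately [[360. 240.]]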

Stereo vision
Epipolar geometry and the essential matrix. Recovery of depth. Uncalibrated cameras and the fundamental matrix. The correspondence problem. Affine stereo. Case study: 3D stereograms.
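
Once both camera matrices are known (or estimated up to the ambiguities discussed in the lectures), recovery of depth reduces to triangulation. The following is a minimal linear (DLT) triangulation sketch; it minimises an algebraic rather than a geometric error, and the two-camera set-up in the usage example (identical intrinsics, a 0.2 m baseline along X) is purely illustrative.

import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each view contributes two rows of the
    # homogeneous system A X = 0; the solution is the right singular
    # vector of A with the smallest singular value.
    u1, v1 = x1
    u2, v2 = x2
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenise

# Illustrative stereo pair: identical cameras, 0.2 m baseline along X.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X = np.array([0.1, 0.05, 2.0])
x1 = P1 @ np.append(X, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # recovers [0.1, 0.05, 2.0]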

Object detection and tracking
Basic target tracking and the Kalman filter; application to B-spline snakes. Active appearance models. Chamfer matching and template trees. Case study: an intelligent automotive vision system.
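
As an example of the basic tracking machinery, here is a minimal constant-velocity Kalman filter for a single tracked image point (for instance a B-spline snake control point or a detected corner). The state model, the noise covariances q and r and the initial uncertainty are illustrative assumptions; a real tracker would tune these to the measurement process.

import numpy as np

class KalmanTracker2D:
    # Constant-velocity Kalman filter for one tracked image point.
    # State x = [u, v, du, dv]; measurements are noisy (u, v) positions.
    def __init__(self, u, v, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([u, v, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                  # initial state uncertainty
        self.F = np.eye(4)                         # constant-velocity dynamics
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                  # measure position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q                     # process noise
        self.R = np.eye(2) * r                     # measurement noise

    def predict(self):
        # Propagate the state and its covariance one frame ahead.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # Fuse a new (u, v) measurement with the prediction.
        y = np.asarray(z, float) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Per video frame: tracker.predict() gives the region in which to search for
# the feature, then tracker.update(measured_uv) fuses the new detection.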

Objectives

On completion of the module, students should:

  • Be able to design feature detectors and use them to localise and track image features;
  • Know how to model perspective image formation and calibrate single and multiple camera systems;
  • Be able to recover 3D position and shape information from arbitrary viewpoints;
  • Appreciate the problems in finding corresponding features in different viewpoints;
  • Analyse visual motion to recover scene structure and viewer motion, and understand how this information can be used for navigation;
  • Understand how simple object recognition systems can be designed so that they are independent of lighting and camera viewpoint;
  • Appreciate the industrial potential of computer vision but understand the limitations of current methods.


Assessment

Written examination (1.5 hours, start of Lent term)


References

*NALWA, V. S. A Guided Tour of Computer Vision. Addison-Wesley, 1993. (NO 219)
*FAUGERAS, O. Three-Dimensional Computer Vision. MIT Press, 1993. (NOF 47)
*CIPOLLA, R. & GIBLIN, P. J. Visual Motion of Curves and Surfaces. CUP, 2000. (NOF 60)
