The Project

Proposal due Thursday, Oct. 10
Paper presentation Oct. 31-Nov. 14
Final project due Wednesday, Nov. 27

 

Your proposal should consist of about a page of text describing what you intend to accomplish, how it relates to computer vision and autonomous robotics, and why you think it is interesting. The proposal should include a reference to a primary paper you will be drawing upon (not one on the official reading list), what parts of it you will implement and/or improve, other methods you will be trying or problems you will be tackling, as well as any other references you think are relevant. The primary paper will be the one that you review for your second, solo presentation. This presentation should take 15 minutes and be accompanied by a two-page write-up. (Once again, please schedule an appointment with me to go over it; this time just the day before will do.)

 

I would like to meet with each of you individually between now and next Wednesday to discuss your project, so please e-mail me to set up an appointment. Below is a partial list of possible subjects if you need some direction regarding a topic or the scope of this undertaking (I can expand on them in our meeting), but I am hoping you will bring your own ideas or papers you have found to talk about. Although the primary paper you choose should not be on the reading list, the topic can be similar to one we are covering in class. I only ask that you try to do something fairly different from the first paper you were assigned.

 

In terms of equipment, assume that you will have access to a wheeled robot like the U. Penn. Clodbuster that I showed some videos of in the first lecture (I'm building one right now). I also have a DV video camera and a digital camera for data collection "on location" (such as from a car window), as well as a number of FireWire webcams for deskbound experimentation. There is, of course, a lot of data available online. As a last resort, you can always e-mail paper authors to ask them (nicely) for their data.

 

Tracking cars
Automatic mosaicing
Tracking with super-resolution
Optical flow for obstacle avoidance
Detecting motion while moving
Gesture interpretation
Person/place recognition
Building visual maps
Structure from motion
Etc.