Vision Group 1

Members:
Ian Lenz (ilenz)

Chandrasekhar Bhagavatula (cbhagava)

Soumith Chintala (soumith)

Project:
Our group will develop a more robust and useful vision system for the NAO. Our first goal is to implement camera calibration for different light levels. From there, we will work on image segmentation, and possibly texture and object recognition.

9/22 Demo:
For our demo on September 22, we hope to have the NAO navigate autonomously in the atrium area outside the lab. The navigation system will probably be very basic - the robot will recognize the color of the floor and attempt to stay on that color while moving around the room. Such a method might be thrown off by local variations in the floor color caused by light coming in from outside or shadows cast by objects in the room. While we hope to compensate for these effects by the 22nd, the primary goal is to get simple color segmentation and navigation working.
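A minimal sketch of the floor-color idea described above, under our own assumptions (the function names, HSV tolerances, and the choice of the bottom-center pixel as the floor reference are ours, not part of the NAO API): sample a reference color assumed to be floor, mark pixels within an HSV tolerance as floor, and steer toward the third of the lower image with the most floor.

```python
# Hypothetical floor-color segmentation and steering sketch.
# Pure Python: an image is a list of rows, each a list of (r, g, b) tuples.
import colorsys

def to_hsv(pixel):
    """Convert an (r, g, b) tuple with 0-255 channels to HSV in [0, 1]."""
    r, g, b = pixel
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

def is_floor(pixel, ref_hsv, hue_tol=0.05, sat_tol=0.3, val_tol=0.3):
    """True if the pixel is within an HSV tolerance of the reference color."""
    h, s, v = to_hsv(pixel)
    rh, rs, rv = ref_hsv
    # Hue is circular, so compare along the shorter arc.
    dh = min(abs(h - rh), 1.0 - abs(h - rh))
    return dh <= hue_tol and abs(s - rs) <= sat_tol and abs(v - rv) <= val_tol

def steer(image):
    """Return 'left', 'forward', or 'right' based on where floor dominates."""
    h, w = len(image), len(image[0])
    # Assume the bottom-center pixel is floor; use it as the reference color.
    ref_hsv = to_hsv(image[h - 1][w // 2])
    thirds = [0, 0, 0]  # floor pixel counts in left/center/right thirds
    for row in image[h // 2:]:  # only the lower half of the frame matters
        for x, px in enumerate(row):
            if is_floor(px, ref_hsv):
                thirds[min(3 * x // w, 2)] += 1
    best = max(range(3), key=lambda i: thirds[i])
    return ['left', 'forward', 'right'][best]
```

This is exactly the failure mode noted above: a shadow that shifts a patch of floor outside the HSV tolerance would be counted as non-floor, pushing the robot away from perfectly walkable ground.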

Other goals:

 * Offload vision processing over wireless - this would let us do much more in-depth processing on the incoming images while staying within a reasonable amount of bandwidth
 * Obstacle recognition - use the ultrasound sensors to detect obstacles, learn the colors (or textures, shapes, features, etc.) associated with them, and avoid those features in the future
 * Mapping - map obstacles and ground, and (ideally) determine the robot's location from visual cues
 * Dynamic calibration - recognize that lighting conditions differ between areas and adjust the expected colors in those areas accordingly. This would likely work in conjunction with mapping.
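One simple way the dynamic-calibration goal could be realized is a per-region gray-world correction: estimate the illumination in each map region from its channel means and rescale colors so the region's average comes out neutral gray. This is a pure-Python sketch; the function name and the use of gray-world (rather than some other constancy method) are our assumptions.

```python
# Hypothetical per-region gray-world color correction.
def gray_world_correct(pixels):
    """pixels: list of (r, g, b) tuples sampled from one lighting region.
    Returns color-corrected pixels whose channel means are equalized."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0  # target: equal average in all three channels
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(min(255, int(round(p[c] * gains[c]))) for c in range(3))
            for p in pixels]
```

Run per map region, this would let the same floor-color threshold work in a sunlit patch and a shadowed one, since each region's color cast is normalized before segmentation.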