Abstract
We describe a framework for robot navigation that exploits the continuity of image sequences. Tracked visual features both guide the robot and provide predictive information about subsequent features to track. Our hypothesis is that image-based techniques will allow accurate motion without a precise geometric model of the world, while using predictive information will add speed and robustness. A basic component of our framework is called a scene, which is the set of image features stable over some segment of motion. When the scene changes, it is appended to a stored sequence. As the robot moves, correspondences and dissimilarities between current, remembered, and expected scenes provide cues to join and split scene sequences, forming a map-like directed graph. Visual servoing on features in successive scenes is used to traverse a path between robot and goal map locations. In our framework, a human guide serves as a scene recognition oracle during a map-learning phase; thereafter, assuming a known starting position, the robot can independently determine its location without general scene recognition ability. A prototype implementation of this framework uses color patches, sum-of-squared-differences (SSD) subimages, or image projections of rectangles as features.
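The abstract's core data structure — a map-like directed graph whose nodes are scenes (sets of image features stable over a segment of motion), with feature overlap between scenes cueing joins and splits — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the class names, the Jaccard-style overlap measure, and the string feature identifiers are all assumptions.

```python
# Hypothetical sketch of the scene-sequence map described in the abstract.
# Names, the overlap measure, and feature identifiers are illustrative
# assumptions, not the paper's actual implementation.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Scene:
    """A scene: the set of image features stable over a segment of motion."""
    features: frozenset  # e.g. IDs of color patches, SSD subimages, rectangles


@dataclass
class SceneMap:
    """Map-like directed graph whose nodes are scenes."""
    edges: dict = field(default_factory=dict)  # Scene -> list of successor Scenes

    def append(self, prev: Scene, new: Scene) -> None:
        """When the stable feature set changes, append the new scene."""
        self.edges.setdefault(prev, []).append(new)

    def overlap(self, a: Scene, b: Scene) -> float:
        """Feature correspondence between two scenes (Jaccard index here);
        high overlap would cue joining sequences, low overlap a split."""
        union = a.features | b.features
        return len(a.features & b.features) / len(union) if union else 0.0


# Usage: two scenes sharing one tracked feature are linked as the robot moves.
s1 = Scene(frozenset({"patch_red_door", "ssd_corner"}))
s2 = Scene(frozenset({"ssd_corner", "rect_window"}))
m = SceneMap()
m.append(s1, s2)
```

Path traversal between robot and goal locations would then be graph search over `edges`, with visual servoing on the shared features of successive scenes carrying the robot along each edge.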
| Original language | English (US) |
| --- | --- |
| Pages | 938-943 |
| Number of pages | 6 |
| State | Published - Dec 1 1996 |
| Externally published | Yes |
| Event | Proceedings of the 1996 13th National Conference on Artificial Intelligence. Part 2 (of 2) - Portland, OR, USA |
| Duration | Aug 4 1996 → Aug 8 1996 |
Other

| Other | Proceedings of the 1996 13th National Conference on Artificial Intelligence. Part 2 (of 2) |
| --- | --- |
| City | Portland, OR, USA |
| Period | 8/4/96 → 8/8/96 |
ASJC Scopus subject areas
- Software
- Artificial Intelligence