Abstract
Laboratory-based models of oculomotor strategy that differ in the amount and type of top-down information were evaluated against a baseline of random scanning for predicting the gaze patterns of subjects performing a real-world activity: walking to a target. Images of four subjects' eyes and fields of view were recorded simultaneously as they performed the mobility task. Offline analyses generated eye-on-scene movies, and a categorization scheme was used to classify the locations of the fixations. Frames from each subject's eye-on-scene movie served as input to the models, and each model's predicted fixation locations were classified using the same categorization scheme. Models with no top-down information (the visual-salience model) or with only coarse feature information performed no better than a random scanner; their ordered fixation locations (gaze patterns) matched less than a quarter of the subjects' gaze patterns. A model that used only geographic information outperformed the random scanner, matching approximately a third of the gaze patterns. The best performance came from an oculomotor strategy that used both coarse feature and geographic information, matching nearly half the gaze patterns (48%). Thus, a model that uses top-down information about a target's coarse features and general vicinity predicts fixation behavior fairly well, but it does not fully specify the gaze pattern of a subject walking to a target. Additional information is required, perhaps in the form of finer feature information or knowledge of the task's procedure.
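The evaluation described above scores a model by how often its ordered sequence of predicted fixation locations matches a subject's gaze pattern. A minimal sketch of that comparison, with hypothetical category labels and sequences (the paper's actual categorization scheme and data are not reproduced here):

```python
def fraction_patterns_matched(model_patterns, subject_patterns):
    """Fraction of trials on which the model's ordered sequence of
    fixation-location categories exactly matches the subject's."""
    matches = sum(1 for m, s in zip(model_patterns, subject_patterns) if m == s)
    return matches / len(subject_patterns)

# Hypothetical fixation-location sequences, one list per trial.
subject = [["floor", "target", "target"], ["wall", "floor", "target"]]
model   = [["floor", "target", "target"], ["floor", "wall", "target"]]

print(fraction_patterns_matched(model, subject))  # 0.5 (1 of 2 trials match)
```

An exact-sequence match is the strictest possible criterion; a partial-credit measure (e.g. longest common subsequence) would be a natural relaxation, but the sketch above is only meant to illustrate the scoring idea.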
Original language | English (US)
---|---
Pages (from-to) | 333-346
Number of pages | 14
Journal | Vision Research
Volume | 43
Issue number | 3
State | Published - Feb 2003
Externally published | Yes
Keywords
- Gaze
- Guided search
- Mobility
- Oculomotor search strategies
- Visual saliency
ASJC Scopus subject areas
- Ophthalmology
- Sensory Systems