Audio-visual saliency map: Overview, basic models and hardware implementation

Sudarshan Ramenahalli, Daniel R. Mendat, Salvador Dura-Bernal, Eugenio Culurciello, Ernst Niebur, Andreas Andreou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper we provide an overview of audio-visual saliency map models. In the simplest model, the location of the auditory source is modeled as a Gaussian, and different methods are used to combine the auditory and visual information. We then provide experimental results from applying simple audio-visual integration models to cognitive scene analysis. We validate the simple audio-visual saliency models with a hardware convolutional network architecture and real data recorded from moving audio-visual objects. The latter system was developed in the Torch language by extending the attention.lua (code) and attention.ui (GUI) files that implement Culurciello's visual attention model.
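The abstract describes the simplest model only at a high level: the auditory source location enters as a Gaussian, and the auditory and visual maps are then combined in several ways that the abstract does not spell out. The following is a minimal illustrative sketch of that idea, not the paper's Torch/Lua implementation; the 2D Gaussian construction and the two fusion rules shown (weighted addition and pointwise multiplication) are assumptions, and all function names and parameters are hypothetical.

```python
# Illustrative sketch only (assumption): a Gaussian auditory saliency map
# combined with a visual saliency map by two common fusion rules. The paper's
# actual system extended attention.lua/attention.ui in Torch (Lua).
import numpy as np

def auditory_saliency(shape, source_xy, sigma):
    """2D Gaussian centered at the estimated auditory source location."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    x0, y0 = source_xy
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    return g / g.max()

def fuse_additive(v, a, alpha=0.5):
    """Weighted linear combination of visual and auditory maps."""
    return alpha * v + (1.0 - alpha) * a

def fuse_multiplicative(v, a):
    """Pointwise product: the auditory map gates the visual map."""
    return v * a

# Example: a random stand-in visual map, source at (x=80, y=60), sigma = 10 px.
rng = np.random.default_rng(0)
V = rng.random((120, 160))
A = auditory_saliency(V.shape, source_xy=(80, 60), sigma=10.0)
S_add = fuse_additive(V, A)
S_mul = fuse_multiplicative(V, A)
print(S_add.shape, S_mul.shape)
```

Additive fusion preserves salient visual locations far from the sound source, while multiplicative fusion suppresses them; which behavior is preferable depends on the scene-analysis task.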

Original language: English (US)
Title of host publication: 2013 47th Annual Conference on Information Sciences and Systems, CISS 2013
DOIs
State: Published - Aug 20 2013
Event: 2013 47th Annual Conference on Information Sciences and Systems, CISS 2013 - Baltimore, MD, United States
Duration: Mar 20 2013 - Mar 22 2013

Publication series

Name: 2013 47th Annual Conference on Information Sciences and Systems, CISS 2013

Other

Other: 2013 47th Annual Conference on Information Sciences and Systems, CISS 2013
Country/Territory: United States
City: Baltimore, MD
Period: 3/20/13 - 3/22/13

ASJC Scopus subject areas

  • Information Systems
