Sparse dictionary-based representation and recognition of action attributes

Qiang Qiu, Zhuolin Jiang, Rama Chellappa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present an approach for learning a dictionary of action attributes via information maximization. We unify class distribution and appearance information into a single objective function for learning a sparse dictionary of action attributes. The objective function maximizes the mutual information between what has been learned and what remains to be learned, in terms of both appearance information and class distribution, for each dictionary item. We propose a Gaussian Process (GP) model for sparse representation to optimize this dictionary objective function. The sparse coding property allows a compactly supported GP kernel to realize a very efficient dictionary learning process; hence we can describe an action video by a set of compact and discriminative action attributes. More importantly, we can recognize modeled action categories in a sparse feature space, and this representation generalizes to unseen and unmodeled action categories. Experimental results demonstrate the effectiveness of our approach in action recognition applications.
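To make the selection criterion concrete, the following is a minimal sketch, not the authors' implementation, of greedy dictionary-item selection by information maximization under a Gaussian Process prior. It covers only the appearance term, scoring each candidate by the GP mutual-information gain var(y | selected) / var(y | rest); the class-distribution term of the paper's objective is omitted, and the kernel choice, data dimensions, and dictionary size are illustrative assumptions.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Squared-exponential kernel between the rows of X and Y
    # (an assumed stand-in for the paper's appearance kernel).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def conditional_variance(K, y_idx, cond_idx, noise=1e-6):
    # GP posterior variance of item y_idx given the items in cond_idx.
    if len(cond_idx) == 0:
        return K[y_idx, y_idx]
    K_cc = K[np.ix_(cond_idx, cond_idx)] + noise * np.eye(len(cond_idx))
    k_yc = K[y_idx, cond_idx]
    return K[y_idx, y_idx] - k_yc @ np.linalg.solve(K_cc, k_yc)

def select_dictionary(X, n_items, gamma=1.0):
    # Greedily pick items that are most informative about the remaining
    # pool: maximize var(y | selected) / var(y | rest), i.e. the gain in
    # mutual information from adding y to the selected set.
    K = rbf_kernel(X, X, gamma)
    pool = list(range(X.shape[0]))
    selected = []
    for _ in range(n_items):
        best, best_gain = None, -np.inf
        for y in pool:
            rest = [i for i in pool if i != y]
            gain = (conditional_variance(K, y, selected)
                    / conditional_variance(K, y, rest))
            if gain > best_gain:
                best, best_gain = y, gain
        selected.append(best)
        pool.remove(best)
    return selected

# Toy usage: pick 5 dictionary items from 100 random 16-D features.
items = select_dictionary(np.random.randn(100, 16), n_items=5)

The greedy loop is quadratic in the pool size per step, so this sketch is only practical for small candidate pools; the compact-support kernel mentioned in the abstract is what makes the full-scale procedure efficient.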

Original language: English (US)
Title of host publication: 2011 International Conference on Computer Vision, ICCV 2011
Pages: 707-714
Number of pages: 8
DOIs
State: Published - 2011
Externally published: Yes
Event: 2011 IEEE International Conference on Computer Vision, ICCV 2011 - Barcelona, Spain
Duration: Nov 6 2011 – Nov 13 2011

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision

Conference

Conference: 2011 IEEE International Conference on Computer Vision, ICCV 2011
Country/Territory: Spain
City: Barcelona
Period: 11/6/11 – 11/13/11

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
