Segment-Based Methods for Facial Attribute Detection from Partial Faces

Upal Mahbub, Sayantan Sarkar, Rama Chellappa

Research output: Contribution to journal › Article › peer-review


State-of-the-art methods for attribute detection from faces almost always assume the presence of a full, unoccluded face, so their performance degrades on partially visible and occluded faces. In this paper, we introduce SPLITFACE, a deep convolutional neural network-based method explicitly designed to perform attribute detection on partially occluded faces. Taking several facial segments and the full face as input, the proposed method takes a data-driven approach to determine which attributes are localized in which facial segments. The unique architecture of the network allows each attribute to be predicted by multiple segments, which permits the use of committee-machine techniques that combine local and global decisions to boost performance. With access to segment-based predictions, SPLITFACE can reliably predict attributes localized in the visible parts of the face, without having to rely on the presence of the whole face. We use the CelebA and LFWA facial attribute datasets for standard evaluations. We also modify both datasets to occlude the faces, so that attribute detection algorithms can be evaluated on partial faces. Our evaluation shows that SPLITFACE significantly outperforms other recent methods, especially on partial faces.
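The abstract's core idea, per-segment attribute scores fused by a committee rule so that occluded segments can be dropped, can be sketched compactly. The following is a minimal, illustrative PyTorch sketch, not the authors' released network: the class names (SegmentBranch, SplitFaceSketch), the segment set, the branch architecture, and the masked-average fusion rule are all assumptions standing in for the paper's committee-machine fusions.

```python
# Illustrative sketch of segment-based attribute detection with committee-style
# score fusion, in the spirit of SPLITFACE. Names, segment choices, and the
# fusion rule are assumptions for illustration, not the authors' code.
import torch
import torch.nn as nn

NUM_ATTRIBUTES = 40  # CelebA defines 40 binary facial attributes


class SegmentBranch(nn.Module):
    """A small CNN that scores all attributes from one facial segment."""

    def __init__(self, num_attrs=NUM_ATTRIBUTES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_attrs)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)  # raw logits, one per attribute


class SplitFaceSketch(nn.Module):
    """Full face plus several segments; each attribute is scored by every
    visible branch, and scores are fused by a masked average (a simple
    committee rule; the paper explores richer combination schemes)."""

    def __init__(self, segment_names=("full", "eyes", "nose", "mouth")):
        super().__init__()
        self.branches = nn.ModuleDict({n: SegmentBranch() for n in segment_names})

    def forward(self, segments, visibility):
        # segments: dict name -> (B, 3, H, W) crops
        # visibility: dict name -> (B,) tensor in {0, 1}; 0 masks out a branch
        scores, weights = [], []
        for name, branch in self.branches.items():
            vis = visibility[name].float().unsqueeze(1)  # (B, 1)
            scores.append(branch(segments[name]) * vis)
            weights.append(vis)
        total = torch.stack(weights).sum(0).clamp(min=1.0)  # avoid divide-by-zero
        return torch.stack(scores).sum(0) / total  # fused attribute logits


# Usage: the second sample has only the eye and nose regions visible, so its
# fused scores come from those two branches alone.
model = SplitFaceSketch()
crops = {n: torch.randn(2, 3, 64, 64) for n in ("full", "eyes", "nose", "mouth")}
vis = {"full": torch.tensor([1, 0]), "eyes": torch.tensor([1, 1]),
       "nose": torch.tensor([1, 1]), "mouth": torch.tensor([1, 0])}
logits = model(crops, vis)  # (2, 40); sigmoid gives per-attribute probabilities
```

Averaging only over visible branches is what lets attributes localized in the visible parts of the face be predicted without the whole face; the paper's actual committee machines weight and combine local and global decisions more carefully than this uniform average.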

Original language: English (US)
Article number: 8326549
Pages (from-to): 601-613
Number of pages: 13
Journal: IEEE Transactions on Affective Computing
Issue number: 4
State: Published - Oct 1 2020
Externally published: Yes


Keywords

  • Attribute detection
  • committee machines
  • facial segment
  • local-to-global decision propagation
  • score fusion

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction

