TY - CONF
T1 - Block coordinate descent for sparse NMF
AU - Potluru, Vamsi K.
AU - Plis, Sergey M.
AU - Le Roux, Jonathan
AU - Pearlmutter, Barak A.
AU - Calhoun, Vince D.
AU - Hayes, Thomas P.
N1 - Funding Information:
The first author acknowledges support from NIBIB grants 1 R01 EB 000840 and 1 R01 EB 005846. The second author was supported by NIMH grant 1 R01 MH076282-01. The latter two grants were funded as part of the NSF/NIH Collaborative Research in Computational Neuroscience Program.
Publisher Copyright:
© 2013 International Conference on Learning Representations, ICLR. All rights reserved.
PY - 2013
Y1 - 2013
N2 - Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem, which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L0 norm; however, its optimization is NP-hard. Mixed norms, such as the L1/L2 measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy, in contrast to computationally cheaper alternatives such as the plain L1 norm. However, present algorithms designed to optimize the mixed L1/L2 norm are slow, which has motivated alternative sparse NMF formulations based on the L1 and L0 norms. Our proposed algorithm enforces the mixed-norm sparsity constraints without sacrificing computation time. We present experimental evidence on real-world datasets showing that our new algorithm runs an order of magnitude faster than the current state-of-the-art solvers for the mixed norm and is suitable for large-scale datasets.
AB - Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem, which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L0 norm; however, its optimization is NP-hard. Mixed norms, such as the L1/L2 measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy, in contrast to computationally cheaper alternatives such as the plain L1 norm. However, present algorithms designed to optimize the mixed L1/L2 norm are slow, which has motivated alternative sparse NMF formulations based on the L1 and L0 norms. Our proposed algorithm enforces the mixed-norm sparsity constraints without sacrificing computation time. We present experimental evidence on real-world datasets showing that our new algorithm runs an order of magnitude faster than the current state-of-the-art solvers for the mixed norm and is suitable for large-scale datasets.
UR - http://www.scopus.com/inward/record.url?scp=85083951291&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85083951291&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85083951291
T2 - 1st International Conference on Learning Representations, ICLR 2013
Y2 - 2 May 2013 through 4 May 2013
ER -