DOI: 10.1145/2339530.2339582
Research article

Fast Bregman Divergence NMF Using Taylor Expansion and Coordinate Descent

Published: 12 August 2012

ABSTRACT

Non-negative matrix factorization (NMF) provides a lower-rank approximation of a matrix. Because of the nonnegativity imposed on the factors, it yields a latent structure that is often more physically meaningful than other low-rank approximations such as the singular value decomposition (SVD). Most NMF algorithms proposed in the literature are based on minimizing the Frobenius norm, partly because the Frobenius-norm formulation allows much more flexibility in algebraic manipulation than other divergences. In this paper we propose a fast NMF algorithm that applies to general Bregman divergences. Through a Taylor series expansion of the Bregman divergences, we reveal a relationship between Bregman divergences and Euclidean distance. Combined with the scalar block coordinate descent method, this key relationship opens a new direction for NMF algorithms with general Bregman divergences. The proposed algorithm generalizes several recently proposed methods for computing NMF with Bregman divergences and is computationally faster than existing alternatives. We demonstrate the effectiveness of our approach with experiments on artificial as well as real-world data.
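The idea the abstract describes can be illustrated with a small sketch. This is not the paper's exact algorithm; it is a minimal NumPy illustration, under the assumption that the second-order Taylor expansion of a Bregman divergence D_phi(x, y) in y around the current estimate gives approximately 0.5 * phi''(y) * (x - y)^2, i.e. a locally weighted Euclidean distance, which is then minimized by scalar (one-entry-at-a-time) coordinate descent. The function name `bregman_nmf_cd` and the parameter `phi_dd` (the second derivative phi'') are illustrative choices, not names from the paper; `phi_dd = lambda y: 1.0 / y` corresponds to generalized KL, and `phi_dd=None` reduces to ordinary Frobenius-norm NMF.

```python
import numpy as np

def bregman_nmf_cd(X, rank, outer_iters=20, phi_dd=None, eps=1e-9, seed=0):
    """Sketch: NMF by scalar coordinate descent on a weighted least-squares
    surrogate, with weights phi''(UV^T) from a Taylor expansion of a
    Bregman divergence at the current estimate."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, rank))
    V = rng.random((n, rank))
    for _ in range(outer_iters):
        Xhat = U @ V.T
        # Weights of the quadratic surrogate at the current estimate.
        W = np.ones_like(X) if phi_dd is None else phi_dd(np.maximum(Xhat, eps))
        R = X - Xhat                       # residual, kept in sync below
        for kk in range(rank):
            # Scalar updates for column U[:, kk]: each entry U[i, kk] takes an
            # exact Newton step on the (quadratic) weighted objective, then the
            # residual is corrected.  Entries of one column touch disjoint rows
            # of R, so they can be updated together.
            num = (W * R) @ V[:, kk]
            den = W @ (V[:, kk] ** 2)
            Unew = np.maximum(0.0, U[:, kk] + num / np.maximum(den, eps))
            R -= np.outer(Unew - U[:, kk], V[:, kk])
            U[:, kk] = Unew
            # Symmetric scalar updates for column V[:, kk].
            num = (W * R).T @ U[:, kk]
            den = W.T @ (U[:, kk] ** 2)
            Vnew = np.maximum(0.0, V[:, kk] + num / np.maximum(den, eps))
            R -= np.outer(U[:, kk], Vnew - V[:, kk])
            V[:, kk] = Vnew
    return U, V

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((30, 5)) @ rng.random((5, 20))   # nonnegative, rank 5
    U, V = bregman_nmf_cd(X, rank=5, outer_iters=50)
    print("relative error:", np.linalg.norm(X - U @ V.T) / np.linalg.norm(X))
```

With `phi_dd=None` this collapses to a HALS-style Frobenius NMF; the Bregman case only adds the per-element weights, which is exactly the structural point the abstract makes: after the Taylor expansion, existing Euclidean coordinate-descent machinery applies almost unchanged.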


Supplemental Material

311a_m_talk_7.mp4 (mp4, 119.8 MB)


Published in

          KDD '12: Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining
          August 2012
          1616 pages
          ISBN:9781450314626
          DOI:10.1145/2339530

          Copyright © 2012 ACM


          Publisher

          Association for Computing Machinery

          New York, NY, United States



Acceptance Rates

Overall Acceptance Rate: 1,133 of 8,635 submissions, 13%

