DOI: 10.1145/1401890.1401949
research-article

Cut-and-stitch: efficient parallel learning of linear dynamical systems on SMPs

Published: 24 August 2008

ABSTRACT

Multi-core processors with an ever-increasing number of cores per chip are becoming prevalent in modern parallel computing. Our goal is to exploit multi-core as well as multi-processor architectures to speed up data mining algorithms. Specifically, we present a parallel algorithm for approximate learning of Linear Dynamical Systems (LDS), also known as Kalman Filters (KF). LDSs are widely used in time series analysis tasks such as motion capture modeling and visual tracking. We propose Cut-And-Stitch (CAS), a novel method that handles the data dependencies arising from the chain structure of hidden variables in an LDS, so as to parallelize the EM-based parameter learning algorithm. We implement the algorithm using OpenMP on both a supercomputer and a quad-core commercial desktop. The experimental results show that parallel algorithms using Cut-And-Stitch achieve comparable accuracy and almost linear speedup over the serial version. In addition, Cut-And-Stitch can be generalized to other models with a similar linear chain structure, such as Hidden Markov Models (HMM) and Switching Kalman Filters (SKF).
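The abstract describes the high-level shape of CAS: cut the chain of hidden variables into contiguous blocks, learn each block's quantities in parallel, then stitch the blocks back together by reconciling their boundaries. The following toy Python sketch illustrates only that structure, not the paper's actual EM computation: the per-block "estimate" is a stand-in (a block mean) for the real E-step, threads stand in for OpenMP workers, and the boundary-averaging stitch rule is an illustrative assumption. All function names here are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def cut(series, num_blocks):
    """'Cut' step: split a time series into contiguous blocks."""
    n = len(series)
    size = (n + num_blocks - 1) // num_blocks  # ceiling division
    return [series[i:i + size] for i in range(0, n, size)]

def local_estimate(block):
    """Stand-in for the per-block local computation (e.g. an E-step):
    here simply the block mean."""
    return sum(block) / len(block)

def cut_and_stitch(series, num_blocks=4):
    """Toy CAS skeleton: parallel per-block work, then boundary reconciliation."""
    blocks = cut(series, num_blocks)
    # Process blocks concurrently; each block depends only on its own data.
    with ThreadPoolExecutor(max_workers=num_blocks) as pool:
        local = list(pool.map(local_estimate, blocks))
    # 'Stitch' step: reconcile each block's estimate with its neighbors'
    # (here by simple averaging; the real algorithm propagates boundary
    # statistics between adjacent blocks).
    stitched = []
    for i, est in enumerate(local):
        neighbors = [est]
        if i > 0:
            neighbors.append(local[i - 1])
        if i + 1 < len(local):
            neighbors.append(local[i + 1])
        stitched.append(sum(neighbors) / len(neighbors))
    return stitched
```

The point of the structure is that the expensive per-block pass parallelizes with no cross-block dependencies, while the cheap stitch pass restores consistency across block boundaries; this is why near-linear speedup over the serial version is plausible.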


Published in

KDD '08: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
August 2008, 1116 pages
ISBN: 9781605581934
DOI: 10.1145/1401890
General Chair: Ying Li
Program Chairs: Bing Liu, Sunita Sarawagi

                Copyright © 2008 ACM

                Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

                Publisher

                Association for Computing Machinery

                New York, NY, United States



Acceptance Rates

KDD '08 paper acceptance rate: 118 of 593 submissions, 20%. Overall acceptance rate: 1,133 of 8,635 submissions, 13%.
