DOI: 10.1145/2517208.2517209

Family-based performance measurement

Published: 27 October 2013

ABSTRACT

Most contemporary programs are customizable: they provide many features that give rise to millions of program variants. Determining which feature selection yields optimal performance is challenging because of the exponential number of variants. Predicting the performance of a variant based on previous measurements has proved successful, but it induces a trade-off between measurement effort and prediction accuracy. We propose the alternative approach of family-based performance measurement to reduce the number of measurements required for identifying feature interactions and for obtaining accurate predictions. The key idea is to create a variant simulator (by translating compile-time variability to run-time variability) that can simulate the behavior of all program variants. We use it to measure the performance of individual methods, to trace methods to features, and to infer feature interactions based on the call graph. We evaluate our approach on five feature-oriented programs. On average, we achieve a prediction accuracy of 98% with only a single measurement per customizable program. Our observations show that the approach opens avenues for future research in different domains, such as feature-interaction detection and testing.

Published in

GPCE '13: Proceedings of the 12th International Conference on Generative Programming: Concepts & Experiences
October 2013
198 pages
ISBN: 978-1-4503-2373-4
DOI: 10.1145/2517208

          Copyright © 2013 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 27 October 2013


          Qualifiers

          • research-article

          Acceptance Rates

GPCE '13 Paper Acceptance Rate: 20 of 59 submissions, 34%
Overall Acceptance Rate: 56 of 180 submissions, 31%
