DOI: 10.1145/2934466.2934472
Research article

Using machine learning to infer constraints for product lines

Published: 16 September 2016

ABSTRACT

Variability-intensive systems may include several thousand features, allowing for an enormous number of possible configurations, including wrong ones (e.g., the derived product does not compile). For years, engineers have used constraints to restrict the space of possible configurations a priori, i.e., to exclude configurations that would violate these constraints. The challenge is to find a set of constraints that is both precise (allows all correct configurations) and complete (never allows a wrong configuration with respect to some oracle). In this paper, we propose a machine learning approach to infer such product-line constraints from an oracle that can assess whether a given product is correct. We randomly generate products from the product line, keeping for each of them its resolution model. We then classify these products according to the oracle and use their resolution models to infer cross-tree constraints over the product line. We validate our approach on a product-line video generator, using a simple computer vision algorithm as an oracle. We show that an interesting set of cross-tree constraints can be generated, with reasonable precision and recall.
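The sample-classify-infer loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature names, the toy oracle (which rejects any product combining blur and noise), and the pairwise "excludes" miner are all assumptions made for the example; the paper's actual approach learns richer cross-tree constraints from full resolution models of a feature model.

```python
import itertools
import random

random.seed(0)

# Hypothetical boolean features of a small product line.
FEATURES = ["blur", "noise", "overlay", "vehicle"]

def oracle(config):
    # Assumed oracle: a derived product is "wrong" whenever
    # blur and noise are both selected.
    return not (config["blur"] and config["noise"])

def sample_configs(n):
    # Randomly generate n products; each dict plays the role
    # of a (flattened) resolution model.
    return [{f: random.random() < 0.5 for f in FEATURES} for _ in range(n)]

def infer_excludes(configs, labels):
    """Mine candidate 'A excludes B' cross-tree constraints:
    feature pairs that co-occur only in oracle-rejected products."""
    candidates = set()
    for a, b in itertools.combinations(FEATURES, 2):
        together = [ok for cfg, ok in zip(configs, labels) if cfg[a] and cfg[b]]
        if together and not any(together):
            candidates.add((a, b))
    return candidates

configs = sample_configs(200)
labels = [oracle(c) for c in configs]
print(infer_excludes(configs, labels))
```

With enough samples, only the pair the oracle actually forbids survives the filter; precision and recall of the mined constraints then depend on the sample size and on how well the constraint language (here, simple pairwise excludes) matches the oracle's true rejection rule.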


Published in:
SPLC '16: Proceedings of the 20th International Systems and Software Product Line Conference
September 2016, 367 pages
ISBN: 9781450340502
DOI: 10.1145/2934466
General Chair: Hong Mei

    Copyright © 2016 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Overall acceptance rate: 167 of 463 submissions, 36%
