DOI: 10.1145/2483760.2483783

Hybrid learning: interface generation through static, dynamic, and symbolic analysis

Published: 15 July 2013

ABSTRACT

This paper addresses the problem of efficient generation of component interfaces through learning. Given a white-box component C with specified unsafe states, an interface captures safe orderings of invocations of C's public methods. In previous work we presented Psyco, an interface generation framework that combines automata learning with symbolic component analysis: learning drives the framework in exploring different combinations of method invocations, and symbolic analysis computes method guards corresponding to constraints on the method parameters for safe execution. In this work we propose solutions to the two main bottlenecks of Psyco. The explosion of method sequences that learning generates to validate its computed interfaces is reduced through partial order reduction resulting from a static analysis of the component. To avoid scalability issues associated with symbolic analysis, we propose novel algorithms that are primarily based on dynamic, concrete component execution, while resorting to symbolic analysis on a limited, as needed, basis. Dynamic execution enables the introduction of a concept of state matching, based on which our proposed approach detects, in some cases, that it has exhausted the exploration of all component behaviors. On the other hand, symbolic analysis is enhanced with symbolic summaries. Our new approach, X-Psyco, has been implemented in the Java PathFinder (JPF) software model checking platform. We demonstrated the effectiveness of X-Psyco on a number of realistic software components by generating more complete and precise interfaces than was previously possible.
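
For concreteness, the minimal Java sketch below illustrates the setting the abstract describes: a white-box component with a specified unsafe state, and a concrete driver that plays the role of a membership query over method orderings. The StreamComponent class, the Call record, and the isSafeSequence driver are all invented for this illustration and are not the Psyco or X-Psyco implementation; here the guard on write is hard-coded, whereas in the framework such parameter constraints are what the symbolic analysis computes.

```java
// A minimal, hypothetical illustration of the setting described above: a white-box
// component with a specified unsafe state, and a concrete driver that plays the role
// of a membership query in automata learning. All names here are invented for the
// example; this is not the Psyco/X-Psyco implementation.
import java.util.List;

public class InterfaceSketch {

    /** Toy component: calling write before open, or with a negative size, is unsafe. */
    static class StreamComponent {
        private boolean open = false;
        private boolean unsafe = false;           // the specified unsafe state

        void open()  { open = true; }
        void close() { open = false; }
        void write(int size) {
            if (!open || size < 0) unsafe = true; // guard on write: open && size >= 0
        }
        boolean isUnsafe() { return unsafe; }
    }

    /** One step in a candidate sequence: a public method name plus a concrete argument. */
    record Call(String method, int arg) {}

    /**
     * Executes a method sequence concretely and reports whether it stays safe.
     * In learning terms this answers a membership query: is this word in the
     * language of the safe interface?
     */
    static boolean isSafeSequence(List<Call> sequence) {
        StreamComponent c = new StreamComponent();
        for (Call call : sequence) {
            switch (call.method()) {
                case "open"  -> c.open();
                case "close" -> c.close();
                case "write" -> c.write(call.arg());
                default      -> throw new IllegalArgumentException(call.method());
            }
            if (c.isUnsafe()) return false;       // unsafe state reached: reject the word
        }
        return true;
    }

    public static void main(String[] args) {
        // open; write(5); close -- admitted by an interface of the form open; (write[size >= 0])*; close
        System.out.println(isSafeSequence(List.of(
                new Call("open", 0), new Call("write", 5), new Call("close", 0))));   // true
        // write(5) before open -- a safe interface must forbid this ordering
        System.out.println(isSafeSequence(List.of(new Call("write", 5))));            // false
    }
}
```

In the approach described above, concrete executions of this kind, together with state matching, keep the exploration of method orderings tractable, while guards such as size >= 0 on write are derived by the limited, on-demand symbolic analysis rather than being fixed in advance.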

Published in
ISSTA 2013: Proceedings of the 2013 International Symposium on Software Testing and Analysis
July 2013, 381 pages
ISBN: 9781450321594
DOI: 10.1145/2483760

Copyright © 2013 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
