DOI: 10.1145/3092703.3092715

Combining symbolic execution and search-based testing for programs with complex heap inputs

Pietro Braione, Giovanni Denaro, Andrea Mattavelli, and Mauro Pezzè
ISSTA '17, July 2017, Santa Barbara, CA, USA
Published: 10 July 2017

ABSTRACT

Despite the recent improvements in automatic test case generation, handling complex data structures as test inputs is still an open problem. Search-based approaches can generate sequences of method calls that instantiate structured inputs to exercise a relevant portion of the code, but fall short in building inputs to execute program elements whose reachability is determined by the structural features of the input structures themselves. Symbolic execution techniques can effectively handle structured inputs, but do not identify the sequences of method calls that instantiate the input structures through legal interfaces. In this paper, we propose a new approach to automatically generate test cases for programs with complex data structures as inputs. We use symbolic execution to generate path conditions that characterise the dependencies between the program paths and the input structures, and convert the path conditions to optimisation problems that we solve with search-based techniques to produce sequences of method calls that instantiate those inputs. Our preliminary results show that the approach is indeed effective in generating test cases for programs with complex data structures as inputs, thus opening a promising research direction.
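The core idea of the abstract — recasting a symbolic path condition over a heap input as a fitness function, then searching over method-call sequences that build the input through its legal interface — can be illustrated with a small sketch. This is a hypothetical toy, not the authors' tool: the `SortedList` class, the specific path condition (`size >= 3 and 42 in list`), and the random search are all assumptions made for illustration.

```python
# Illustrative sketch (hypothetical, not the authors' implementation):
# a path condition over a structured input is turned into a distance-based
# fitness function, and a random search over method-call sequences
# minimises it to construct a legal instance of the input structure.
import random


class SortedList:
    """Toy structured input, built only through its public interface."""

    def __init__(self):
        self.items = []

    def add(self, x):
        self.items.append(x)
        self.items.sort()

    def remove_min(self):
        if self.items:
            self.items.pop(0)


def fitness(lst):
    """Distance to the (assumed) path condition `size >= 3 and 42 in list`.

    Returns 0 when the condition holds; otherwise a positive value that
    shrinks as the structure gets closer to satisfying it.
    """
    size_gap = max(0, 3 - len(lst.items))
    contains_gap = (0 if 42 in lst.items
                    else min((abs(x - 42) for x in lst.items), default=100))
    return size_gap + contains_gap


def search(budget=2000, seed=0):
    """Sample random call sequences; return the first whose result
    satisfies the path condition (fitness 0), or None on exhaustion."""
    rng = random.Random(seed)
    for _ in range(budget):
        lst = SortedList()
        calls = []
        for _ in range(rng.randint(1, 6)):
            if rng.random() < 0.8:
                v = rng.randint(0, 60)
                lst.add(v)
                calls.append(f"add({v})")
            else:
                lst.remove_min()
                calls.append("remove_min()")
        if fitness(lst) == 0:
            return calls
    return None
```

A found sequence such as `["add(7)", "add(42)", "add(55)"]` is directly a test-case prefix: replaying it through the public interface reconstructs an input that drives execution down the path characterised by the condition. Real tools would replace the blind random sampling with a guided search (e.g. a genetic algorithm) that exploits the fitness gradient.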


