DOI: 10.1145/3338906.3338970

Diversity-based web test generation

Published: 12 August 2019

ABSTRACT

Existing web test generators derive test paths from a navigational model of the web application, completed with either manually or randomly generated input values. However, manual test data selection is costly, while random generation often results in infeasible input sequences, which are rejected by the application under test. Random and search-based generation can achieve the desired level of model coverage only after a large number of test execution attempts, each slowed down by the need to interact with the browser during test execution. In this work, we present a novel web test generation algorithm that pre-selects the most promising candidate test cases based on their diversity from previously generated tests. As such, only the test cases that explore diverse behaviours of the application are considered for in-browser execution. We have implemented our approach in a tool called DIG. Our empirical evaluation on six real-world web applications shows that DIG achieves higher coverage and fault detection rates significantly earlier than crawling-based and search-based web test generators.
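The core idea described in the abstract, namely preferring, among candidate test cases, the one that differs most from the tests generated so far and executing only that one in the browser, can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation of DIG: it assumes a test case is modeled as a sequence of action identifiers on the navigational model and that diversity is measured as the minimum edit (Levenshtein) distance from the already selected tests. All names (levenshtein, diversity, mostDiverse) and the example action sequences are hypothetical.

```typescript
// Illustrative sketch of diversity-based candidate selection (assumed model,
// not the DIG implementation). A test case is a sequence of action identifiers.
type TestCase = string[];

// Classic Levenshtein (edit) distance between two action sequences.
function levenshtein(a: TestCase, b: TestCase): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  return dp[a.length][b.length];
}

// Diversity of a candidate = distance to its nearest neighbour among the
// tests already selected (and executed).
function diversity(candidate: TestCase, selected: TestCase[]): number {
  if (selected.length === 0) return Number.POSITIVE_INFINITY;
  return Math.min(...selected.map((t) => levenshtein(candidate, t)));
}

// Pick the candidate farthest from the selected tests; only this candidate
// would then be run in the browser.
function mostDiverse(candidates: TestCase[], selected: TestCase[]): TestCase {
  return candidates.reduce((best, c) =>
    diversity(c, selected) > diversity(best, selected) ? c : best
  );
}

// Hypothetical usage: one executed test, two new candidates.
const selected: TestCase[] = [["login", "addExpense", "logout"]];
const candidates: TestCase[] = [
  ["login", "addExpense", "editExpense", "logout"],
  ["login", "createWallet", "inviteUser", "logout"],
];
console.log(mostDiverse(candidates, selected)); // the createWallet path (distance 2 vs. 1)
```

In this sketch the candidate exercising a new part of the application (the createWallet path) wins over one that merely extends an already covered path, which mirrors the intuition of favouring tests that explore diverse behaviours before paying the cost of in-browser execution.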


• Published in

  ESEC/FSE 2019: Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering
  August 2019
  1264 pages
  ISBN: 9781450355728
  DOI: 10.1145/3338906

      Copyright © 2019 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 12 August 2019


      Qualifiers

      • research-article

      Acceptance Rates

Overall Acceptance Rate: 112 of 543 submissions, 21%
