DOI: 10.1145/2931021.2931023
research-article

Toward an automated benchmark management system

Published: 14 June 2016

ABSTRACT

The systematic evaluation of program analyses as well as software-engineering tools requires benchmark suites that are representative of real-world projects in the domains for which the tools or analyses are designed. Such benchmarks currently exist for only a few research areas, and even where they exist, they are often not effectively maintained because of the manual effort this requires. As a result, it is often impossible to evaluate new analyses and tools on software that relies on current technologies. We describe ABM, a methodology for semi-automatically mining software repositories to extract up-to-date, representative sets of applications belonging to specific domains. The proposed methodology facilitates the creation of such collections and makes it easier to release updated versions of a benchmark suite. As one instantiation of the methodology, we present a collection of current real-world Java business web applications. The collection and methodology serve as a starting point for creating current, targeted benchmark suites and thus help to better evaluate current program-analysis and software-engineering tools.
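To give a concrete impression of the kind of repository mining such a methodology automates, the following is a minimal, hypothetical sketch (not the authors' ABM implementation) of retrieving candidate Java web applications from the public GitHub search API. The query qualifiers, the star threshold, and the result handling are illustrative assumptions only; a real pipeline would parse the returned JSON, apply domain-specific filters, and still involve a manual vetting step, as the abstract's "semi-automatically" suggests.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CandidateMiner {
    public static void main(String[] args) throws Exception {
        // Hypothetical selection query: Java repositories tagged as web
        // projects with a minimum popularity; the qualifiers and threshold
        // are illustrative assumptions, not the paper's actual criteria.
        String query = "language:java+topic:web+stars:%3E50";
        URI uri = URI.create("https://api.github.com/search/repositories?q="
                + query + "&sort=stars&per_page=30");

        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Accept", "application/vnd.github+json")
                .GET()
                .build();

        // Fetch the raw candidate list; a real pipeline would parse the JSON,
        // apply further filters (frameworks used, build success, project
        // activity), and hand borderline cases to a human reviewer.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}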


Published in

SOAP 2016: Proceedings of the 5th ACM SIGPLAN International Workshop on State Of the Art in Program Analysis
June 2016, 29 pages
ISBN: 9781450343855
DOI: 10.1145/2931021
Copyright © 2016 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Acceptance Rates

Overall acceptance rate: 11 of 11 submissions, 100%
