DOI: 10.1145/3185768.3186286

A Cloud Benchmark Suite Combining Micro and Applications Benchmarks

Published: 02 April 2018

ABSTRACT

Micro and application performance benchmarks are commonly used to guide cloud service selection. However, they are often considered in isolation, in setups that are hard to reproduce and rely on flawed execution strategies. This paper presents a new execution methodology that combines micro and application benchmarks into a benchmark suite called RMIT Combined, integrates this suite into an automated cloud benchmarking environment, and implements a repeatable execution strategy. Additionally, we contribute a newly crafted Web serving benchmark called WPBench with three different load scenarios. A case study in the Amazon EC2 cloud demonstrates that, for WPBench, choosing a cost-efficient instance type can deliver up to 40% better performance at 40% lower cost. Contrary to prior research, our findings reveal that network performance no longer varies significantly. Our results also show that choosing a modern virtualization type can improve disk utilization by up to 10% for I/O-heavy workloads.
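To make the cost-efficiency claim concrete, the following minimal Python sketch (not part of the paper; all throughput and price figures are hypothetical) shows how such a price-performance comparison can be quantified: an instance type that serves 40% more requests at 40% lower hourly cost yields roughly 2.3x the requests per dollar.

    # Hedged sketch, not from the paper: quantifying "40% better performance
    # at 40% lower cost" as a price-performance ratio. All numbers are hypothetical.

    def price_performance(throughput_rps: float, hourly_cost_usd: float) -> float:
        """Requests served per dollar of instance cost, a simple cost-efficiency metric."""
        return throughput_rps * 3600 / hourly_cost_usd

    baseline = price_performance(throughput_rps=100.0, hourly_cost_usd=0.10)
    candidate = price_performance(throughput_rps=140.0, hourly_cost_usd=0.06)  # +40% perf, -40% cost

    print(f"baseline:    {baseline:,.0f} requests per USD")
    print(f"candidate:   {candidate:,.0f} requests per USD")
    print(f"improvement: {candidate / baseline:.2f}x")  # ~2.33x better price-performance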

Published in: ICPE '18: Companion of the 2018 ACM/SPEC International Conference on Performance Engineering, April 2018, 212 pages.
ISBN: 9781450356299
DOI: 10.1145/3185768
Copyright © 2018 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
