ABSTRACT
Micro and application performance benchmarks are commonly used to guide cloud service selection. However, they are often considered in isolation, in hardly reproducible setups, and with a flawed execution strategy. This paper presents a new execution methodology that combines micro and application benchmarks into a single benchmark suite called RMIT Combined, integrates this suite into an automated cloud benchmarking environment, and implements a repeatable execution strategy. Additionally, we contribute WPBench, a newly crafted Web serving benchmark with three different load scenarios. A case study in the Amazon EC2 cloud demonstrates that choosing a cost-efficient instance type can simultaneously deliver up to 40% better performance and 40% lower cost for the Web serving benchmark WPBench. Contrary to prior research, our findings reveal that network performance no longer varies to a relevant degree. Our results also show that choosing a modern virtualization type can improve disk utilization by up to 10% for I/O-heavy workloads.
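The repeatable execution strategy follows the idea of Randomized Multiple Interleaved Trials (RMIT): rather than running all repetitions of one benchmark back to back, every trial executes each benchmark once in a freshly randomized order, so that time-dependent interference (e.g., noisy neighbors) spreads evenly across benchmarks. The following Python sketch illustrates such a schedule generator; the benchmark names, trial count, and function name are illustrative assumptions, not taken from the suite itself.

```python
import random

def rmit_schedule(benchmarks, trials, seed=None):
    """Generate a Randomized Multiple Interleaved Trials (RMIT) schedule.

    Each trial runs every benchmark exactly once, in an independently
    randomized order, so repetitions of the same benchmark are spread
    across the whole measurement period instead of being clustered.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(trials):
        order = list(benchmarks)
        rng.shuffle(order)  # fresh random order for this trial
        schedule.append(order)
    return schedule

# Illustrative micro and application benchmarks (hypothetical names).
suite = ["cpu-sysbench", "mem-stream", "disk-fio", "net-iperf", "wpbench-read"]
for i, trial in enumerate(rmit_schedule(suite, trials=3, seed=42), start=1):
    print(f"trial {i}: {' -> '.join(trial)}")
```

Under this scheme, comparing per-trial results of two benchmarks exposes them to similar background conditions, which is what makes repeated cloud measurements comparable.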