Elsevier

Computer Networks

Volume 70, 9 September 2014, Pages 75-95

Energy-aware joint management of networks and Cloud infrastructures

https://doi.org/10.1016/j.comnet.2014.04.011

Abstract

Fueled by the massive adoption of Cloud services, the CO2 emissions of Information and Communication Technology (ICT) systems are rapidly increasing. Overall, service centers and networks account for 2–4% of global CO2 emissions, and this share is expected to reach up to 10% within 5–10 years. Service centers and communication networks have so far been managed independently, but the new generation of Cloud systems can be based on a tight integration of service centers and networking infrastructures. Moreover, geographically distributed service centers are being widely adopted to keep Cloud services close to end users and to guarantee high performance. The geographical distribution of computing facilities offers many opportunities for optimizing energy consumption and costs through a clever distribution of the computational workload that exploits the different availability of renewable energy sources, as well as different time zones and hourly energy pricing. Energy and cost savings can be pursued by dynamically allocating computing resources to applications at a global level, while communication networks make it possible to assign load requests flexibly and to move data.

Although a considerable research effort has been devoted in recent years to the energy efficiency of service centers and of communication networks, little work has explored integrated approaches able to exploit the possible synergies between geographically distributed service centers and the networks used for accessing and interconnecting them. In this paper we propose an optimization framework able to jointly manage the use of brown and green energy in an integrated system while guaranteeing quality requirements. We propose an efficient and accurate problem formulation that can be solved to optimality for real-size instances in a few minutes. Numerical results, on a set of randomly generated instances and on a case study representative of a large Cloud provider, show that the availability of green energy has a strong impact on optimal energy management policies and that the contribution of the network is far from negligible.

Introduction

The debate on climate change and the reduction of carbon dioxide emissions is fostering the development of new “green policies” aimed at decreasing the environmental impact of human activities. How to counter global warming and how to enhance energy efficiency have been placed at the top of the list of global challenges. ICT plays a key role in this greening process. Since the beginning, ICT applications have been considered part of the solution, as they can greatly improve the environmental performance of all the other sectors of the world economy. More recently, awareness of the potential impact of the carbon emissions of the ICT sector itself has rapidly increased.

Overall, the combination of the energy consumption of service centers and communication networks accounts for 2–4% of global CO2 emissions (comparable, e.g., to the emissions due to global air traffic) and is projected to reach up to 10% within 5–10 years, fueled by the expected massive adoption of Cloud computing [1], [2]. Investment in service centers grew by 22.1% during 2012 and is expected to grow by a further 14.5% in 2013 [1]. Thus, one of the main challenges for Cloud computing is to reduce its carbon and energy footprint while keeping up with the high growth rate of storage, server, and communication infrastructures.

Although the computing and networking components of the system have so far been designed and managed quite independently, the current trend is toward a stronger integration that improves the performance and efficiency of the Cloud services offered to end users [3]. The integration of computing and networking components into a new generation of Cloud systems can be used not only to provide service flexibility to end users, but also to flexibly manage the resources available in geographically distributed computing centers and in the network interconnecting them.

A key enabler of Cloud/network cooperation is the use of geographically distributed service centers. Distributed Cloud service provisioning makes it possible to better balance the traffic/computing workload and to bring computing and storage services closer to end users, for a better application experience [4]. Moreover, from an energy point of view, geographically distributed service centers allow Cloud providers to optimize energy consumption by exploiting load and energy-cost variations over time in different locations.

The level of flexibility in the use of resources in different service centers depends on the application domain and basically comes from the geographic distribution of the virtual machines hosting service applications, the use of dynamic geographical load-redirection mechanisms, and the intelligent use of storage systems with data partitioning and replication techniques. This flexibility in service center management also has a relevant impact on the communication network, mainly for two reasons. First, since service requests from users are delivered through an “access network” to the service centers hosting the virtual machines, dynamically moving the workload among service centers can completely change the traffic pattern observed by the network. Second, an “interconnection network” is used to internally connect service centers and to redirect end-user requests or even move virtual machines and data, again with a non-negligible impact on the traffic load when reconfiguration decisions are taken by the Cloud management system. Access and interconnection networks can actually be implemented in several different ways, as private intranets or over the public Internet, depending on the Cloud provider’s policies and the specific application domains. In any case, there is a large number of possible interaction models between telecommunication and Cloud providers that regulate their service agreements from a technical and economic perspective.

The main contribution of this paper is to explore the possibility of jointly and optimally managing service centers and the network connecting them, with the aim of reducing the Cloud energy cost and consumption. We develop an optimization framework that is able to show the potential savings achievable with joint management and to point out the relevant parameters that impact the overall system performance. Concerning the network, we propose two representations: a high-level approximated model and a detailed one. We show that the energy consumption obtained with the approximated version is very close to that obtained with the detailed network representation, while requiring significantly less computational effort. Thus, it is suitable for assessing the importance and impact of the joint optimization. Resource utilization and load allocation scheduling is performed on a daily basis, assuming a central decision point (one of the service centers) and the availability of traffic patterns for the different time periods. However, since the computational time is quite small (on the order of a few minutes), the time period length can be decreased and the granularity made finer, so as to better follow unpredictable traffic variations.
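As a concrete illustration of this scheduling cadence, the following minimal Python sketch shows how a central decision point might collect per-period forecasts once a day before invoking the joint optimization; all names and constant values are hypothetical placeholders, not the authors' implementation, and the optimization model itself is sketched after the next paragraph.

```python
# Day-ahead planning loop sketch; names and forecast values are hypothetical.
from typing import Dict, List

SCS = ["us-east", "eu-west", "ap-south"]  # hypothetical service-center sites

def forecast_traffic(period: int) -> Dict[str, float]:
    # Placeholder: expected request volume per region in the given period.
    return {s: 500.0 + 100.0 * period for s in SCS}

def forecast_green(period: int) -> Dict[str, float]:
    # Placeholder: green energy (kWh) expected to be available at each SC.
    return {s: 25.0 for s in SCS}

def plan_day(period_hours: int = 6) -> List[dict]:
    """Run once a day at the central decision point (one of the SCs).

    Since solve times are on the order of minutes, period_hours can be reduced
    to track traffic variations at a finer granularity.
    """
    periods = range(24 // period_hours)
    inputs = [{"t": t, "traffic": forecast_traffic(t), "green": forecast_green(t)}
              for t in periods]
    # These per-period inputs would feed the joint MILP (see the sketch after the
    # next paragraph), which decides load allocation and device activation.
    return inputs

if __name__ == "__main__":
    for row in plan_day():
        print(row)
```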

The proposed approach is based on a Mixed Integer Linear Programming (MILP) model which is solved to optimality with a state-of-the-art solver. The model assumes a Cloud service provider adopting the PaaS (Platform as a Service) approach and optimizes the load allocation to a set of geographically distributed Service Centers (SCs), where virtual machines (VMs) are assigned to physical servers in order to serve requests belonging to different classes. The goal is to minimize the total energy cost, considering the time-varying nature of energy costs and the availability of green energy at different locations. The traffic can be routed to SCs over a geographical network whose capacity constraints and energy consumption are accounted for. We formally prove that the problem is NP-hard since it is equivalent to a Set Covering Problem.
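To make the structure of such a formulation concrete, here is a minimal sketch in Python with PuLP. It is not the authors' model: the service centers, demands, capacities, energy prices, and green-energy availabilities are hypothetical, all request classes are collapsed into one, the VM-to-server assignment is abstracted into an aggregate per-SC capacity, and the network is omitted (a possible network term is sketched further below).

```python
import pulp

SCS = ["us-east", "eu-west", "ap-south"]                      # hypothetical service centers
T = range(4)                                                   # time periods (e.g., 6-hour slots)
demand = {0: 900, 1: 1500, 2: 1200, 3: 700}                    # load to serve in each period
cap = {"us-east": 1000, "eu-west": 800, "ap-south": 600}       # max load per SC per period
price = {}                                                     # brown energy price per kWh
for s, p in [("us-east", [0.09, 0.12, 0.11, 0.08]),
             ("eu-west", [0.14, 0.10, 0.13, 0.12]),
             ("ap-south", [0.07, 0.08, 0.10, 0.09])]:
    for t in T:
        price[s, t] = p[t]
green = {(s, t): g for s, g in [("us-east", 20.0), ("eu-west", 35.0), ("ap-south", 10.0)]
         for t in T}                                           # free green energy (kWh) per period
kwh_per_unit = 0.05                                            # energy per unit of served load
idle_kwh = 15.0                                                # fixed energy of an active SC

prob = pulp.LpProblem("joint_energy_management", pulp.LpMinimize)
x = pulp.LpVariable.dicts("load", (SCS, T), lowBound=0)        # load assigned to SC s in period t
on = pulp.LpVariable.dicts("active", (SCS, T), cat="Binary")   # SC active (not in low-power mode)
brown = pulp.LpVariable.dicts("brown_kwh", (SCS, T), lowBound=0)

# Objective: pay only for brown energy, whose price varies per SC and per period.
prob += pulp.lpSum(price[s, t] * brown[s][t] for s in SCS for t in T)

for t in T:
    prob += pulp.lpSum(x[s][t] for s in SCS) == demand[t]      # all load must be served
    for s in SCS:
        prob += x[s][t] <= cap[s] * on[s][t]                   # capacity / low-power mode coupling
        # Energy balance: consumption covered by free green energy plus purchased brown energy.
        prob += kwh_per_unit * x[s][t] + idle_kwh * on[s][t] <= green[s, t] + brown[s][t]

prob.solve()
print(pulp.LpStatus[prob.status])
for t in T:
    print(t, {s: round(x[s][t].value(), 1) for s in SCS})
```

With these (hypothetical) data, minimizing the brown-energy bill naturally pushes load toward the SCs offering the cheapest electricity or the largest green surplus in each period, and switches under-utilized SCs to the low-power state.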

We present a set of numerical results on a realistic case study that considers the actual worldwide geographical distribution of the SCs of a Cloud provider and the variations of energy cost and green energy availability across world regions. Moreover, we report results characterizing the scalability of the proposed optimization framework and its sensitivity to workload and green-energy prediction errors.

Although a large literature exists on improving the energy efficiency of service centers (see, e.g., [5]) and communication networks (see, e.g., [6]), very few studies have so far considered cooperation between Cloud and network providers for joint energy management and, to the best of our knowledge, none has proposed a joint optimization framework able to exploit low-energy modes in both physical servers and networking devices (see Section 5 for a detailed analysis of related work). From a practical point of view, we assume that the economic advantages coming from the energy savings can be managed through service agreements between Cloud and network operators, taking into account the contributions of the different system components as quantified by the proposed model.

The paper is organized as follows. Section 2 describes the integrated approach and the problem addressed. Section 3 describes the proposed MILP models. Section 4 reports on the experimental tests and the obtained results. Section 5 overviews other literature approaches. Conclusions are finally drawn in Section 6.

Section snippets

The integrated framework for sustainable Clouds

In this work we consider a PaaS provider operating a virtualized service infrastructure comprising multiple Service Centers (SCs) distributed over multiple physical sites. This scenario is common nowadays. Indeed, Cloud providers own multiple geographically distributed SCs, each including thousands of physical servers. For example, Amazon offers EC2 services in nine worldwide regions (located in USA, South America, Europe, Australia, and Asia) and each region is further dispersed in different

An optimization approach for energy management

The load management problem is formulated as a Mixed Integer Linear Programming (MILP) optimization model which takes into account many important features, such as energy consumption, bandwidth and capacity constraints, and green energy generation. For the sake of clarity, we first introduce the approximated network representation.
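As an illustration only, and under the strong assumption of a single aggregate access link per SC (which is not necessarily the paper's approximated network representation), the PuLP sketch given in the Introduction could be extended with bandwidth and network-energy terms along these lines:

```python
# Illustrative extension of the earlier PuLP sketch: an aggregate per-SC access-link
# bandwidth limit and a per-Gbit network energy term. The single-link abstraction and
# all coefficient names are assumptions, not the paper's approximated network model.

def add_network_terms(prob, x, on, brown, green, SCS, T,
                      link_gbit, gbit_per_unit, net_kwh_per_gbit,
                      kwh_per_unit, idle_kwh):
    for s in SCS:
        for t in T:
            # Traffic routed to an SC cannot exceed its (aggregate) access-link capacity.
            prob += gbit_per_unit * x[s][t] <= link_gbit[s]
            # Tighter energy balance: SC energy plus network energy must be covered
            # by free green energy and purchased brown energy.
            prob += (kwh_per_unit * x[s][t] + idle_kwh * on[s][t]
                     + net_kwh_per_gbit * gbit_per_unit * x[s][t]
                     <= green[s, t] + brown[s][t])
    return prob
```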

Experimental analysis

Our resource management model has been evaluated under a variety of system and workload configurations. Section 4.1 presents the settings for our experiments. Section 4.2 reports a comparison between the two proposed MILP models. Section 4.3 reports the scalability results achieved using a commercial MILP solver. Section 4.4 presents a cost-benefit evaluation of our solution, which is compared to the case where resource allocation is performed locally and independently at each SC

Related work

As discussed before, for many years the energy management of Cloud systems has been studied by considering the service centers and the network components separately.

From the service center side, three main approaches have been developed: (i) control theoretic feedback loop techniques, (ii) adaptive machine learning approaches, and (iii) utility-based optimization techniques.

A main advantage of control-theoretic feedback loops is that they provide system stability guarantees. Upon workload changes, these techniques can

Conclusions and open issues

In this paper we propose a new optimization framework for managing energy usage in an integrated system for Cloud services that includes both service centers and the communication networks for accessing and interconnecting them. The optimization framework considers a PaaS scenario where the VMs serving an application can be allocated to a set of geographically distributed SCs and the traffic load coming from different world regions can be assigned to VMs in order to optimize the energy cost

Acknowledgments

The authors wish to thank the master’s students of Politecnico di Milano Ahmad Allam, Riccardo Chiodaroli, Francesco Lunetta, Stefano Viganò, and Stefano Ziller for their help in obtaining the numerical results.

References (87)

  • Amazon Inc. Amazon Elastic Cloud....
  • X. Zhu et al.

    1000 islands: an integrated approach to resource management for virtualized data centers

    J. Clust. Comput.

    (2009)
  • S. Casolari et al.

    Short-term prediction models for server management in internet-based contexts

    Decis. Support Syst.

    (2009)
  • Bernardetta Addis, Danilo Ardagna, Barbara Panicucci, Mark Squillante, Li Zhang, A hierarchical approach for the...
  • Robert Birke, Lydia Y. Chen, Evgenia Smirni, Data centers in the cloud: a large scale performance study, in: IEEE...
  • Minghong Lin, A. Wierman, L.L.H. Andrew, E. Thereska, Dynamic right-sizing for power-proportional data centers, in:...
  • Lei Rao, Xue Liu, Le Xie, Wenyu Liu, Minimizing electricity cost: optimization of distributed internet data centers in...
  • Patrick Wendell et al.

    Donar: decentralized server selection for cloud services

    SIGCOMM Comput. Commun. Rev.

    (2010)
  • J. Rolia, L. Cherkasova, C. McCarthy, Configuring workload manager control parameters for resource pools, in: 10th IEEE...
  • C. Tang, M. Steinder, M. Spreitzer, G. Pacifici, A scalable application placement controller for enterprise data...
  • D. Carrera et al.

    Autonomic placement of mixed batch and transactional workloads

    IEEE Trans. Parall. Distrib. Syst.

    (2012)
  • Claudia Canali, Riccardo Lancellotti, Automated clustering of vms for scalable cloud monitoring and management, in:...
  • Claudia Canali et al.

    Automated clustering of virtual machines based on correlation of resource usage

    J. Commun. Softw. Syst. (JCOMSS)

    (2013)
  • G. Scott Graham et al.

    Quant. Syst. Perform., Comput. Syst. Anal. Queueing Netw. Models

    (1984)
  • Baris Aksanli, Jagannathan Venkatesh, Liuyi Zhang, Tajana Rosing, Utilizing green energy prediction to schedule mixed...
  • Zhenhua Liu, Yuan Chen, Cullen Bash, Adam Wierman, Daniel Gmach, Zhikui Wang, Manish Marwah, Chris Hyser, Renewable and...
  • Ulrich Focken, Matthias Lange, Hans-Peter Waldl, Previento – a wind power prediction system with an innovative...
  • A. Wolke, G. Meixner, Twospot: a cloud platform for scaling out web applications dynamically, in: ServiceWave,...
  • Amazon Inc., AWS Elastic Beanstalk....
  • L.A. Barroso et al.

    The datacenter as a computer: an introduction to the design of warehouse-scale machines

    Synthesis Lect. Comput. Archit.

    (2009)
  • D. Kusic, J.O. Kephart, N. Kandasamy, G. Jiang, Power and performance management of virtualized computing environments...
  • D. Ardagna et al.

    Energy-aware autonomic resource allocation in multitier virtualized environments

    IEEE Trans. Serv. Comput.

    (2012)
  • Miniwatts Marketing Group, Internet World Statistics, 2011....
  • Microsoft Corporation, Microsoft Windows Azure Virtual Machines....
  • SPEC, SPECWeb 2009, 2009....
  • Andrew J. Younge, Gregor von Laszewski, Lizhe Wang, Sonia Lopez-Alarcon, Warren Carithers, Efficient resource...
  • SPEC, SPECvirt_sc2010....
  • Juniper Networks, E120 and E320 Hardware Guide, March...
  • China Energy Intelligence and Communication Forum....
  • Google, Google’s Green PPAs: What, How, and Why, Revision 2, April...
  • Google, Google Green – The Big Picture....
  • National Renewable Energy Laboratory, Geothermal Resource of the United States, October 2009....
  • National Renewable Energy Laboratory, Photovoltaic Solar Resource of the United States, October 2008....

    Bernardetta Addis is Assistant Professor at Laboratoire Lorrain de Recherche en Informatique et ses Applications and Ecole des Mines de Nancy. She received the M.S. and Ph.D. degrees in computer science engineering from the University of Florence in 2001 and 2005, respectively. Her expertise is on optimization with particular reference to nonlinear global optimization and to heuristics and exact methods for integer optimization.

    Danilo Ardagna is Assistant Professor at the Dipartimento di Elettronica Informazione e Bioingegneria, at Politecnico di Milano. He received his Masters degree and Ph.D. in Computer Engineering from Politecnico di Milano. His work focuses on the design, prototyping and evaluation of optimization algorithms for resource management and planning of self-managing and Cloud computing systems. He has also been the principal investigator of the GAME-IT project (Green Active Management of Energy in IT systems), founded by Politecnico di Milano and he is the Secretary Treasurer and an Information Officer of the IEEE Special Technical Community on Sustainable Computing.

    Antonio Capone is Full Professor at the ICT Department (DEIB) of Politecnico di Milano (Technical University), where he is the director of the Advanced Network Technologies Laboratory (ANTLab). His expertise is on networking and his main research activities include protocol design (MAC and routing) and performance evaluation of wireless access and multi-hop networks, traffic management and quality of service issues in IP networks, and network planning and optimization. On these topics he has published more than 200 peer-reviewed papers in international journals and conference proceedings. He received the M.S. and Ph.D. degrees in electrical engineering from the Politecnico di Milano in 1994 and 1998, respectively. In 2000 he was visiting professor at UCLA, Computer Science department. He currently serves as editor of ACM/IEEE Trans. on Networking, Wireless Communications and Mobile Computing (Wiley), Computer Networks (Elsevier), and Computer Communications (Elsevier). He serves often in the TPC of the major conferences of the networking research community and in their organizing committees. He is a Senior Member of the IEEE.

    Giuliana Carello is Assistant Professor at the Dipartimento di Elettronica Informazione e Bioingegneria, at Politecnico di Milano. She received her Masters degree and Ph.D. from Politecnico di Torino. Her research work interests are exact and heuristic optimization algorithms, applied to integer and binary variable problems. Her research is mainly devoted to real life application: she focused on telecommunication network design problems. Besides the hub location problems, she worked on two layer wired network design problems and design and resource allocation problems in wireless networks. She published in international journals.
