
2022 | Book

Computing at the EDGE

New Challenges for Service Provision


About this book

This book describes solutions to the problems of energy efficiency, resiliency and cyber security in the domain of Edge Computing and reports on early deployments of the technology in commercial settings. It takes a business-focused view, relating the technological outcomes to new business opportunities made possible by the edge paradigm. Drawing on the experience of end users deploying prototype edge technology, the authors discuss applications in financial management, wireless management, and social networks. Coverage includes a chapter on the analysis of total cost of ownership, enabling readers to calculate the efficiency gain from using the technology in their own business.

Provides a single-source reference to the state of the art of edge computing;
Describes how researchers across the world are addressing challenges relating to power efficiency, ease of programming and emerging cyber security threats in this domain;
Discusses total cost of ownership for applications in financial management and social networks;
Discusses security challenges in wireless management.

Table of contents

Frontmatter
Introduction
Abstract
The shape of the Internet has changed significantly in recent years with the advent of mobile devices and increasingly powerful fixed devices deployed at the edge of the network. Each intelligent device pushes a small amount of data to the Internet, and these small amounts, multiplied by billions of devices, aggregate to become big data. The traditional infrastructure, based on the cloud model, cannot scale to handle the new demands of the Internet of Things (IoT). In particular, energy efficiency is the key driver in this new era. This chapter summarizes the development of the Internet and discusses the new challenges that it faces. The UniServer project, funded by the European Commission under its Horizon Programme, carried out research in the field from 2016 to 2019, publishing a number of significant papers in the area. The chapter explains the basis of the UniServer architecture and points to the chapters that follow, which incorporate the significant details of the research.
Charles J. Gillan, George Karakonstantis
Challenges on Unveiling Voltage Margins from the Node to the Datacentre Level
Abstract
In this chapter, we present and discuss one of the most important aspects of technology scaling: the improvement of the power consumption of microprocessors. First, we present the currently established techniques, which either unveil the pessimistic voltage margins of modern microprocessors or propose mitigation measures that make microprocessors more tolerant to low-voltage conditions. Unveiling the potential power savings by characterizing the pessimistic voltage margins of microprocessor chips is a very challenging and time-consuming process, which requires taking several important aspects into consideration. To this end, we first present an approach for an automated characterization framework, and then we describe in detail a comprehensive approach for fast and accurate system-level voltage-margin characterization, which substantially reduces the time required for an extensive characterization.
George Papadimitriou, Dimitris Gizopoulos
Harnessing Voltage Margins for Balanced Energy and Performance
Abstract
During chip fabrication, process variations can affect transistor dimensions (length, width, oxide thickness, etc. [1]), which have a direct impact on the threshold voltage of a MOS device [2]. As technology scales, the percentage of these variations relative to overall transistor size increases and raises major concerns for designers, who aim to improve energy efficiency. Variation introduced during fabrication is known as static variation and remains constant over the chip's lifetime. On top of that, transistor aging and dynamic variation in supply voltage and temperature, caused by different workload interactions, are also of primary importance. Both static and dynamic variations lead microprocessor architects to apply conservative guard bands (operating voltage and frequency settings) to avoid timing failures and guarantee correct operation, even in the worst-case conditions excited by unknown workloads or the operating environment [3, 4]. However, these guard bands increase power consumption. To bridge the gap between energy efficiency and performance improvements, several hardware and software techniques have been proposed, such as Dynamic Voltage and Frequency Scaling (DVFS) [5]. The premise of DVFS is that the microprocessor's workloads, as well as the cores' activity, vary over time. Scaling voltage and frequency down during epochs where peak performance is not required enables a DVFS-capable system to achieve average energy-efficiency gains without adversely affecting peak performance. However, these energy-efficiency gains are limited by the pessimistic guard bands.
George Papadimitriou, Dimitris Gizopoulos
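The DVFS premise described in this abstract can be sketched in a few lines. The P-state table, switched-capacitance constant and utilization threshold below are hypothetical illustrations, not values from the book; the sketch only shows how dropping to a lower voltage/frequency pair during low-utilization epochs saves dynamic power under the classic CMOS model P = C·V²·f.

```python
# Illustrative DVFS sketch: dynamic power scales roughly as C * V^2 * f,
# so running slower and at lower voltage during idle epochs saves energy.
# The P-state table below is hypothetical, not taken from the book.

P_STATES = [  # (frequency_GHz, voltage_V), highest performance first
    (3.0, 1.10),
    (2.0, 0.95),
    (1.0, 0.80),
]
CAPACITANCE = 1.0  # arbitrary switched-capacitance constant

def dynamic_power(freq_ghz, volt):
    """Classic CMOS dynamic-power model: P = C * V^2 * f."""
    return CAPACITANCE * volt ** 2 * freq_ghz

def select_pstate(utilization):
    """Pick the slowest P-state whose frequency still covers the demand."""
    demand = utilization * P_STATES[0][0]  # GHz of work required
    for freq, volt in reversed(P_STATES):  # try the slowest settings first
        if freq >= demand:
            return freq, volt
    return P_STATES[0]

# At 30% utilization the governor can drop to the 1.0 GHz / 0.80 V state:
freq, volt = select_pstate(0.30)
saving = 1 - dynamic_power(freq, volt) / dynamic_power(*P_STATES[0])
print(f"{freq} GHz @ {volt} V, dynamic-power saving: {saving:.0%}")
```

As the abstract notes, the guard bands baked into each voltage/frequency pair are conservative, so the real savings ceiling is higher than any such table suggests.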
Exploiting Reduced Voltage Margins: From Node- to the Datacenter-level
Abstract
In the past decade, the costs of power and cooling have doubled for data centers. A key aspect of designing a data center, therefore, is to optimize these costs. CPUs account for up to 60% of the energy used by compute nodes. Increasingly, modern data centers and high-performance computing (HPC) systems need to operate under a tight power budget. Other chapters in this book discuss the existence of voltage margins in modern hardware and techniques for revealing and quantifying them on CPUs. This chapter focuses on whether it is possible, practical and profitable to exploit these margins, using Intel Xeon E3 Skylake CPUs as a case study. Both the node level and the data center level are considered, including the practicality of operation at reduced CPU voltage margins for cloud infrastructure providers, from a profit-maximization perspective.
Panos Koutsovasilis, Christos Kalogirou, Konstantinos Parasyris, Christos D. Antonopoulos, Nikolaos Bellas, Spyros Lalis
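The profit-maximization trade-off the abstract mentions can be sketched as a simple expected-value calculation: a deeper undervolt saves more energy but raises the crash rate, and recovery time eats into revenue. All numbers below are hypothetical, not measurements from the book's Xeon E3 Skylake study.

```python
# Illustrative sketch of the profitability trade-off when undervolting a
# node: energy cost falls with the undervolt, but expected downtime from
# crashes reduces revenue. All constants are hypothetical.

CANDIDATES = [  # (undervolt_mV, power_saving_frac, expected_crashes_per_day)
    (0,   0.00, 0.0),
    (50,  0.08, 0.02),
    (100, 0.15, 0.2),
    (150, 0.22, 3.0),
]

DAILY_REVENUE = 10.0      # $ per node per day at full availability
DAILY_ENERGY_COST = 3.0   # $ of energy per node per day at nominal voltage
RECOVERY_FRACTION = 0.05  # fraction of a day lost recovering from one crash

def expected_daily_profit(saving, crashes_per_day):
    availability = 1 - crashes_per_day * RECOVERY_FRACTION
    return DAILY_REVENUE * availability - DAILY_ENERGY_COST * (1 - saving)

best = max(CANDIDATES, key=lambda c: expected_daily_profit(c[1], c[2]))
print(f"most profitable setting: -{best[0]} mV")
```

With these made-up numbers the optimum lies at an intermediate undervolt, illustrating why the chapter treats margin exploitation as an optimization problem rather than an all-or-nothing choice.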
Improving DRAM Energy-efficiency
Abstract
The rapid growth of the IoT has triggered exponential growth in the data transferred to and from Cloud and emerging Edge servers. As a result, recent projections forecast that the DRAM subsystem will soon be responsible for more than 40% of the overall power consumption within most servers [1]. One of the reasons for the high energy consumed by DRAM devices is the use of pessimistic DRAM operating parameters, such as voltage, refresh rate and timing parameters, set by the vendors. Vendors use these parameters to handle possible failures induced by charge leakage and cell-to-cell interference. Moreover, such failures prevent further scaling of DRAM cell size. This reality has led researchers to question whether such pessimistic parameters can be relaxed and whether the induced failures can be handled at the hardware or software level. In this chapter, we discuss the challenges related to DRAM reliability and present a systematic study on exceeding the conservative DRAM operating margins to improve the energy efficiency of Edge servers. We demonstrate a machine-learning-based technique that enables us to scale down the DRAM operating parameters, together with the hardware/software stack that handles all the induced failures.
Lev Mukhanov, Georgios Karakonstantis
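The idea of relaxing pessimistic DRAM parameters can be illustrated with the refresh interval: refreshing less often cuts refresh energy but raises the retention-failure rate, so the relaxed setting must stay within what the error-handling stack can correct. The settings table and failure rates below are hypothetical, not measurements from the chapter.

```python
# Illustrative sketch of relaxing the DRAM refresh interval: a longer
# interval cuts refresh energy but increases retention failures.
# All numbers below are hypothetical, not data from the book.

SETTINGS = [  # (refresh_interval_ms, relative_refresh_energy, est_failures_per_gbit)
    (64,  1.000, 0.0),   # vendor-nominal interval, worst-case safe
    (128, 0.500, 1e-7),
    (256, 0.250, 5e-5),
    (512, 0.125, 2e-2),
]

def pick_interval(correctable_rate):
    """Choose the longest refresh interval whose predicted failure rate
    the ECC / software error-handling stack can still correct."""
    best = SETTINGS[0]
    for setting in SETTINGS:
        if setting[2] <= correctable_rate:
            best = setting
    return best

interval, energy, failures = pick_interval(1e-4)
print(f"refresh every {interval} ms, refresh energy x{energy}")
```

In the chapter's actual technique the safe operating point is predicted per device by a machine-learning model rather than read from a fixed table like this one.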
Total Cost of Ownership Perspective of Cloud vs Edge Deployments of IoT Applications
Abstract
The number of intelligent Internet-connected devices is growing rapidly and will soon be in the order of tens of billions, forming the Internet of Things (IoT). Together, these devices push data volumes to the Internet that are soon expected to reach tens of exabytes. Such data growth is expected to put unprecedented pressure on the current Internet infrastructure and on centralized (Cloud) data centers (DCs). In order to deal successfully with this imminent data flood, it is imperative to enhance the processing capabilities of current servers. Redesigning data communication and processing across the Internet is equally important. Additionally, a new paradigm has emerged which makes Cloud services available at the Edge. One key ramification of these developments is an increase in the cost of both Cloud and Edge DCs.
The main goal of this chapter is to improve the Total Cost of Ownership (TCO) of Edge and Cloud deployments by discovering the capability of the underlying hardware components to function beyond their nominal operating points. By taking advantage of the extended operating margins inherent to processors and memories, it is possible to improve the power efficiency of servers running in the Cloud or at the Edge. Successful exploitation of these margins leads to significant cost savings. Thus, in this chapter we present an end-to-end TCO tool and use it to evaluate two IoT applications. To arrive at an end-to-end TCO evaluation, we first determined the requirements and parameters of each of the two applications, as well as the metrics of success for each. For instance, to evaluate the Polaris application we use TCO under various constraints, and for Social CRM we use TCO over the clients' degree-of-satisfaction metric.
Panagiota Nikolaou, Yiannakis Sazeides, Alejandro Lampropulos, Denis Guilhot, Andrea Bartoli, George Papadimitriou, Athanasios Chatzidimitriou, Dimitris Gizopoulos, Konstantinos Tovletoglou, Lev Mukhanov, Georgios Karakonstantis, Marios Kleanthous, Arnau Prat
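The core TCO idea can be sketched as amortized capital cost plus energy cost per year, which makes the saving from operating beyond nominal points directly visible. The server price, power draws, lifetime and electricity price below are hypothetical, not inputs from the book's End-to-End TCO tool.

```python
# Minimal TCO sketch: amortized CAPEX plus yearly energy OPEX per server.
# Prices, power draws and the 15% power saving are hypothetical.

def annual_tco(capex, lifetime_years, avg_power_w, price_per_kwh):
    """Amortized capital cost + yearly energy cost for one server."""
    energy_kwh = avg_power_w * 24 * 365 / 1000  # kWh consumed per year
    return capex / lifetime_years + energy_kwh * price_per_kwh

nominal = annual_tco(capex=5000, lifetime_years=4, avg_power_w=300,
                     price_per_kwh=0.15)
# Operating beyond nominal points (e.g. at reduced voltage margins)
# lowers the average power draw; here an assumed 15% saving:
relaxed = annual_tco(capex=5000, lifetime_years=4, avg_power_w=255,
                     price_per_kwh=0.15)
print(f"saving per server per year: ${nominal - relaxed:.2f}")
```

A full TCO model of the kind the chapter describes also accounts for cooling, infrastructure, networking and failure-handling costs, each of which scales differently between Cloud and Edge deployments.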
Software Engineering for Edge Computing
Abstract
Edge computing has recently been introduced by both industry and academia to meet the need for a computing paradigm close to mobile devices. Edge computing bridges the gap between the cloud and mobile devices by enabling computing, storage, networking, and data management in edge nodes in the close vicinity of end users' devices. While there are various surveys of edge computing in the literature, what is currently missing is a description of the software-engineering aspects of the applications that are built and deployed via the edge. The contribution of the current chapter is twofold. We first highlight the software-engineering aspects of current edge-computing approaches. In particular, we specify the core concepts of the general-purpose software-engineering process, the multi-tier architecture of edge infrastructure, and how software applications are deployed to such an infrastructure. Secondly, we abstract a software-engineering process suitable for edge computing and outline the research challenges in this process.
Dionysis Athanasopoulos
Overcoming Wi-Fi Jamming and Other Security Challenges at the Edge
Abstract
Wi-Fi technology is seen as a commodity today because it is so widely used. Given that Wi-Fi involves transmitting electromagnetic waves through the air, it is susceptible to attack vectors, at both the physical and data-link layers, that do not exist in fixed-line networks such as Ethernet. Signal jamming impacts the physical layer and attacks the digital signal processing of the modulated electromagnetic waves at the receiving station. Control and management frames exist at the media access control (MAC) sub-layer of the data-link layer. Spoofing of these frames is possible using even single-board computers, as long as they have Wi-Fi chips that can function as access points. A particular sub-class of attack targets the leakage of information from a system; this kind of attack is primarily concerned with the discovery of secret information, such as encryption keys, that underpins modern cryptographic processing. This chapter explains the underlying principles of these aspects of Wi-Fi technology and suggests, where possible, defences against such attacks.
Charles J. Gillan, Denis Guilhot
Backmatter
Metadata
Title
Computing at the EDGE
Edited by
Georgios Karakonstantis
Charles J. Gillan
Copyright year
2022
Electronic ISBN
978-3-030-74536-3
Print ISBN
978-3-030-74535-6
DOI
https://doi.org/10.1007/978-3-030-74536-3
