2020 | Book

Convergence of Artificial Intelligence and the Internet of Things

Editors: Dr. George Mastorakis, Prof. Dr. Constandinos X. Mavromoustakis, Dr. Jordi Mongay Batalla, Prof. Evangelos Pallis

Publisher: Springer International Publishing

Book Series: Internet of Things

About this book

This book gathers recent research work on emerging Artificial Intelligence (AI) methods for processing and storing data generated by cloud-based Internet of Things (IoT) infrastructures. Major topics covered include the analysis and development of AI-powered mechanisms in future IoT applications and architectures. Further, the book addresses new technological developments, current research trends, and industry needs. Presenting case studies, experience and evaluation reports, and best practices in utilizing AI applications in IoT networks, it strikes a good balance between theoretical and practical issues. It also provides technical/scientific information on various aspects of AI technologies, ranging from basic concepts to research-grade material, including future directions.

The book is intended for researchers, practitioners, engineers and scientists involved in the design and development of protocols and AI applications for IoT-related devices. As the book covers a wide range of mobile applications and scenarios where IoT technologies can be applied, it also offers an essential introduction to the field.

Table of Contents

Frontmatter
Fog Computing: Data Analytics for Time-Sensitive Applications
Abstract
Fog computing was introduced to reduce communication delays between users and cloud systems. The idea of fog computing is to let users interact with intermediate servers while reaping the benefits of reliability and elasticity, which are inherent in cloud computing. Fog computing can leverage the Internet of Things (IoT) by providing a reliable service layer for time-sensitive applications and real-time analytics. While the concept of fog computing is still evolving, it is pertinent to study the domain and analyze its strengths and weaknesses. Motivated by this need, this chapter describes the architecture of fog computing and explains its efficacy with respect to different applications. The chapter highlights some of the key challenges associated with this evolving platform, along with future directions for research.
Jawwad A. Shamsi, Muhammad Hanif, Sherali Zeadally
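As a rough illustration of the latency argument above, the following minimal Python sketch places a time-sensitive request on a nearby fog node and a delay-tolerant one on the cloud. The round-trip figures and the simple budget rule are illustrative assumptions, not the chapter's design.

```python
# A minimal sketch (assumed figures) of the scheduling idea behind fog
# computing: use a nearby fog node when the application's latency budget
# cannot tolerate the cloud round trip.
FOG_RTT_MS, CLOUD_RTT_MS = 10, 120   # illustrative round-trip times

def place_request(latency_budget_ms):
    """Choose the execution tier for a time-sensitive analytics request."""
    return "fog" if latency_budget_ms < CLOUD_RTT_MS else "cloud"

print(place_request(50))    # real-time analytics -> fog
print(place_request(5000))  # batch analytics -> cloud
```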
Medical Image Watermarking in Four Levels Decomposition of DWT Using Multiple Wavelets in IoT Emergence
Abstract
Medical images are an integral part of evaluating a person's medical condition in real time, and this process becomes more efficient by exploiting the characteristics that IoT offers to the healthcare sector. However, medical images are exposed to major risks through frequent attacks, which may mislead the physician in diagnosing a disease. Eradicating piracy remains a major challenge for present-day IoT platforms. Consequently, medical image watermarking is a significant technique for protecting the clinical information contained in medical images. In this regard, this chapter shows how digital watermarking can be used to embed health information invisibly in a medical image. The performance of the proposed method is evaluated using MSE, PSNR, SSIM, and NC. The watermarking is performed over four levels of the Discrete Wavelet Transform (DWT), with a different wavelet family used at each level; the families employed are the biorthogonal, reverse biorthogonal, discrete Meyer, symlet, and coiflet wavelets. The proposed technique is highly robust against numerous types of attacks, and the results indicate that it offers a higher level of protection than other current schemes and algorithms.
Tamara K. Al-Shayea, Constandinos X. Mavromoustakis, Jordi Mongay Batalla, George Mastorakis, Evangelos Pallis, Evangelos K. Markakis, Spyros Panagiotakis, Imran Khan
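The following is a minimal Python sketch of the kind of scheme the abstract describes: a four-level DWT with a different wavelet family per level (using PyWavelets) and additive embedding in the deepest approximation band, with PSNR as an imperceptibility check. The embedding strength, the wavelet assigned to each level, and the random test image are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal sketch of 4-level DWT watermark embedding, one wavelet family
# per level. Requires: pip install numpy pywavelets
import numpy as np
import pywt

WAVELETS = ["bior1.3", "rbio1.3", "dmey", "sym4"]  # assumed per-level choice
ALPHA = 0.05                                       # assumed embedding strength

def embed(image, watermark):
    """Additively embed a watermark in the level-4 approximation band."""
    approx, details = image.astype(float), []
    for w in WAVELETS:                       # 4-level decomposition
        approx, d = pywt.dwt2(approx, w)
        details.append(d)
    approx += ALPHA * watermark[: approx.shape[0], : approx.shape[1]]
    for w, d in zip(reversed(WAVELETS), reversed(details)):
        approx = approx[: d[0].shape[0], : d[0].shape[1]]  # drop edge padding
        approx = pywt.idwt2((approx, d), w)  # reconstruct one level
    return approx

rng = np.random.default_rng(0)
img = rng.random((256, 256))                 # stand-in for a medical image
wm = rng.integers(0, 2, size=(256, 256))     # binary watermark, cropped to fit
marked = embed(img, wm)[:256, :256]

mse = np.mean((img - marked) ** 2)
psnr = 10 * np.log10(1.0 / mse)              # peak value 1.0 for [0,1] images
print(f"imperceptibility: PSNR = {psnr:.1f} dB")
```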
Optimised Statistical Model Updates in Distributed Intelligence Environments
Abstract
This chapter explores a sequential decision-making methodology for deciding when to update statistical learning models on Intelligent Edge Computing devices, given underlying changes in the contextual data distribution. The proposed model update scheduling takes into consideration the optimal decision time for minimizing network overhead while preserving the prediction accuracy of the models. The chapter reports on a comparison of the proposed approach with four other update-delaying policies found in the literature, an evaluation of performance using linear and support vector regression models over real contextual data streams, and a discussion of the strengths and weaknesses of the proposed policy.
Ekaterina Aleksandrova, Christos Anagnostopoulos
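A minimal sketch of an update-delaying policy of the kind compared in this chapter: the edge device keeps serving its current model and pays the network cost of an update only when the cumulative excess prediction error crosses a threshold. The threshold, the noise floor, and the synthetic drifting stream are assumptions for illustration; the chapter's own policy is an optimised sequential decision rule, not this simple baseline.

```python
# Threshold-based update delaying over a drifting data stream (toy setup).
import numpy as np

THETA = 5.0                      # assumed tolerated cumulative excess loss
rng = np.random.default_rng(1)

w = 0.0                          # current (possibly stale) model: y ~ w * x
cum_excess, updates, true_w = 0.0, 0, 1.0
for t in range(2000):
    if t == 1000:
        true_w = 3.0             # concept drift in the contextual stream
    x = rng.normal()
    y = true_w * x + 0.1 * rng.normal()
    loss = (y - w * x) ** 2
    baseline = 0.1 ** 2          # noise floor: loss of an up-to-date model
    cum_excess += max(0.0, loss - baseline)
    if cum_excess > THETA:       # decision: pay the network cost, update
        w = true_w               # stands in for retraining/fetching a model
        cum_excess = 0.0
        updates += 1
print(f"model updates sent over the stream: {updates}")
```

Raising THETA trades prediction accuracy for fewer (network-costly) updates, which is exactly the tension the proposed scheduling optimises.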
Intelligent Vehicular Networking Protocols
Abstract
This chapter introduces the Vehicular Ad Hoc Network (VANET), a variation of the Mobile Ad Hoc Network (MANET). In a VANET, the nodes are either vehicles or fixed roadside units, the latter constituting the VANET infrastructure. The chapter discusses the five categories of VANET routing protocols, with some of the best-known examples of each. Each of the mentioned protocols is given a brief overview of its design and implementation, and we also mention some of the protocols' benefits, drawbacks, and enhancements. With the emergence of the Internet of Things (IoT) in the last decade, vehicular networks have evolved into a subcategory known as the Internet of Vehicles (IoV), owing to the special characteristics of vehicles and road topology. The second section of this chapter therefore discusses the classification of routing protocols in IoV, the promising future of the vehicular networking world.
Grace Khayat, Constandinos X. Mavromoustakis, George Mastorakis, Hoda Maalouf, Jordi Mongay Batalla, Evangelos Pallis, Evangelos K. Markakis
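As one concrete example from the position-based category of VANET routing protocols surveyed here, the sketch below implements GPSR-style greedy geographic forwarding; the coordinates and radio neighbourhood are illustrative assumptions.

```python
# Greedy geographic forwarding: pick the neighbour closest to the
# destination, a staple of position-based VANET routing.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(current, neighbours, destination):
    """Pick the neighbour geographically closest to the destination, but
    only if it makes progress; otherwise report a local maximum (where
    real protocols fall back to perimeter/recovery routing)."""
    best = min(neighbours, key=lambda n: dist(n, destination), default=None)
    if best is None or dist(best, destination) >= dist(current, destination):
        return None   # local maximum: greedy forwarding is stuck
    return best

# vehicles within radio range of the sender, as (x, y) road coordinates
neighbours = [(120, 40), (90, 80), (60, 10)]
print(greedy_next_hop((50, 50), neighbours, destination=(200, 60)))
```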
Towards Ubiquitous Privacy Decision Support: Machine Prediction of Privacy Decisions in IoT
Abstract
We present a mechanism to predict the privacy decisions of users in Internet of Things (IoT) environments through data mining and machine learning techniques. To construct predictive models, we tested several different machine learning models, combinations of features, and model training strategies on human behavioral data collected from an experience-sampling study. Experimental results showed that a machine learning model called linear model and deep neural networks (LMDNN) outperforms conventional methods for predicting users' privacy decisions for various IoT services. We also found that a feature vector composed of both contextual parameters and privacy segment information provides LMDNN models with the best predictive performance. Lastly, we propose a novel approach called one-size-fits-segment modeling, which provides a common predictive model to a segment of users who share a similar notion of privacy. We confirmed that one-size-fits-segment modeling outperforms the previous approaches, namely individual and one-size-fits-all modeling. From a user perspective, our prediction mechanism takes the contextual factors embedded in IoT services into account and utilizes only a small amount of information polled from the users. It is therefore less burdensome and privacy-invasive than the other mechanisms. We also discuss practical implications of building predictive models that make privacy decisions on behalf of users in IoT.
Hosub Lee, Alfred Kobsa
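Since LMDNN combines a linear model with a deep neural network over a shared feature vector, a minimal wide-and-deep sketch in Keras conveys the structure; the feature count, layer sizes, and training setup are assumptions, not the authors' exact configuration.

```python
# Wide-and-deep binary classifier in the spirit of LMDNN: a linear part
# and a DNN part share one feature vector (contextual parameters plus
# privacy-segment encoding) and their logits are summed.
import tensorflow as tf

N_FEATURES = 32   # assumed size of the combined feature vector

inputs = tf.keras.Input(shape=(N_FEATURES,))
linear = tf.keras.layers.Dense(1)(inputs)               # "wide" linear part
deep = tf.keras.layers.Dense(64, activation="relu")(inputs)
deep = tf.keras.layers.Dense(32, activation="relu")(deep)
deep = tf.keras.layers.Dense(1)(deep)                   # "deep" DNN part
logit = tf.keras.layers.Add()([linear, deep])           # joint logit
output = tf.keras.layers.Activation("sigmoid")(logit)   # P(allow)

model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```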
Energy-Efficient Design of Data Center Spaces in the Era of IoT Exploiting the Concept of Digital Twins
Abstract
Research on the power management of server units and server rooms can ease the installation of a data center, yielding cost reductions and environmental benefits in its operation. Such research focuses either on restricting the power consumption of the data center's devices or on minimizing the energy exchange between the data center space as a whole and its environment. The work presented here focuses mainly on the latter. In particular, we attempt to estimate the effect that various interventions on the structural envelope of the building hosting the data center can have on the energy performance of the data room. The performance of the data center is evaluated in this work by measuring its PUE index. Our modeling methodology includes the selection of a proper simulation tool for creating the digital twin, that is, a digital clone, of our real data center, and then building its thermal model via energy measurements. In this work we exploit the EnergyPlus software for modelling and simulating the behavior of a data center space. The energy measurements were conducted over a long period, using a low-cost IoT infrastructure, in the data center of an educational building in Heraklion, Crete. This chapter describes the procedures and the results of this work.
Spyros Panagiotakis, Yannis Fandaoutsakis, Michael Vourkas, Kostas Vassilakis, Athanasios Malamos, Constandinos X. Mavromoustakis, George Mastorakis
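PUE itself is a simple ratio, total facility energy over IT equipment energy, so a building intervention that reduces cooling demand lowers it toward the ideal value of 1.0. A worked example with purely illustrative kWh figures:

```python
# PUE = total facility energy / energy delivered to IT equipment.
it_energy_kwh = 12_000        # servers, storage, network gear
cooling_kwh = 5_400           # HVAC of the data room
other_kwh = 600               # lighting, UPS losses, etc.

pue = (it_energy_kwh + cooling_kwh + other_kwh) / it_energy_kwh
print(f"PUE = {pue:.2f}")     # 1.50 here; an ideal facility approaches 1.0
```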
In-Network Machine Learning Predictive Analytics: A Swarm Intelligence Approach
Abstract
This chapter addresses the problem of collaborative Predictive Modelling via in-network processing of contextual information captured in Internet of Things (IoT) environments. In-network predictive modelling allows the computing and sensing devices to disseminate only their local predictive Machine Learning (ML) models instead of their local contextual data. The data center, which can be an Edge Gateway or the Cloud, aggregates these local ML predictive models to predict future outcomes. Given that communication between devices in IoT environments and a centralised data center is energy-consuming and bandwidth-demanding, the local ML predictive models in our proposed in-network processing are trained using Swarm Intelligence, so that only their parameters are disseminated within the network. We further investigate whether the dissemination overhead of local ML predictive models can be reduced by sending only relevant ML models to the data center. This is achieved by having each IoT node adopt the Particle Swarm Optimisation algorithm to locally train ML models; one representative IoT node then fuses the local ML models in collaboration with its network neighbours. We provide comprehensive experiments over Random and Small World network models, using linear and non-linear regression ML models, to demonstrate the impact on predictive accuracy and the benefit of communication-aware in-network predictive modelling in IoT environments.
Hristo Ivanov, Christos Anagnostopoulos, Kostas Kolomvatsos
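A minimal sketch of the core mechanism: a node fits a local linear regression with Particle Swarm Optimisation and then shares only the resulting parameter vector, not its raw contextual data. The swarm size, inertia, and acceleration coefficients are standard illustrative choices, not the chapter's tuned settings.

```python
# PSO training of a local linear model; only `gbest` leaves the node.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))                  # local contextual data
y = X @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=200)

def mse(w):
    return np.mean((y - X @ w) ** 2)

n_particles, dim = 20, 2
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("parameters disseminated to the edge gateway:", gbest)
```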
Machine Learning Techniques for Wireless-Powered Ambient Backscatter Communications: Enabling Intelligent IoT Networks in 6G Era
Abstract
Machine learning is a rapidly evolving paradigm that has the potential to bring intelligence and automation to low-powered devices. In parallel, there have been considerable improvements in ambient backscatter communications due to high research interest over the last few years. The combination of these two technologies is inevitable and will eventually pave the way for an intelligent Internet-of-things (IoT) in 6G wireless networks. Use cases for machine learning-enabled ambient backscatter networks range from healthcare networks and industrial automation to smart farming. Besides this, the combination would also help enable services like ultra-reliable and low-latency communications (uRLLC), massive machine-type communications (mMTC), and enhanced mobile broadband (eMBB). Machine learning techniques can also help backscatter communications overcome its limiting factors: data-driven machine learning does not require a tractable analytical model, as models can be trained to deal with channel imperfections and hardware flaws in backscatter communications. In particular, with the use of reinforcement learning approaches, the performance of backscatter devices can be further improved. Ongoing studies have likewise demonstrated that machine learning methodologies can be used to protect backscatter devices, enhancing their ability to handle security and privacy vulnerabilities. These propositions, alongside the ease of use of machine learning techniques, inspire us to investigate the feasibility of machine learning-based methodologies for backscatter communications. To do so, we start this chapter by discussing the basics and different flavors of machine learning: supervised learning, unsupervised learning, and reinforcement learning. We also shed light on deep learning models like artificial neural networks (ANN) and deep Q-learning, and discuss the hardware requirements of machine learning models. We then describe some of the potential uses of machine learning in ambient backscatter communications. In the subsequent sections, we provide a detailed analysis of reinforcement learning for wireless-powered ambient backscatter devices and give some insightful results along with relevant discussion. In the end, we present some concluding remarks and highlight some future research directions.
Furqan Jameel, Navuday Sharma, Muhammad Awais Khan, Imran Khan, Muhammad Mahtab Alam, George Mastorakis, Constandinos X. Mavromoustakis
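In the spirit of the reinforcement-learning analysis mentioned above, the sketch below trains a tabular Q-learning agent on a toy wireless-powered backscatter device that must choose between harvesting energy and backscattering data. The state space, rewards, and dynamics are entirely assumed for illustration.

```python
# Tabular Q-learning for a toy harvest-vs-backscatter decision.
import numpy as np

rng = np.random.default_rng(3)
N_ENERGY = 5                   # discretised battery levels
ACTIONS = ("harvest", "backscatter")
Q = np.zeros((N_ENERGY, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

state = 0
for step in range(20_000):
    a = rng.integers(2) if rng.random() < eps else int(Q[state].argmax())
    if a == 0:                                   # harvest ambient RF energy
        nxt, reward = min(state + 1, N_ENERGY - 1), 0.0
    elif state > 0:                              # backscatter: spend energy
        nxt, reward = state - 1, 1.0             # one unit of data delivered
    else:                                        # empty battery, nothing sent
        nxt, reward = 0, -1.0
    Q[state, a] += alpha * (reward + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

print("learned policy per energy level:",
      [ACTIONS[i] for i in Q.argmax(axis=1)])
```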
Processing Systems for Deep Learning Inference on Edge Devices
Abstract
Deep learning models are taking over many artificial intelligence tasks. These models achieve better results but need more computing power and memory; therefore, training and inference of deep learning models are performed at cloud centers on high-performance platforms. In many applications, it is more beneficial, or even required, to run inference at the edge, near the source of data or action requests, avoiding the need to transmit the data to a cloud service and wait for the answer. In many scenarios, transmission of data to the cloud is unreliable or even impossible, or has a high latency with uncertainty about the round-trip delay of the communication, which is not acceptable for latency-sensitive applications with real-time decisions. Other factors, like the security and privacy of data, force the data to stay at the edge. For all these reasons, inference is migrating partially or totally to the edge. The problem is that deep learning models are quite hungry in terms of computation, memory, and energy, resources that are scarce in today's edge computing devices. Therefore, artificial intelligence devices targeting edge computing are being deployed by different companies with different markets in mind. In this chapter we describe the current state of algorithms and models for deep learning and analyze the state of the art of computing devices and platforms for deploying deep learning at the edge. We describe the existing computing devices for deep learning and analyze them in terms of different metrics, such as performance, power, and flexibility. Different technologies are considered, including GPU, CPU, FPGA, and ASIC. We explain the trends in computing devices for deep learning at the edge, what has been researched, and what we should expect from future devices.
Mário Véstias
Power Domain Based Multiple Access for IoT Deployment: Two-Way Transmission Mode and Performance Analysis
Abstract
We consider Power Domain based Multiple Access (PDMA), one of the promising radio access techniques for two-way relaying networks and an effective technique to deploy in 5G communication networks. In this chapter, a cooperative PDMA two-way scheme with two-hop transmission is proposed to enhance the outage performance, taking into consideration how exactly successive interference cancellation (SIC) performs at each receiver. In the proposed scheme, the performance gap between the two NOMA users is examined, with the power allocation factors being the main impairment in this performance evaluation. To reveal the benefits of the proposed scheme, we choose a full-duplex relay to improve bandwidth efficiency. As an important result, the achieved outage probability is mathematically analyzed with imperfect SIC taken into account. Our examination shows that the proposed scheme significantly outperforms existing schemes in terms of achieving an acceptable, i.e., lower, outage probability.
Tu-Trinh Thi Nguyen, Dinh-Thuan Do, Imran Khan, George Mastorakis, Constandinos X. Mavromoustakis
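A minimal Monte Carlo sketch of this kind of outage analysis: two power-domain users over Rayleigh fading, with a residual-interference term modelling imperfect SIC at the near user. The power split, target rate, transmit SNR, and residual factor are illustrative assumptions, not the chapter's analytical setting.

```python
# Monte Carlo outage estimation for two NOMA users with imperfect SIC.
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
snr = 10 ** (20 / 10)          # 20 dB transmit SNR (assumed)
a_far, a_near = 0.8, 0.2       # power allocation factors, a_far + a_near = 1
R = 1.0                        # target rate in bits/s/Hz for both users
xi = 0.05                      # fraction of far-user power left after SIC

h_far = rng.exponential(size=N)    # |h|^2 under Rayleigh fading
h_near = rng.exponential(size=N)

# far user decodes its own signal, treating the near user's as noise
sinr_far = a_far * snr * h_far / (a_near * snr * h_far + 1)
# near user cancels the far user's signal imperfectly (residual xi)
sinr_near = a_near * snr * h_near / (xi * a_far * snr * h_near + 1)

out_far = np.mean(np.log2(1 + sinr_far) < R)
out_near = np.mean(np.log2(1 + sinr_near) < R)
print(f"outage: far user {out_far:.3f}, near user {out_near:.3f}")
```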
Big Data Thinning: Knowledge Discovery from Relevant Data
Abstract
Using statistical learning theory and machine learning techniques built around the principles of Rival Penalised Competitive Learning (RPCL), this chapter proposes a novel approach to aid Big Data Thinning, i.e., analysing only the potentially relevant data sub-spaces and not the entire extensive data space. Data scientists, data analysts, IoT applications and Edge-centric services are in need of predictive modelling and analytics. This is achieved by learning from past issued analytics queries and exploiting the query access patterns over large distributed data-sets, revealing the most interesting and important sub-spaces for further exploratory analysis. By analysing user queries and mapping them onto relatively small-scale local regression models, we can yield higher predictive accuracy. This is done by thinning the data space, freeing it of irrelevant and unpopular data sub-spaces and thus making use of fewer training data instances. Experimental results and statistical analysis support the research idea proposed in this work.
Naji Shehab, Christos Anagnostopoulos
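Rival Penalised Competitive Learning, the principle named above, moves the winning centroid toward each sample while pushing the runner-up (the rival) slightly away, so surplus centroids are driven out of the populated sub-spaces. A minimal sketch with assumed learning rates and synthetic query data:

```python
# RPCL: winner is attracted, rival is repelled (de-learned).
import numpy as np

rng = np.random.default_rng(5)
# two populated query sub-spaces, standing in for query access patterns
data = np.vstack([rng.normal([0, 0], 0.3, (300, 2)),
                  rng.normal([4, 4], 0.3, (300, 2))])
centroids = rng.normal(2, 2, (5, 2))      # deliberately too many centroids
lr_win, lr_rival = 0.05, 0.01             # learning vs de-learning rates

for x in rng.permutation(data):
    d = np.linalg.norm(centroids - x, axis=1)
    win, rival = np.argsort(d)[:2]
    centroids[win] += lr_win * (x - centroids[win])        # attract winner
    centroids[rival] -= lr_rival * (x - centroids[rival])  # repel rival

print(np.round(centroids, 2))   # extra centroids end up far from the data
```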
Optimizing Blockchain Networks with Artificial Intelligence: Towards Efficient and Reliable IoT Applications
Abstract
Blockchain is one of the new tools that still suffers from a serious lack of understanding and awareness among the general public. Indeed, to say that blockchain is a versatile technology with many applications would be an understatement. The many features of blockchain technology have attracted an immense amount of research interest in blockchain systems, especially from the perspective of enabling the Internet-of-things (IoT). Its adoption is expected to be gradual, starting with researchers and moving on to startups and companies discovering its potential for change. The general public, demanding change, will then use it as an everyday technology. Finally, organizations that had until then resisted change will adopt it at large scale, paving the way for the globalization of blockchain. In this regard, it is important to understand that blockchain is not a single object, a single trend or a particular feature, but a composition and collection of several autonomous as well as manually operated entities. Because of these modular features, blockchain offers a nearly unlimited range of application choices. Another paradigm emerging in parallel is artificial intelligence. The evolution of artificial intelligence is an incredible tale that started with neural networks in the last century and continued up to the development of complex deep learning techniques. From the perspective of IoT networks, it has the potential to provide services like ultra-reliable and low-latency communications (uRLLC), long-distance and high-mobility communications (LDHMC), and ultra-massive machine-type communications (umMTC), which can be the key to realizing the tactile Internet for the next generation of wireless networks. Moreover, artificial intelligence techniques do not require a mathematically tractable model to optimize the performance of devices, which means they can be used in a number of environments, covering both indoor and outdoor communication scenarios. Although the applications of blockchain in IoT networks have much potential, several open issues and research challenges need to be addressed first. In this context, one of the main issues in blockchain networks is branching, i.e., the forking event. Forking not only introduces excessive overhead in the network but can also lead to potential security attacks. To overcome this problem, we use deep neural networks in a distributed IoT setup. More specifically, the communication delay between an IoT miner and its communication point is directly related to the rate of the link, and we aim to maximize the rate of the communication link from the IoT miner to the communication point with the help of neural networks. The numerical results show a significant improvement in the rate, which can lead to a reduction in transmission delays. We anticipate that this work on artificial intelligence-enabled blockchain systems will pave the way for future studies and research.
Furqan Jameel, Uzair Javaid, Biplab Sikdar, Imran Khan, George Mastorakis, Constandinos X. Mavromoustakis
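A minimal numeric sketch of the link the chapter exploits between link rate and forking: the longer a freshly mined block takes to propagate, the higher the chance a competing block appears in the meantime. The Poisson fork model, bandwidth, block size, and fork intensity below are illustrative assumptions; the chapter itself optimises the rate with deep neural networks.

```python
# Higher link rate -> shorter block propagation delay -> fewer forks.
import math

BANDWIDTH_HZ = 1e6
BLOCK_BITS = 8e6                 # 1 MB block (assumed)
FORK_INTENSITY = 1 / 600         # rival-block arrivals per second (assumed)

def fork_probability(snr_db):
    rate = BANDWIDTH_HZ * math.log2(1 + 10 ** (snr_db / 10))  # Shannon rate
    delay = BLOCK_BITS / rate                  # propagation delay in seconds
    return 1 - math.exp(-FORK_INTENSITY * delay)  # Poisson chance of a rival

for snr_db in (0, 10, 20):
    print(f"SNR {snr_db:>2} dB -> fork probability "
          f"{fork_probability(snr_db):.4f}")
```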
Industrial and Artificial Internet of Things with Augmented Reality
Abstract
The Internet of Things (IoT) is a concept that proposes the inclusion of physical devices as a new form of communication, connecting them with various information systems. Nowadays, IoT cannot be reduced to smart homes: owing to recent technological advances, the concept has evolved from small-scale to large-scale environments. There was also a need to adopt IoT in several business sectors, such as manufacturing, logistics and transportation, in order to converge information technologies and operational technologies. This convergence gave rise to a new IoT paradigm, called the Industrial Internet of Things (IIoT). In IIoT, however, it is also necessary to analyze and interact with a real system through virtual production, which points to Augmented Reality (AR) as a method for achieving this interaction. AR is the overlay of digital content on the real world, and it can even play an important role in the life cycle of a product, from its design to its support, thus allowing greater flexibility. The adoption of interconnected systems and the use of IoT have motivated the use of Artificial Intelligence (AI), because much of the data coming from the various sources is unstructured. AI algorithms have been used for decades to "make sense" of unstructured data and transform it into relevant information. Therefore, converging IoT, AR, and AI makes systems increasingly autonomous and capable of solving problems in many scenarios.
Pedro Gomes, Naercio Magaia, Nuno Neves
IoT Detection Techniques for Modeling Post-Fire Landscape Alteration Using Multitemporal Spectral Indices
Abstract
Detecting burn severity is challenging because of the long time period needed to capture ecosystem characteristics. When a warning is raised by IoT devices, multitemporal remote sensing data received via satellites contributes observations before, during and after a bushfire, enhancing detection accuracy. In this study, we strive to design an infrastructure for fire detection, in order to perform a qualitative assessment of the conditions as quickly as possible and avoid a major disaster. By studying multitemporal spectral indicators such as the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Normalized Burn Ratio (NBR), Soil-Adjusted Vegetation Index (SAVI), Normalized Difference Moisture (Water) Index (NDMI or NDWI) and Normalized Wildfire Ash Index (NWAI), we can draw reliable conclusions about the severity of damage in an area. The scope of this project is to examine the correlation between multitemporal spectral indices and field-observed conditions, and to provide a practical method for immediate fire detection that assesses burn severity in advance. Furthermore, a quantified mapping model is presented to illustrate the spatial distribution of fire severity across the burnt area. The study focuses on the recent bushfire that took place in Mati, Athens, Greece on 24 July 2018, which resulted not only in material and environmental destruction but also in the loss of human lives.
Despina E. Athanasaki, George Mastorakis, Constandinos X. Mavromoustakis, Evangelos K. Markakis, Evangelos Pallis, Spyros Panagiotakis
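The indices named above are simple band ratios, e.g. NDVI = (NIR - Red) / (NIR + Red) and NBR = (NIR - SWIR) / (NIR + SWIR), with burn severity read from the pre/post-fire difference dNBR. A minimal NumPy sketch with tiny illustrative reflectance arrays and common USGS-style severity thresholds (the specific thresholds are an assumption, not the chapter's calibration):

```python
# Spectral indices and dNBR-based burn severity on toy reflectance data.
import numpy as np

def ndvi(nir, red):   # vegetation greenness
    return (nir - red) / (nir + red)

def nbr(nir, swir):   # burn ratio
    return (nir - swir) / (nir + swir)

# two illustrative pixels, pre- and post-fire
red_pre = np.array([0.10, 0.12])
nir_pre, swir_pre = np.array([0.50, 0.60]), np.array([0.20, 0.20])
nir_post, swir_post = np.array([0.20, 0.50]), np.array([0.40, 0.20])

print("pre-fire NDVI:", ndvi(nir_pre, red_pre).round(2))
dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
# common USGS-style thresholds: dNBR > 0.66 high, > 0.27 moderate severity
severity = np.select([dnbr > 0.66, dnbr > 0.27], ["high", "moderate"], "low")
print("dNBR:", dnbr.round(2), "->", severity)
```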
Internet of Things and Artificial Intelligence—A Winning Partnership?
Abstract
Hardware/software (hw/sw) systems have changed the human way of living. The Internet of Things (IoT) and Artificial Intelligence (AI), now two dominant research themes, are intended and expected to change it further, hopefully for the good. In this book chapter, relevant challenges associated with the development of a "society" of intelligent smart objects are highlighted. Humans and smart objects are expected to interact: humans (people) with natural intelligence, and smart objects (things) with artificial intelligence. The Internet, the platform of globalization, has connected people around the world and will progressively become the platform for connecting "things". Will humans be able to build an IoT that benefits them while keeping a sustainable environment on this planet? How will designers guarantee that the IoT world does not spin out of control? What are the standards, and how should they be implemented? These issues are addressed in this chapter from the engineering and educational points of view. In fact, when dealing with "decision-making systems", design and test should guarantee not only correct and safe operation, but also the soundness of the decisions such smart objects take during their lifetime. The concept of Design for Accountability (DfA) is thus proposed, and some initial guidelines are outlined.
J. Semião, M. B. Santos, I. C. Teixeira, J. P. Teixeira
AI Architectures for Very Smart Sensors
Abstract
The chapter describes modern neural network designs and discusses their advantages and disadvantages. State-of-the-art neural networks are usually too computationally demanding, which limits their use in mobile and IoT applications. However, they can be modified with special design techniques that make them suitable for mobile or IoT applications with limited computational power. These techniques for designing more efficient neural networks are described in great detail. Using them opens the way to creating extremely efficient neural networks for mobile and even IoT applications. Such neural networks make applications very intelligent, which paves the way for very smart sensors.
Peter Malík, Štefan Krištofík
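One widely used design technique of the kind described here is replacing standard convolutions with depthwise separable ones, as in MobileNet-style networks; whether the chapter covers this exact technique is an assumption, but the parameter saving it yields is easy to show in Keras:

```python
# Standard vs depthwise separable convolution: same output shape,
# far fewer parameters (and correspondingly fewer FLOPs).
import tensorflow as tf

inp = tf.keras.Input(shape=(64, 64, 32))

standard = tf.keras.layers.Conv2D(64, 3, padding="same")(inp)
separable = tf.keras.layers.SeparableConv2D(64, 3, padding="same")(inp)

m_std = tf.keras.Model(inp, standard)
m_sep = tf.keras.Model(inp, separable)
print("standard conv parameters: ", m_std.count_params())   # 18,496
print("separable conv parameters:", m_sep.count_params())   # 2,400
```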
Metadata
Title
Convergence of Artificial Intelligence and the Internet of Things
Editors
Dr. George Mastorakis
Prof. Dr. Constandinos X. Mavromoustakis
Dr. Jordi Mongay Batalla
Prof. Evangelos Pallis
Copyright Year
2020
Electronic ISBN
978-3-030-44907-0
Print ISBN
978-3-030-44906-3
DOI
https://doi.org/10.1007/978-3-030-44907-0
