
Open Access 29.08.2022 | Editorial

Intelligent mobile edge computing for IoT big data

Authors: Gwanggil Jeon, Marcelo Albertini, Valerio Bellandi, Abdellah Chehri

Published in: Complex & Intelligent Systems | Issue 5/2022


Introduction

The Internet of Things (IoT) has influenced numerous application areas over the last decade and is expected to grow further. The number of connected IoT devices has reached 40 billion and is projected to exceed 70 billion within five years. This growth underscores the potential added value of the IoT but brings new challenges, such as the effective handling of the big volumes of data continually produced by IoT devices. Cloud-based IoT deployments often fail to meet the growing demands of their end users, particularly regarding real-time service delivery and high quality of experience while preserving the privacy and security of the integrated system. This has led to IoT deployments that move data-management operations to the edge of the network, giving rise to IoT solutions based on edge computing. Edge computing allows data processing and storage to be performed in a distributed fashion, close to the data sources, mitigating network bandwidth constraints, high latency, and privacy and performance concerns. However, many obstacles remain before IoT big data systems can adopt the edge computing paradigm in full, including the heterogeneity and resource constraints of most IoT devices, performance, scalability, and privacy.
Mobile edge computing (MEC) is a network architecture concept defined by the European Telecommunications Standards Institute that brings cloud computing capabilities and an IT service environment to the edge of the cellular network and, more generally, to the edge of any network. The basic idea behind MEC is that by running applications and performing related processing tasks closer to the cellular customer, network congestion is reduced and applications perform better. MEC technology is designed to be implemented at cellular base stations or other edge nodes, and it enables flexible and rapid deployment of new applications and services for customers. High-performance mobile edge computing for IoT big data has been embraced as a significant research topic by many researchers and practitioners because of its ability to handle complex and challenging big data issues. Given the tremendous interest from industry and academia in designing and developing innovative techniques and tools for large and complex information processing problems, researchers have adopted high-performance mobile edge computing to deliver efficient solutions to big data information system problems within a reasonable time. The rapid development of technologies such as the IoT, online social media, vehicular communications, and machine-to-machine communications creates computational and storage problems. The big data generated by these technologies requires robust data processing techniques with high computational and massive storage capabilities, and mobile edge computing is an effective way to meet these requirements. In other words, with its proven and promising potential, mobile edge computing is used to solve many real-time, large-scale big data information system challenges.
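To make the latency argument concrete, consider a minimal back-of-the-envelope sketch in Python. All numbers below are illustrative assumptions, not measurements from any deployment: offloading pays off when the transfer cost over a fast, nearby link plus the remote compute time undercuts on-device computation.

```python
# Back-of-the-envelope latency model for choosing where to run a task.
# All parameters below are illustrative assumptions, not measurements.

def end_to_end_latency_ms(data_mb, cycles_g, rtt_ms, uplink_mbps, ghz):
    """Round-trip time plus upload time plus compute time, in ms."""
    transfer_ms = (data_mb * 8 / uplink_mbps) * 1000 if uplink_mbps else 0.0
    compute_ms = (cycles_g / ghz) * 1000
    return rtt_ms + transfer_ms + compute_ms

task = {"data_mb": 2.0, "cycles_g": 4.0}  # 2 MB input, 4 Gcycles of work

options = {
    # rtt_ms, uplink_mbps (None = no network hop), CPU clock in GHz
    "device": dict(rtt_ms=0, uplink_mbps=None, ghz=1.5),
    "edge":   dict(rtt_ms=5, uplink_mbps=100, ghz=8.0),
    "cloud":  dict(rtt_ms=80, uplink_mbps=20, ghz=16.0),
}

for name, params in options.items():
    t = end_to_end_latency_ms(task["data_mb"], task["cycles_g"], **params)
    print(f"{name:>6}: {t:7.1f} ms")
```

Under these assumed numbers the edge node wins: the device is too slow, and the distant cloud loses its compute advantage to the slower, longer network path.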

Themes of this special issue

Starting from the above considerations, we solicited original contributions in four categories, all of which were expected to have an emphasis on intelligent mobile edge computing for IoT big data:
  • State-of-the-art theories and novel application scenarios;
  • Novel time series analysis methods and applications;
  • Surveys of recent progress in this area;
  • The building of benchmark datasets.
This edition of the special issue serves as a forum for researchers all over the world to discuss their work and recent advances in intelligent mobile edge computing for IoT big data. The special issue sought original contributions addressing the challenges of complex and intelligent systems and was intended to provide a highly recognized international forum for presenting recent advances in the field. Papers addressing interesting real-world applications were especially encouraged. The list of possible topics included, but was not limited to:
  • Architectures, models, and protocols for intelligent mobile edge computing in IoT big data
  • Modelling and high-performance mobile edge computing for big data processing
  • Mobile edge computing along with cloud, fog, mist computing for IoT big data
  • Resource allocation/management tools for mobile edge computing in IoT big data
  • Data science, distribution, management, and storage in mobile edge computing-based IoT big data
  • Security, privacy and trust issues in mobile edge computing-based IoT big data
  • AI and ML for edge computing in IoT big data
  • Performance assessment and evaluation for mobile edge computing in IoT big data
  • Services and applications in mobile edge computing-based IoT big data
After review, a total of 22 papers were accepted for publication in this issue.

Models

Evaluation of car damage from an accident is one of the most important processes in the car insurance business. Currently, it still requires a manual examination of every basic part; it is expected that a smart device will be able to carry out this evaluation more efficiently in the future. In the contribution by Pasupa et al. “Evaluation of Deep Learning Algorithms for Semantic Segmentation of Car Parts,” the authors evaluated and compared five deep learning algorithms for semantic segmentation of car parts. The baseline reference algorithm was Mask R-CNN, and the other algorithms were HTC, CBNet, PANet, and GCNet. Instance segmentation runs were conducted with all five algorithms. HTC with ResNet-50 was the best algorithm for instance segmentation on various kinds of cars, such as sedans, trucks, and SUVs. It achieved a mean average precision of 55.2 on their original dataset, which assigns different labels to the left and right sides, and 59.1 when a single label is assigned to both sides. In addition, the models from every algorithm were tested for robustness by running them on images of car parts captured in real environments under various weather conditions, including snow, frost, and fog, and under various lighting conditions. The findings from this study may directly benefit developers of automated car damage evaluation systems in their quest for the best design.
The contribution by Shan and Fang “DRAC: A Delta Recurrent Neural Network based Arithmetic Coding Algorithm for Edge Computing” develops DRAC, an arithmetic coding algorithm for edge computing devices based on a delta recurrent neural network. The algorithm is implemented on a Xilinx Zynq 7000 SoC board. The authors evaluate DRAC on four datasets and compare it with the state-of-the-art compressor DeepZip. The experimental results show that DRAC outperforms DeepZip, achieving a 5× speedup and a 20× reduction in power consumption.
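For readers unfamiliar with the underlying technique, the following toy Python sketch illustrates the interval-narrowing core of arithmetic coding. DRAC's delta-RNN probability model is replaced here by a fixed symbol distribution, and a floating-point interval stands in for the integer renormalization that production coders use, so this sketch is only valid for short messages.

```python
# Minimal floating-point arithmetic coder: a probability model
# repeatedly narrows an interval in [0, 1). The fixed distribution
# below stands in for DRAC's learned delta-RNN predictions.

def intervals(probs):
    cum, out = 0.0, {}
    for s, p in probs.items():
        out[s] = (cum, cum + p)
        cum += p
    return out

def encode(msg, probs):
    low, high = 0.0, 1.0
    iv = intervals(probs)
    for s in msg:
        a, b = iv[s]
        low, high = low + (high - low) * a, low + (high - low) * b
    return (low + high) / 2  # any number inside the final interval

def decode(code, n, probs):
    iv, out = intervals(probs), []
    low, high = 0.0, 1.0
    for _ in range(n):
        x = (code - low) / (high - low)
        s = next(t for t, (a, b) in iv.items() if a <= x < b)
        a, b = iv[s]
        low, high = low + (high - low) * a, low + (high - low) * b
        out.append(s)
    return "".join(out)

probs = {"a": 0.6, "b": 0.3, "c": 0.1}  # stand-in for the model output
code = encode("abacab", probs)
assert decode(code, 6, probs) == "abacab"
```

The compression gain comes entirely from the probability model: the better the (delta-RNN) predictor, the narrower the code interval and the fewer bits needed.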
The progression of the Internet of Things (IoT) has resulted in the generation of huge amounts of data, and the effective handling and analysis of such big volumes of data poses a crucial challenge. Existing cloud-based big data visualization frameworks incur rising costs for servers, equipment, and energy. There is a need for a green solution targeting lower cost and energy consumption, with tamper-proof record-keeping and storage, and interactive visualization of only the requested data. In the contribution by Din et al. “Blockchain-based Green bigdata Visualisation: BGbV,” the authors propose a Blockchain-based Green big data Visualization (BGbV) solution using Hyperledger Sawtooth for optimal utilization of organizational resources. BGbV supports current distributed data visualization platforms and guarantees benefits such as security and data availability at lower storage cost. It helps reduce costs by utilizing small resources that are already available and consume less energy, making it an environmentally friendly solution.
Non-standard license plates are part of current traffic in Pakistan. Private number plates should be recognized and monitored for several purposes, including security and a well-developed traffic system, yet recognizing and tracing a vehicle with a particular number plate is a challenging task for the authorities. In a developing country like Pakistan, it is difficult to impose strict constraints on the efficiency of any license plate identification and recognition algorithm, so character recognition efficiency becomes the route to achieving the desired results within the specified constraints. The main goal of the contribution by Shafi et al. “License Plate Identification and Recognition in a Non-standard Environment Using Neural Pattern Matching” is to devise a robust detection and recognition mechanism for the non-standard, transitional vehicle license plates generally found in developing countries, improving character recognition for drawn and printed plates in different styles and fonts using multiple state-of-the-art technologies, including machine-learning (ML) models. The proposed method is successfully tested on a large image dataset consisting of eight different types of license plates from different provinces of Pakistan. The proposed system is expected to play an important role in vehicle tracking, parking fee payment, detection of over-speed violations, reduction of road accidents, and identification of unauthorized vehicles. The results show that the proposed approach achieves a plate detection accuracy of 97.82% and a character recognition accuracy of 96%.
The number of vehicles is increasing at a very high rate throughout the globe: it reached 1 billion in 2010, was around 1.5 billion in 2020, and experts say it could reach 2–2.5 billion by 2050. A large share of these vehicles will be electrically driven and connected to a vehicular network. Rapid advancements in vehicular technology and communications have led to the evolution of vehicular edge computing (VEC). Computation resource allocation is one of a vehicular network's primary operations, as vehicles have limited onboard computation. Different resource allocation schemes in VEC operate in different environments, such as cloud computing, artificial intelligence, blockchain, and software-defined networks, and require specific network performance characteristics to achieve maximum efficiency. Researchers have proposed numerous computation resource allocation schemes that optimize parameters such as power consumption, network stability, and quality of service (QoS), based on widely used optimization and mathematical models such as Markov processes and Shannon's law, so there is a need for an organized overview of these schemes to support future research. In the contribution by Chamola et al. “A Survey on Computation Resource Allocation in IoT enabled Vehicular Edge Computing,” the authors classify state-of-the-art computation resource allocation schemes based on three criteria: (1) their optimization goal, (2) the mathematical models/algorithms used, and (3) the major technologies involved. The authors also identify and discuss current issues in computation resource allocation in VEC and outline future research directions.
Discriminative correlation filter (DCF)-based tracking methods have achieved remarkable performance in visual tracking. However, the existing DCF paradigm still suffers from dilemmas such as the boundary effect, filter degradation, and aberrance. To address these problems, the contribution by Gao et al. “Spatio-temporal joint aberrance suppressed correlation filter for visual tracking” proposes a spatio-temporal joint aberrance suppressed regularization (STAR) correlation filter tracker under a unified response-map framework. Specifically, a dynamic spatio-temporal regularizer is introduced into the DCF to alleviate the boundary effect and filter degradation simultaneously, while an aberrance suppressed regularizer reduces the interference of background clutter. The STAR model is efficiently optimized using the alternating direction method of multipliers (ADMM). Finally, comprehensive experiments on the TC128, OTB2013, OTB2015, and UAV123 benchmarks demonstrate that the STAR tracker achieves compelling performance compared with state-of-the-art (SOTA) trackers.
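The paper's exact formulation is not reproduced here, but a schematic objective for a multi-channel DCF carrying the three regularizers named above might take the following form (all symbols and weights are illustrative, not the authors' notation):

```latex
\min_{\mathbf{f}} \;
\frac{1}{2}\Big\| \mathbf{y} - \sum_{d=1}^{D} \mathbf{x}^{d} \ast \mathbf{f}^{d} \Big\|_{2}^{2}
+ \frac{\lambda_{1}}{2} \sum_{d=1}^{D} \big\| \mathbf{w} \odot \mathbf{f}^{d} \big\|_{2}^{2}
+ \frac{\lambda_{2}}{2} \big\| \mathbf{f} - \mathbf{f}_{t-1} \big\|_{2}^{2}
+ \frac{\gamma}{2} \big\| \mathbf{R}[\mathbf{f}] - \mathbf{R}_{t-1} \big\|_{2}^{2}
```

Here the first term fits the desired response \(\mathbf{y}\), the spatial weight \(\mathbf{w}\) penalizes filter energy near the boundary (boundary effect), the \(\mathbf{f}-\mathbf{f}_{t-1}\) term keeps the filter close to the previous frame (filter degradation), and the last term discourages abrupt changes of the response map \(\mathbf{R}[\mathbf{f}]\) (aberrance). Each quadratic term yields a closed-form subproblem under an ADMM splitting, which is why such objectives optimize efficiently.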
Three-dimensional (3D) semantic segmentation of point clouds is important in many scenarios, such as autonomous driving and robotic navigation, where edge computing is indispensable on the devices. Deep learning methods based on point sampling have proven to be computation- and memory-efficient for tackling large-scale point clouds (e.g., millions of points). However, some local features may be discarded during sampling. In the contribution by Wang et al. “Semantic segmentation of large-scale point clouds based on dilated nearest neighbors graph,” the authors present an end-to-end 3D semantic segmentation framework based on dilated nearest neighbor encoding. Instead of down-sampling the point cloud directly, they propose a dilated nearest neighbor encoding module that broadens the network's receptive field to learn more 3D geometric information. Without increasing the number of network parameters, the method remains computation- and memory-efficient for large-scale point clouds. The authors evaluate the dilated nearest neighbor encoding in two different networks: the first uses random sampling with local feature aggregation, and the second is the Point Transformer. They assess segmentation quality on the benchmark 3D dataset S3DIS and demonstrate that the proposed encoding exhibits stable advantages over baseline and competing methods.
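As a rough illustration of the idea (our reading of dilated neighbor grouping, not the authors' code), a dilated k-nearest-neighbor query can be sketched in a few lines: querying k·d neighbors and keeping every d-th one widens the receptive field without increasing the number of neighbors actually processed.

```python
# Sketch of dilated nearest-neighbor grouping for point clouds:
# query k * dilation neighbors, keep every dilation-th one.
import numpy as np
from scipy.spatial import cKDTree

def dilated_knn(points, k=16, dilation=4):
    """Return (N, k) neighbor indices with the given dilation rate."""
    tree = cKDTree(points)
    # Column 0 is the query point itself; drop it before dilating.
    _, idx = tree.query(points, k=k * dilation + 1)
    return idx[:, 1::dilation][:, :k]

pts = np.random.rand(10000, 3).astype(np.float32)
neighbors = dilated_knn(pts, k=16, dilation=4)
print(neighbors.shape)  # (10000, 16): sparser but wider neighborhoods
```

With dilation 1 this reduces to ordinary k-NN grouping; larger dilation rates trade local detail for geometric context at the same per-point cost downstream.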
Computation offloading enables intensive computational tasks to be distributed across multiple computing resources, overcoming hardware limitations. By leveraging cloud computing, edge computing can exploit not only large-scale and personalized data but also intelligent algorithms, offloading the intelligent models to high-performance servers that work with huge volumes of data in the cloud. In the contribution by Jin et al. “Decision Making of IoT Device Operation Based on Intelligent-Task Offloading for Improving Environmental Optimization,” the authors propose a gateway-centric Internet of Things (IoT) system that enables the intelligent and autonomous operation of IoT devices in edge computing. In the proposed system, IoT devices are operated by a decision-making model that selects an optimal control factor from multiple intelligent services and applies it to the device. The intelligent services are provided by offloading multiple intelligence and optimization approaches to an intelligent service engine in the cloud, so the decision-making model in the gateway can select the best solution from the candidates. The proposed IoT system also provides monitoring and visualization to users through device management based on resource virtualization in the gateway.
The 5G IoT is highly complex, and many factors affect network performance. Network optimization therefore remains a research focus: although the existing literature has studied it extensively, problems such as algorithmic complexity persist. In the contribution by Xie et al. “Research on Energy Saving Technology at Mobile Edge Networks of IoTs Based on Big Data Analysis,” building on previous research, the authors propose a big data mining analysis method that improves the comprehensive performance of the network by analyzing the relationships among massive data variables so as to optimize the network configuration. Using the terminal variables observed at each moment, such as power consumption, bandwidth, noise power, subcarrier bandwidth, interference power, and coding efficiency, the authors develop a principal component multiple regression model. They then simulate this scheme with edge computing technology combined with intelligent algorithms. The results show that the method effectively predicts the quantities of interest with the smallest residual. The research thus provides an important basis for applying the approach to mobile edge network optimization in the IoT.
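A minimal sketch of principal component multiple regression in this spirit, using scikit-learn with synthetic stand-ins for the terminal variables (the variable names, component count, and data below are assumptions for illustration):

```python
# Principal component regression: project correlated terminal
# variables onto a few principal components, then regress the target
# (e.g., a network performance indicator) on those components.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic columns standing in for: power, bandwidth, noise power,
# subcarrier bandwidth, interference power, coding efficiency.
X = rng.normal(size=(500, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)

pcr = Pipeline([
    ("scale", StandardScaler()),    # put variables on one scale
    ("pca", PCA(n_components=3)),   # keep the dominant components
    ("reg", LinearRegression()),    # multiple regression on the scores
])
pcr.fit(X, y)
print("R^2 on training data:", round(pcr.score(X, y), 3))
```

The appeal for edge deployments is that the fitted model is tiny: after training, prediction is a 6-to-3 projection followed by a dot product, cheap enough for resource-constrained nodes.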
Propaganda is a rhetorical technique designed to serve a specific agenda; it is often used purposefully in news articles to achieve an intended effect through its psychological impact. It is therefore important to identify where and which propaganda techniques are used in the news so that readers can grasp a story's theme efficiently. Recent research on propaganda detection has been unsatisfactory, so the detection of propaganda techniques in news articles remains badly in need of study. In the contribution by Li et al. “Span Identification and Technique Classification of Propaganda in News Articles,” the authors introduce their systems for detecting propaganda techniques in news articles, split into two tasks: Span Identification and Technique Classification. For each task, the authors design a system based on the popular pretrained BERT model. Furthermore, they adopt over-sampling and EDA strategies and propose a sentence-level feature concatenation method. Experiments on a dataset of about 550 news articles provided by SemEval show that their systems achieve state-of-the-art performance.
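One plausible skeleton for the span identification task, framed as BIO token classification over a pretrained BERT (this is a generic sketch, not the authors' system; the label set is hypothetical, and the fine-tuning loop, over-sampling, and EDA steps are omitted):

```python
# Span identification as token classification: each token gets a
# label from {O, B-PROP, I-PROP}; contiguous B/I runs become spans.
# Requires network access to download the pretrained checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # O, B-PROP, I-PROP (assumed)

text = "Our glorious leader will crush every enemy of the people."
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits            # (1, seq_len, 3)
pred = logits.argmax(-1)[0].tolist()        # per-token label ids
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
print(list(zip(tokens, pred)))  # untrained head: labels are arbitrary
```

Fine-tuning such a head on the labeled spans, and concatenating sentence-level features before the classifier as the authors propose, is what turns this skeleton into a working detector.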
Intrusion detection protects a network by detecting attacks. Existing detection models can detect only known attacks, and their efficiency in monitoring real-time network traffic is low; existing solutions cannot identify new, unknown attacks. Hence, there is a need for an Edge-based Hybrid Intrusion Detection Framework (EHIDF) that not only detects known attacks but is also capable of detecting unknown attacks in real time with a low false alarm rate (FAR). The contribution by Singh et al. “An edge based hybrid intrusion detection framework for mobile edge computing” proposes an EHIDF based mainly on machine learning (ML) approaches for detecting intrusive traffic in the MEC environment. The framework consists of three intrusion detection modules with three different classifiers: the Signature Detection Module (SDM) uses a C4.5 classifier, the Anomaly Detection Module (ADM) uses a Naive Bayes-based classifier, and the Hybrid Detection Module (HDM) uses the Meta-AdaboostM1 algorithm. The developed EHIDF addresses present detection problems by detecting new, unknown attacks with a low FAR. The implementation results show an EHIDF accuracy of 90.25% and a FAR of 1.1%; compared with previous works, accuracy is improved by up to 10.78% and FAR is reduced by up to 93%. A game-theoretical approach is also discussed to analyze the security strength of the proposed framework.
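A rough scikit-learn analogue of the three modules on synthetic data may help fix the idea (sklearn ships neither C4.5 nor AdaBoost.M1 proper, so an entropy-criterion decision tree and the standard AdaBoost ensemble stand in; real traffic features would replace the synthetic ones):

```python
# Three stand-in classifiers mirroring the EHIDF module split:
# signature (tree), anomaly (Naive Bayes), hybrid (boosting).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

modules = {
    "SDM (C4.5-like tree)": DecisionTreeClassifier(criterion="entropy"),
    "ADM (Naive Bayes)":    GaussianNB(),
    "HDM (AdaBoost)":       AdaBoostClassifier(n_estimators=100),
}
for name, clf in modules.items():
    clf.fit(Xtr, ytr)
    print(f"{name}: accuracy = {clf.score(Xte, yte):.3f}")
```

The hybrid layer is the point of the design: the fast signature module catches known patterns, while the anomaly and boosted modules cover traffic the signatures miss.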
The number of individuals with autism spectrum disorder is increasing dramatically. For them, it is difficult to obtain an early diagnosis or timely intervention to prevent challenging behaviors, which can cause social isolation and economic loss for their whole family. The systematic literature review (SLR) by Francese and Yang “Supporting Autism Spectrum Disorder Screening and Intervention with Machine Learning and Wearables: a Systematic Literature Review” aims at understanding and summarizing current research on this topic and analyzing its limitations and open challenges to guide future work. The authors consider papers published between 2015 and the beginning of 2021. The initial selection included about 2140 papers, 11 of which met the selection criteria. The papers were analyzed mainly by considering: (1) the kind of action taken on the autistic individual, (2) the wearables considered, (3) the machine learning approaches, and (4) the evaluation strategies. The results reveal that the topic is very relevant, but the considered studies have many limitations, such as small numbers of participants, the absence of datasets and of experimentation in real contexts, the need to consider privacy issues, and the need for appropriate validation approaches. The issues highlighted in this analysis may be useful for improving machine learning techniques and for identifying areas in which to experiment with different noninvasive sensors.
Edge computing is a distributed architecture that features decentralized processing of data near the source devices where the data are generated, known as Internet of Things (IoT) or edge devices. As reliance on IoT devices grows, the amount of data they generate has increased so much that it has become infeasible to transfer all of it to the Cloud for processing. Since these devices have limited storage and processing power, the edge computing paradigm has emerged: data are processed by edge devices, and only the required data are sent to the Cloud, increasing robustness and decreasing overall network overhead. IoT edge devices inherently suffer from various security risks and attacks, causing a lack of trust between devices. To reduce this malicious behavior, in the contribution by Latif et al. “A Novel Trust Management Model for Edge Computing,” a lightweight trust management model is proposed that maintains the trust of a device and manages service-level trust along with quality of service (QoS). The model calculates the overall trust of a device from QoS parameters, evaluating trust through assigned weights. Trust management models using QoS parameters show improved results that can help identify malicious edge nodes in edge computing networks and can be used for industrial purposes.
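A minimal sketch of a weighted-QoS trust score in the spirit of this model (the parameter names, weights, and flagging threshold are our assumptions, not values from the paper):

```python
# Weighted-sum trust score over normalized QoS observations in [0, 1],
# where 1.0 is the best observed value for each parameter.

def trust_score(qos, weights):
    """Weighted combination of QoS observations; weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * qos[k] for k in weights)

weights = {"availability": 0.3, "latency": 0.3,
           "packet_delivery": 0.2, "energy": 0.2}   # assumed weights
node = {"availability": 0.98, "latency": 0.85,
        "packet_delivery": 0.92, "energy": 0.70}    # one node's QoS

score = trust_score(node, weights)
print(f"trust = {score:.3f}", "(flag as suspect)" if score < 0.6 else "")
```

Because the score is a plain weighted sum, it is cheap enough for resource-constrained edge nodes to recompute every reporting interval and use to flag outliers.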
As a new technology, the cognitive radio network (CRN) addresses bandwidth shortfalls and the fixed-spectrum problem. CRN routing methods, however, often encounter issues with route discovery, resource diversity, and mobility. In the contribution by Dhiman et al. “SHANN: An IoT and Machine-Learning Assisted Edge Cross-Layered Routing Protocol using Spotted Hyena Optimizer,” the authors present a reconfigurable CRN-based cross-layer routing protocol aimed at increasing routing performance and optimizing data transfer in reconfigurable networks. The recently developed spotted hyena optimizer (SHO) is used to tune the hyperparameters of the machine-learning models. The system produces a distributor that handles a number of tasks, such as load balancing, sensing, and machine-learning path development. The proposed technique is sensitive to traffic and loads, as well as to a series of other network metrics and interference (2 bps/Hz/W on average). Tests against classic models demonstrate the residual energy, scalability, and resource resilience of the approach.
With the rapid development of information technology, the growing volume of specialized data has become a research hot spot. Among such data, meteorological data, as one of the foundations and core contents of meteorological informatization, is a key production factor of meteorology in the digital economy era as well as the basis of meteorological services for the public and for decision-making. However, the existing centralized cloud computing service model cannot satisfy the low-latency, high-reliability, and high-bandwidth performance demands of weather data quality control. In addition, strong convective weather is characterized by rapid development, small convective scale, and a short life cycle, which increases the complexity of the real-time data quality control needed to provide timely monitoring of strong convective weather. To address these problems, the contribution by Hu et al. “Cloud-Edge Cooperation for Meteorological Radar Big Data: A Review of Data Quality Control” proposes a cloud-edge cooperation approach whose core idea is to combine the advantages of edge and cloud computing: computing resources distributed at the edge provide a service environment that satisfies users' real-time demands, while the powerful computing and storage resources of the cloud data center provide massive computing services for intensive computing demands.
The history of human development has shown that medical and healthcare applications are always a main driving force behind the development of science and technology. The advent of cloud technology made it possible, for the first time, to provide systems infrastructure as a service, platform as a service, and software as a service, and cloud technology has dominated healthcare information systems for decades. However, one limitation of cloud-based applications is their high service response time. In emergency scenarios, controlling and monitoring patient status and making decisions involving related resources (hospital, ambulance, doctor, medical conditions) must happen within seconds and directly affect patients' lives. To address these challenges, computing technologies such as cloud, edge, and fog computing have been proposed. In the contribution by Quy et al. “Smart Healthcare IoT Applications based on Fog Computing: Architecture, Applications and Challenges,” the authors compare these computing technologies and then present a common architectural framework based on fog computing for Internet of Health Things (Fog-IoHT) applications. They also indicate possible applications and the challenges of integrating fog computing into IoT healthcare applications. The analysis indicates a huge potential for IoHT applications based on fog computing.
With the advancement of edge computing, computing power that was originally located in the center is deployed closer to the terminal, which directly accelerates the iteration speed of the ‘sensing-communication-decision-feedback’ chain in complex marine environments, including ship avoidance. The increase in sensor equipment, such as cameras, has also accelerated ship identification technology based on feature detection in the maritime field. Based on the SSD framework, the contribution by Wang et al. “Ship Feature Recognition Methods for Deep Learning in Complex Marine Environments” proposes a deep learning model called DP-SSD. By adjusting the size of the detection frame, different feature parameters can be detected. Through training and testing on real data, the model was compared with Faster R-CNN, SSD, and other classic algorithms, and the proposed method provided high-quality results in terms of computation time, processed frame rate, and recognition accuracy. As an important component of future smart ships, this method has theoretical value and engineering relevance.
Internet of Things (IoT) applications and services are increasingly becoming a part of daily life, penetrating practically every domain from smart homes to smart cities, industry, and agriculture. Data in IoT applications are collected mostly through sensors attached to the devices, and with increasing demand it is not possible to process all the data on the devices themselves. The data collected by device sensors are vast and require high-speed computation and processing, which demand advanced resources. Crucial applications and services must meet multiple performance parameters, such as time-sensitivity and energy efficiency, and computation offloading frameworks come into play to satisfy these parameters and extreme computation requirements. Offloading computation or data to nearby devices or to fog or cloud infrastructure can help meet the resource requirements of IoT applications. In the contribution by Bajaj et al. “Implementation Analysis of IoT based offloading Frameworks on Cloud/Edge Computing for Sensor Generated Big Data,” the authors study the role of context in offloading decisions and conclude that context-based offloading can play a crucial role in meeting the performance requirements of IoT-enabled services; a toy policy in this spirit is sketched below. Several existing frameworks (EMCO, MobiCOP-IoT, Autonomic Management Framework, CSOS, Fog Computing Framework), chosen for their novelty and performance, are taken for implementation analysis and compared with the MAUI, AnyRun Computing (ARC), AutoScaler, Edge Computing, and Context-Sensitive Model for Offloading System (CoSMOS) frameworks. Based on the results and the limitations of the existing frameworks, future directions for offloading scenarios are discussed.
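The following toy rule illustrates context-based offloading in the general sense the survey describes; it paraphrases the idea, not any surveyed framework's actual policy, and all thresholds are assumptions:

```python
# Context-sensitive offloading: device state and network context,
# not raw speed alone, drive the local/edge/cloud decision.

def offload_target(battery_pct, network, deadline_ms, task_ms_local):
    if network == "none":
        return "local"                 # no connectivity: no choice
    if task_ms_local <= deadline_ms and battery_pct > 50:
        return "local"                 # fast enough, energy to spare
    if network in ("wifi", "5g"):
        return "edge"                  # good link: keep latency low
    return "cloud"                     # slow link anyway: use big iron

# Low battery and a tight deadline push the task off the device.
print(offload_target(battery_pct=20, network="5g",
                     deadline_ms=100, task_ms_local=400))  # -> edge
```

Real frameworks replace these hand-set thresholds with measured or learned cost models, but the structure (sense context, estimate cost per target, pick the cheapest) is the same.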
The contribution by Taniguchi et al. “Counseling (ro)bot as a use case for 5G/6G” presents a counseling (ro)bot called Visual Counseling Agent (VICA), which focuses on remote mental healthcare. It is an agent system leveraging artificial intelligence (AI) to aid mentally distressed persons through speech conversation. The system's terminals connect to servers over the Internet in a cloud-native fashion, so that anyone with any type of terminal can use it from anywhere. Despite a promising voice communication interface, VICA shows limitations in conversation continuity on conventional 4G networks: word dropping, delayed responses, and occasional connection failures. The objective of the paper is to mitigate these issues by leveraging a 5G/6G slice inclusive of mobile/multi-access edge computing (MEC). First, the authors propose and partly implement enhanced and advanced versions of VICA: servers of the enhanced version collaborate to increase speech recognition reliability, while the advanced version adds recognition of facial expressions to greatly enhance counseling quality, although this significantly increases the generated data volume. The authors then propose a quality assurance mechanism using multiple catalog levels as well as a 5G/6G slice inclusive of MEC, and conduct experiments to uncover 4G-related issues. The results indicate that speech recognition errors over the Internet cloud are more than twice as frequent as with edge computing, implying that quality assurance using 5G/6G in conjunction with the VICA counseling (ro)bot is more efficient.
The purpose of the contribution by Li et al. “Energy-saving Service Management Technology of Internet of Things Using Edge Computing and Deep Learning” is to meet the high-transmission-rate and low-delay requirements of mobile edge computing network deployments, ensure the security and effectiveness of the Internet of Things (IoT), and save resources. Dynamic power management is adopted to control working-state transitions of Edge Data Center (EDC) servers, and a load prediction model based on long short-term memory (LSTM) is proposed. The model shuts down idle or under-utilized servers in the EDC, considers user mobility and EDC location information, learns the globally optimal dynamic timeout threshold strategy and N-policy through trial-and-error reinforcement learning, controls server working-state switching accordingly, and realizes load prediction and analysis. The results show that the AdaGrad optimization solver performs best when the feature dimension is 3, the number of LSTM layers is 6, the time series length is 30–45, the batch size is 128, the training time is 788 s, the number of units is 250, and the number of iterations is 350. Compared with traditional methods, the proposed load prediction model and power management mechanism improve prediction accuracy by 4.21%. Compared with autoregressive integrated moving average (ARIMA) load prediction, dynamic power management with LSTM load prediction reduces energy consumption by 12.5% and balances EDC system performance against energy consumption. The system can effectively meet the low-delay, high-bandwidth, and high-reliability requirements of multi-access edge computing (MEC), reduce unnecessary energy consumption and waste, and reduce the operating costs of MEC service providers. This exploration has important reference value for promoting the energy-saving development of Internet-related industries.
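A hedged PyTorch sketch of such an LSTM load predictor, with dimensions following the settings reported above (feature dimension 3, 6 layers, 250 units, window length 30, batch size 128, AdaGrad); the data, training schedule, and power-management policy are omitted or stand-ins:

```python
# Next-step server-load prediction from a sliding window of features.
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    def __init__(self, n_features=3, hidden=250, layers=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # scalar next-step load

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict from last time step

model = LoadPredictor()
opt = torch.optim.Adagrad(model.parameters(), lr=0.01)  # AdaGrad, as reported
loss_fn = nn.MSELoss()

x = torch.randn(128, 30, 3)  # batch 128, window 30, 3 features (random stand-in)
y = torch.randn(128, 1)
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```

The predicted load then feeds the power manager: servers whose forecast utilization stays below a threshold for the learned timeout are candidates for shutdown.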
Nowadays, the use of IoT technology has increased rapidly in various applications, such as smart cities, smart banking, and smart transport. The IoT allows users to collect data easily using sensors installed at various locations in the open environment, and the data collected by IoT sensors give access to various services. However, because of the open communication medium, it is difficult to provide secure access to these services. In the contribution by Sharma et al. “Secure Transmission Technique for Data in IoT Edge Computing Infrastructure,” a data transmission technique is proposed that provides secure communication in IoT infrastructure for smart city applications. In this method, each IoT sensor has to prove its legitimacy to the reader and the base station before transmitting data, so the IoT sensors can transmit the required data securely and efficiently. The proof of correctness shows that the required information need not be sent over the online medium; it is recovered at the receiver using the Euclidean parameters shared by the IoT sensors. The proposed technique can provide security against most of the attacks performed by attackers, and two random variables and complex mathematical calculations make it more reliable than alternatives. This technique should significantly improve the security of data transmission services and thereby help improve smart city infrastructure.
Scalable and secure authorization of smart things is of crucial essence for the successful deployment of the Internet of Things (IoT). Unauthorized access to smart things could exacerbate security and privacy concerns, which could, in turn, reduce adoption of the IoT and ultimately lead to the emergence of severe threats. Even though a variety of IoT solutions for secure authorization exist, authorization schemes in highly dynamic distributed environments remain a daunting challenge: access rights can change dynamically due to the heterogeneous nature of shared IoT devices, making identity and access control management difficult. The contribution by Sessa et al. “Authorization Schemes For Internet of Things: Requirements, Weaknesses, Future Challenges and Trends” provides a comprehensive comparative analysis of current state-of-the-art IoT authorization schemes, highlighting their strengths and weaknesses. It then defines the most important requirements and highlights the authorization threats and weaknesses affecting authorization in the IoT. Finally, the survey presents the ongoing open authorization challenges and provides recommendations for future research.

Conclusion

The articles presented in this special issue provide insights into fields related to IoT big data using intelligent mobile edge computing, including models, performance evaluation and improvement, and application development. We hope that readers will benefit from the insights of these papers and contribute to these rapidly growing areas. We also hope that this special issue sheds light on major developments in the area of complex and intelligent systems and attracts the attention of the scientific community to pursue further investigations leading to the rapid implementation of these technologies.

Acknowledgements

We would like to express our appreciation to all the authors for their informative contributions and to the reviewers for their support and constructive critiques in making this special issue possible. Finally, we would like to express our sincere gratitude to Professor Yaochu Jin, the Editor-in-Chief, for providing us with this unique opportunity to present this special issue in Springer's Complex & Intelligent Systems.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Metadata
Title: Intelligent mobile edge computing for IoT big data
Authors: Gwanggil Jeon, Marcelo Albertini, Valerio Bellandi, Abdellah Chehri
Publication date: 29.08.2022
Publisher: Springer International Publishing
Published in: Complex & Intelligent Systems / Issue 5/2022
Print ISSN: 2199-4536
Electronic ISSN: 2198-6053
DOI: https://doi.org/10.1007/s40747-022-00821-7
