
2020 | Book

Cloud Computing, Smart Grid and Innovative Frontiers in Telecommunications

9th EAI International Conference, CloudComp 2019, and 4th EAI International Conference, SmartGIFT 2019, Beijing, China, December 4-5, 2019, and December 21-22, 2019


About this book

This book constitutes the refereed proceedings of the 9th International Conference on Cloud Computing, CloudComp 2019, and the 4th International Conference on Smart Grid and Innovative Frontiers in Telecommunications, SmartGIFT 2019, both held in Beijing, China, in December 2019. The 55 full papers of both conferences were selected from 113 submissions. CloudComp 2019 presents recent advances and experiences in clouds, cloud computing, and related ecosystems and business support. The papers are grouped thematically in tracks on cloud architecture and scheduling; cloud-based data analytics; cloud applications; and cloud security and privacy. SmartGIFT 2019 focuses on all aspects of smart grids and telecommunications, broadly understood to include renewable generation and distributed energy resource integration, computational intelligence applications, and information and communication technologies.

Table of Contents

Frontmatter

Cloud Architecture and Scheduling

Frontmatter
MCOPS-SPM: Multi-Constrained Optimized Path Selection Based Spatial Pattern Matching in Social Networks

In this paper, we study multi-constrained optimized path selection based spatial pattern matching in location-based social networks (MCOPS-SPM). Given a set D of spatial objects (each with a social identity and a social reputation) and the social relationships (e.g., trust degree, social intimacy) between them, we aim to find all connections (paths) of objects from D that match a user-specified multi-constraint spatial pattern P. A pattern P is a complex network whose vertices represent spatial objects and whose edges denote social relationships between them. The MCOPS-SPM query returns all the instances that satisfy P. Answering such queries is computationally intractable, so we propose algorithms to solve the multi-constrained optimized path matching problem and to guide the join order of the paths in the query results. An extensive empirical study over real-world datasets demonstrates the effectiveness and efficiency of our approach.

Ying Guo, Lianzhen Zheng, Yuhan Zhang, Guanfeng Liu
A Multi-objective Computation Offloading Method in Multi-cloudlet Environment

Computation offloading is becoming a promising technology that can improve quality of service for mobile users in mobile edge computing. However, offloading becomes much more difficult when there are multiple cloudlets near the mobile users: the resources of each cloudlet are heterogeneous and finite, so it is a challenge to choose the best cloudlet for each user. This study investigates this multi-user, multi-cloudlet problem. Firstly, we establish a multi-objective optimization model with respect to the time consumption and energy consumption of mobile devices. Moreover, we devise a multi-objective computation offloading method based on an improved fast and elitist genetic algorithm for selecting the optimal offloading strategies. Finally, extensive experiments comparing our method with others demonstrate its advantages in effectiveness and efficiency.

Kai Peng, Shuaiqi Zhu, Lixin Zheng, Xiaolong Xu, Victor C. M. Leung
A Survey of QoS Optimization and Energy Saving in Cloud, Edge and IoT

Since its emergence, cloud computing has served as an important way of processing data. Later, with the development of computing and people's demand for higher service quality, fog computing, edge computing, mobile edge computing (MEC), the Internet of Things (IoT), and other models gradually appeared, each developed to bring better service to users. In recent years, IoT technology has also developed rapidly. This paper first gives a brief overview of cloud computing, fog computing, edge computing, MEC, and IoT. Then we survey important papers related to these technologies, classifying and comparing them so as to gain a deeper understanding of the field.

Zhiguo Qu, Yilin Wang, Le Sun, Zheng Li, Dandan Peng
A Multi-objective Computation Offloading Method for Hybrid Workflow Applications in Mobile Edge Computing

Computation offloading has become a promising way to overcome intrinsic defects of portable smart devices, such as low operating speed and low battery capacity. However, it is a challenge to design an optimized offloading strategy, as the edge server is resource-constrained and workflow applications have timing constraints. In this paper, we investigate the computation offloading issue for hybrid workflow applications, which further increases the difficulty. Based on theoretical analysis and consideration of both time consumption and energy consumption, we establish a multi-objective optimization model for the problem. Furthermore, we propose a multi-objective computation offloading method based on the particle swarm optimization algorithm to obtain the optimal task offloading strategy, which is suitable for all hybrid workflow applications. Finally, extensive experiments verify the effectiveness and efficiency of our proposed method.

Kai Peng, Bohai Zhao, Xingda Qian, Xiaolong Xu, Lixin Zheng, Victor C. M. Leung
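As an illustration of the time-energy trade-off such offloading methods optimize (the paper uses particle swarm optimization; this toy instead enumerates a tiny decision space), the sketch below computes the Pareto front over total time and energy for three hypothetical tasks with made-up local/offload costs:

```python
from itertools import product

# Hypothetical per-task costs: (time, energy) for local execution vs. offloading.
tasks = [
    {"local": (4.0, 3.0), "offload": (2.5, 1.0)},
    {"local": (3.0, 2.5), "offload": (3.5, 0.8)},
    {"local": (5.0, 4.0), "offload": (2.0, 1.5)},
]

def evaluate(decisions):
    """Total (time, energy) for a vector of 'local'/'offload' choices."""
    time = sum(tasks[i][d][0] for i, d in enumerate(decisions))
    energy = sum(tasks[i][d][1] for i, d in enumerate(decisions))
    return time, energy

def pareto_front(points):
    """Keep the solutions not dominated in both objectives by another point."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

solutions = [evaluate(d) for d in product(("local", "offload"), repeat=len(tasks))]
front = pareto_front(solutions)
```

For these numbers only two of the eight decision vectors survive: offloading everything minimizes energy, while keeping one task local trades a little energy for less time. A metaheuristic such as PSO approximates this front when the decision space is too large to enumerate.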
An Introduction and Comparison of the Application of Cloud and Fog in IoT

This paper introduces the definitions of cloud computing and fog computing and expounds their differences. Firstly, the new problems encountered by the Internet of Things (IoT) are put forward, along with the shortcomings of cloud computing in solving them. The paper then explains the advantages of fog computing over cloud computing in addressing these problems, and finally introduces the new challenges that fog computing faces.

Zheng Li, Yilin Wang
Distributed Cloud Monitoring Platform Based on Log In-Sight

Log management plays an essential role in identifying and troubleshooting problems in a distributed system. However, when we conducted log analysis on big data clusters, Kubernetes clusters, and AI capability clusters, we found it difficult to find a distributed cloud monitoring platform that met our requirements. We therefore propose a distributed cloud monitoring platform based on log insight, which achieves unified log insight across big data clusters, K8s clusters, and AI capability clusters. Through this system, developers can intuitively monitor and analyze business system data and cluster operation monitoring data. Once there is a problem in the log, the system will immediately alert on, locate, display, and track the message, improving the readability of log information for administrators. In the data collection process, Filebeat and Metricbeat are combined, so the system can collect not only ordinary log data but also the metric data of well-known mature systems (such as operating systems, Memcached, MySQL, Docker, Kafka, etc.). Besides, the system monitors and manages the status of cluster nodes through BeatWatcher. Finally, we develop the system and verify its feasibility and performance by simulation.

E. Haihong, Yuanxing Chen, Meina Song, Meijie Sun
A Self-adaptive PSO-Based Dynamic Scheduling Method on Hierarchical Cloud Computing

Edge computing has been envisioned as an emerging and prospective computing paradigm for its advantage of low latency, which it achieves by using local resources. However, edge resources are usually limited and cannot meet end users' diversified requirements. The cloud computing paradigm can provide scalable and centralized resources with high computational capabilities, but it suffers from latency issues. It is therefore suggested to combine both computing paradigms to improve the performance of mobile applications. In this paper, we propose a self-adaptive dynamic scheduling approach based on hierarchical heterogeneous clouds. Our scheduling mechanism considers not only schedule planning but also dynamic scheduling on heterogeneous clouds. Firstly, a self-adaptive scheduling mechanism based on a meta-heuristic optimization algorithm, particle swarm optimization (PSO), is presented for schedule planning. Then a dynamic scheduling mechanism on a dynamic partial workflow model is proposed for optimization during execution. Finally, extensive experiments comparing our proposal with other methods demonstrate its effectiveness.

Shunmei Meng, Weijia Huang, Xiaolong Xu, Qianmu Li, Wanchun Dou, Bowen Liu
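A minimal PSO kernel of the kind this scheduler builds on, shown here minimizing a toy quadratic cost standing in for a schedule objective. The hyperparameters (inertia 0.7, cognitive/social weights 1.5) are conventional defaults, not the paper's self-adaptive settings:

```python
import random

def pso(cost, dim=2, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization: each particle remembers its own
    best position, the swarm tracks a global best, and velocities blend
    inertia, cognitive pull, and social pull."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5      # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso(lambda x: sum(xi * xi for xi in x))
```

In a real scheduler, `cost` would evaluate a candidate schedule (e.g., makespan plus latency penalties) instead of this toy sphere function.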
Application of Bluetooth Low Energy Beacons and Fog Computing for Smarter Environments in Emerging Economies

The Internet of Things (IoT) has already begun to drastically alter the way people operate in various industries across the world, as well as how we interact with our environment. A lot of progress is being made toward achieving the envisioned goals of IoT; however, numerous challenges remain to be addressed. Bluetooth Low Energy (BLE) and its beacon protocol have pushed forward innovations in the field of microlocation, which is a key area of IoT. The emergence of the fog computing architecture has also led to reduced dependence on cloud architecture by shifting resources towards users and local applications. Together these two innovations provide ideal conditions for the adoption of IoT in emerging economies, which are known to be both financially and technically constrained. In this paper we provide an overview of the key innovations based on BLE and fog computing that are suitable for adoption in emerging economies. We further present three reference models for indoor navigation systems which can help advance research on the application of BLE and fog computing.

Mingxu Sun, Kondwani Michael Kamoto, Qi Liu, Xiaodong Liu, Lianyong Qi
Near-Data Prediction Based Speculative Optimization in a Distribution Environment

Apache Hadoop is an open-source software framework, distributed under the Apache 2.0 license, that supports data-intensive distributed applications; with it, consumers no longer deal with complex configuration of software and hardware but only pay for cloud services on demand. The performance of the cloud platform therefore becomes all the more important in a consumer-centric environment. An imbalance in the distribution of slow tasks means that straggling tasks can greatly affect the Hadoop framework. By monitoring task progress in real time and copying potential stragglers to a different node, speculative execution (SE) improves the probability of finishing these backup tasks before the original ones, and thus offers a solution for handling straggling tasks. At present, however, the performance of the Hadoop system is unsatisfying because of erroneous judgement and inappropriate selection of backup nodes in the current SE policy. This paper proposes an optimized SE strategy based on near-data prediction. In this strategy, the first step is to gather real-time task execution information, and the remaining runtime of each task is predicted by a local prediction method; a proper backup node is then chosen according to near data and actual demand. The strategy also includes a cost-effectiveness model to push SE performance to its peak. The results show that using this strategy in Hadoop effectively improves the accuracy of selecting alternative tasks and performs better in heterogeneous Hadoop environments in various situations, which benefits both consumers and the cloud platform.

Mingxu Sun, Xueyan Wu, Dandan Jin, Xiaolong Xu, Qi Liu, Xiaodong Liu
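For context, the baseline SE heuristic that such work improves on extrapolates a task's remaining runtime from its observed progress rate and flags outliers for backup copies. A rough sketch (all task samples hypothetical, thresholds illustrative):

```python
def remaining_time(progress, elapsed):
    """Extrapolate remaining runtime from the observed progress rate:
    rate = progress / elapsed, remaining = (1 - progress) / rate."""
    rate = progress / elapsed
    return (1.0 - progress) / rate

def stragglers(tasks, slowdown=1.5):
    """Flag tasks whose estimated finish time exceeds `slowdown` times the
    average estimate, as candidates for speculative backup copies."""
    est = {tid: elapsed + remaining_time(p, elapsed)
           for tid, (p, elapsed) in tasks.items()}
    avg = sum(est.values()) / len(est)
    return [tid for tid, t in est.items() if t > slowdown * avg]

# Hypothetical (progress-fraction, elapsed-seconds) samples for four map tasks.
tasks = {"m1": (0.8, 40), "m2": (0.75, 42), "m3": (0.2, 45), "m4": (0.9, 38)}
```

Here `m3` has made only 20% progress in 45 s, so its extrapolated finish time (225 s) far exceeds the others and it is the lone backup candidate. The paper's point is that this simple extrapolation misjudges tasks in heterogeneous clusters, motivating better local prediction and backup-node selection.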
Rendering of Three-Dimensional Cloud Based on Cloud Computing

Cloud modeling and real-time rendering are of great significance in virtual scene simulation. Lighting models and rendering technology are image synthesis methods, introduced from computer graphics, that aim to simplify virtual scene simulation and enhance the fidelity of complex scenes. Currently, the simulation algorithms applied to cloud scenes tend to be complicated and computationally intensive. Hence, it is still a key challenge to implement more efficient algorithms that render higher-quality three-dimensional clouds. In this paper, a computation-reducing and time-saving method is designed to address this challenge. Technically, a lighting model and rendering technology for weather research and forecasting (WRF) data are proposed to create the cloud scenes. Project files are then uploaded to the cloud system for real-time rendering, which largely saves time and reduces the cost of rendering. Finally, adequate experimental analyses verify the effectiveness and efficiency of the proposed scheme.

Yonghua Xie, Xiaoyong Kou, Ping Li, Xiaolong Xu

Cloud-Based Data Analytics

Frontmatter
Distributed Stochastic Alternating Direction Method of Multipliers for Big Data Classification

In recent years, classification with big data sets has become one of the latest research topics in machine learning, and distributed classification has received much attention from industry and academia. The Alternating Direction Method of Multipliers (ADMM) is a widely used method for solving learning problems in a distributed manner due to its simplicity and scalability. However, distributed ADMM usually converges slowly and thus suffers from expensive time costs in practice. To overcome this limitation, we propose a novel distributed stochastic ADMM (DS-ADMM) algorithm for big data classification based on the MPI framework, which formulates the original problem as a series of sub-problems solved by a cluster of multiple computers (nodes). In particular, we exploit a stochastic method for sub-problem optimization in parallel to further improve time efficiency. The experimental results show that our proposed distributed algorithm enhances the performance of ADMM and can be effectively applied to big data classification.

Huihui Wang, Xinwen Li, Xingguo Chen, Lianyong Qi, Xiaolong Xu
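A minimal consensus-ADMM sketch showing the x-, z-, u-update structure that distributed variants parallelize. This is plain averaging of per-node quadratic losses, not the paper's stochastic MPI-based DS-ADMM; the closed-form x-update is specific to this toy loss:

```python
def consensus_admm(data, rho=1.0, iters=100):
    """Consensus ADMM for  min_z  sum_i 0.5*(z - a_i)^2,  split so that
    node i holds its datum a_i, a local copy x_i, and a scaled dual u_i.
    The minimizer is the mean of the data."""
    n = len(data)
    x = [0.0] * n          # local primal variables (one per node)
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # global consensus variable
    for _ in range(iters):
        # x-update (local, parallelizable): closed form for quadratic loss.
        x = [(a + rho * (z - ui)) / (1.0 + rho) for a, ui in zip(data, u)]
        # z-update (gather): average of local estimates plus duals.
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # u-update (local): accumulate consensus violations.
        u = [ui + xi - z for xi, ui in zip(x, u)]
    return z

z = consensus_admm([1.0, 2.0, 3.0, 4.0])   # converges to the mean, 2.5
```

In a real distributed classifier each node would hold a data shard and solve a local (sub)gradient step in the x-update, with MPI carrying the gather step.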
Personalized Recommendation Algorithm Considering Time Sensitivity

Aiming to solve the problem of item popularity bias, this paper introduces the prevalence of items into user interest modeling and proposes an item popularity model based on user interest features. Traditional models do not take into account the stability of users' interests, which makes those interests difficult to capture. To cope with this limitation, we propose a time-sensitive and stabilized interest similarity model that calculates the similarity of user interests. Moreover, by combining these two similarity models with weight factors, we develop a novel algorithm named IPSTS. To evaluate the proposed approach, experiments are performed; the results indicate that Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are significantly reduced compared with those of traditional collaborative filtering algorithms.

Fuzhen Sun, Haiyan Zhuang, Jin Zhang, Zhen Wang, Kai Zheng
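One common way to make a similarity measure time-sensitive, sketched below, is to decay each co-rating's contribution exponentially with its age. This is an illustrative form with invented data and a made-up decay rate, not the paper's exact IPSTS model:

```python
import math

def decayed_agreement(u, v, now, lam=0.05, r_max=5.0):
    """Time-sensitive similarity over co-rated items: each item contributes
    its rating agreement scaled by exp(-lam * age), so recent co-ratings
    dominate.  u, v map item -> (rating, timestamp)."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    score = 0.0
    for item in common:
        ru, tu = u[item]
        rv, tv = v[item]
        age = now - max(tu, tv)                       # age of the newer rating
        score += math.exp(-lam * age) * (1.0 - abs(ru - rv) / r_max)
    return score / len(common)

# Same ratings, but one pair of co-ratings is fresh and the other is stale.
recent = decayed_agreement({"a": (4, 99)}, {"a": (4, 99)}, now=100)
old = decayed_agreement({"a": (4, 20)}, {"a": (4, 20)}, now=100)
```

With the decay term, identical agreement counts for much more when it is recent, which is the intuition behind weighting similarity by interaction time.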
Cloud-Based Master Data Platform for Smart Manufacturing Process

With the development of technology and the application of the industrial Internet of Things, a large amount of data is generated in research and development (R&D) processes in the manufacturing domain, including manufacturing procedures, enterprise management, and product transactions. However, these data are usually maintained in different departments, which results in information isolation and prevents related data from being synchronized. This wastes storage space on redundant data and human resources on coordinating essential information. To address these problems, we propose a cloud-based data management platform architecture to collect and maintain data from isolated domains and distributed departments. A graph database is employed to store the data, emphasizing the relations between entities, and Master Data Management is deployed to link entities across standalone databases. The features of the proposed platform improve the efficiency of inspecting, managing, and updating information across databases.

Lei Ren, Ziqiao Zhang, Chun Zhao, Guojun Zhang
A Semi-supervised Classification Method for Hyperspectral Images by Triple Classifiers with Data Editing and Deep Learning

A semi-supervised classification method for hyperspectral remote sensing images based on a convolutional neural network (CNN) and modified tri-training is proposed. Abstract features are captured by training a CNN model with the pixels' vectors as inputs. Based on the extracted high-level features, different classifiers produce different outputs on the same training set, because different types of classifiers have diverse characteristics. Thus, taking multiple classifiers' results into consideration integrates different prediction labels synthetically at a high level and produces more credible results. Meanwhile, the number of training samples for hyperspectral images is limited, which hinders the classification effect. Inspired by the tri-training algorithm, we utilize three different classifiers to classify the hyperspectral images based on the extracted high-level features in a semi-supervised mode, jointly training the classifiers and updating the training sample set when labeled samples are scarce. During the iterative updating process, we pick confident samples into the training set via randomization and majority voting for data editing. Experiments performed on two real hyperspectral images reveal that our method performs very well in terms of classification accuracy.

Guoming Zhang, Junshu Wang, Ge Shi, Jie Zhang, Wanchun Dou
A Survey of Image Super Resolution Based on CNN

With the advent of the information age, images are everywhere, whether in military use or in daily life. As a medium for people to obtain information, images have therefore become more and more important. With the fast development of deep convolutional neural networks (DCNNs), single-image super-resolution (SISR) has become one of the techniques that have made great breakthroughs in recent years. In this paper, we give a brief survey of the task of SISR. In general, we introduce the SR problem, some recent SR methods, public benchmark datasets, and evaluation metrics. Finally, we conclude by noting some points that could be further improved in the future.

Qianxiong Xu, Yu Zheng
Design and Development of an Intelligent Semantic Recommendation System for Websites

When searching for interesting content within a specific website, how to describe the initial need by selecting proper keywords is a critical problem. The character-matching search functions of websites can hardly meet users' requirements, and building rules over the webpage content of a specific website is uneconomical. This paper, based on the framework of the Lucene engine, applied a semantic ontology, the calculation of the relevance of word entries, and the semantics of keywords to design an intelligent semantic recommendation system with the Jena secondary semantic analysis technique. Subsequently, the expanded keywords were semantically ranked based on a term frequency analysis technique. Meanwhile, the ontology algorithm and term relevance were introduced as dynamic weight values. Finally, in the text content retrieval process, the search results were ranked based on the previous relevance weights. The experimental results show that the system designed in this paper is not only easy to develop but also capable of expanding users' queries and recommending relevant content. Further, the system can improve the precision and recall of website search results.

Zhiqiang Zhang, Heping Yang, Di Yang, Xiaowei Jiang, Nan Chen, Mingnong Feng, Ming Yang
A Lightweight Neural Network Combining Dilated Convolution and Depthwise Separable Convolution

To reduce the excessive cost of neural networks, this paper proposes a lightweight neural network combining dilated convolution and depthwise separable convolution. Firstly, dilated convolution is used to expand the receptive field during the convolution process while maintaining the number of convolution parameters, which extracts more high-level global semantic features and improves the classification accuracy of the network. Secondly, the use of depthwise separable convolution reduces the network parameters and the computational complexity of convolution operations. Experimental results on the CIFAR-10 dataset show that the proposed method improves classification accuracy while effectively compressing the network size.

Wei Sun, Xijie Zhou, Xiaorui Zhang, Xiaozheng He
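The savings claimed above can be checked by counting weights. A standard k x k convolution needs k*k*C_in*C_out parameters; a depthwise separable one needs k*k*C_in (depthwise) plus C_in*C_out (pointwise); dilation widens the receptive field at no parameter cost. A sketch with example channel counts (the layer sizes are illustrative, not the paper's architecture):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights of a k x k standard convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """One k x k depthwise filter per input channel, then a 1x1 pointwise
    convolution to mix channels."""
    return k * k * c_in + c_in * c_out

def dilated_receptive_field(k, dilation):
    """A dilated k x k kernel covers k + (k-1)*(dilation-1) positions per
    axis with the same number of weights as the undilated kernel."""
    return k + (k - 1) * (dilation - 1)

std = standard_conv_params(3, 64, 128)        # 3*3*64*128 = 73728 weights
sep = depthwise_separable_params(3, 64, 128)  # 576 + 8192  = 8768 weights
```

For this 64-to-128-channel layer the separable form is over 8x smaller, and a dilation of 2 grows the 3x3 kernel's receptive field to 5x5 for free, which is the combination the paper exploits.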
Resource Allocation Algorithms of Vehicle Networks with Stackelberg Game

With the emergence and development of the Internet of Vehicles (IoV), higher demands are placed on vehicle response speed and ultra-low delay. Cloud computing services are not well suited to reducing latency and response time; mobile edge computing (MEC) is a promising solution to this problem. In this paper, we introduce MEC into the IoV and propose a vehicle edge resource management framework consisting of fog nodes (FNs), data service agents (DSAs), and cars. We propose a dynamic service area partitioning algorithm that enables a DSA to adjust its service area and provide more efficient service to vehicles. A resource allocation framework based on the Stackelberg game model is proposed to analyze the pricing problem of FNs and the data resource strategy of DSAs, and we use a distributed iterative algorithm to solve for the game equilibrium. Our proposed resource management framework is verified by numerical results, which show that the allocation efficiency of FN resources among the cars is ensured, and we also obtain a subgame perfect Nash equilibrium.

Ying Zhang, Guang-Shun Li, Jun-Hua Wu, Jia-He Yan, Xiao-Fei Sheng
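The Stackelberg structure can be illustrated with a scalar toy: a leader (FN-like seller) sets a price anticipating the best response of a follower (DSA-like buyer) with a made-up logarithmic utility. All numbers and functional forms below are invented for illustration, not the paper's model:

```python
def follower_best_response(price, a=4.0):
    """Follower maximizes a*ln(1+q) - price*q; setting the derivative to
    zero gives q* = a/price - 1 (clipped at zero)."""
    return max(0.0, a / price - 1.0)

def leader_profit(price, cost=1.0, a=4.0):
    """Leader earns (price - cost) per unit sold at the follower's best
    response -- the defining feature of a Stackelberg leader is that it
    optimizes against this anticipated reaction."""
    return (price - cost) * follower_best_response(price, a)

# Leader grid-searches its price over [1, 4] against the follower's reaction.
grid = [1.0 + 0.001 * i for i in range(3001)]
p_star = max(grid, key=leader_profit)
# Analytically the optimum is sqrt(cost * a) = 2.0 for these toy numbers.
```

The distributed iterative algorithms used in such frameworks replace the grid search with alternating price/quantity updates, but the equilibrium logic (leader optimizes against the follower's best response) is the same.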
Research on Coordination Control Theory of Greenhouse Cluster Based on Cloud Computing

With the development of modern agriculture, the clustering of greenhouses has become prominent. Traditional single-greenhouse management is oriented to farmers, making it difficult for upper management to obtain greenhouse information conveniently, to transmit monitoring results in real time, or to regulate the internal environment of greenhouse clusters in real time. Moreover, the scope of management of large-scale agricultural companies keeps growing, so an integrated management platform is urgently needed; the emergence of cloud computing technology has made this management model possible. On the other hand, a greenhouse cluster is a non-linear, complex large system, which requires not only improving the capacity of the greenhouse cluster but also taking into account the utilization of regional resources. Traditional control methods make insufficiently efficient use of regional resources, existing control theory cannot meet these requirements, and the computing power of local equipment cannot meet the needs of massive data processing. Therefore, based on a cloud computing platform and drawing on the theory of complex systems, this paper carries out research on coordinated control theory for greenhouse clusters, establishes a cloud-computing-based greenhouse cluster management system, and designs a description model of the coordinated control system for greenhouse clusters; on this basis, the greenhouse cluster coordinated control structure model is designed. This study provides a reference for the control of modern greenhouse clusters and has theoretical significance and application value for the development of greenhouse cluster coordinated control theory.

Xiangnan Zhang, Wenwen Gong, Yifei Chen, Dan Li, Yawei Wang
Anomalous Taxi Route Detection System Based on Cloud Services

Machine learning is very popular right now, and it can be applied to many problems in daily life. Taxi services provide a convenient means of transportation, especially for those who travel to an unfamiliar place, but there is a risk that the passenger gets overcharged for unnecessary mileage. To help the passenger determine whether the taxi driver has made a detour, we propose a cloud-based system that applies machine learning algorithms to detect anomalous taxi trajectories for the passenger. This paper briefly describes research on several state-of-the-art detection methods. It also demonstrates the system architecture design in detail and gives the reader a big picture of which parts of the application have been implemented.

Yu Zi, Yun Luo, Zihao Guang, Lianyong Qi, Taoran Wu, Xuyun Zhang
Collaborative Recommendation Method Based on Knowledge Graph for Cloud Services

As the number of cloud services and the volume of user interest data soar, it is hard for users to find suitable cloud services within a short time. A suitable automatic cloud service recommendation system can effectively solve this problem. In this work, we propose KGCF, a novel method to recommend cloud services that meet users' needs. We model user-item and item-item bipartite relations in a knowledge graph and derive property-specific user-item relation features from it, which are fed to a collaborative filtering algorithm for top-N item recommendation. We evaluate the proposed method on top-N recommendation over the MovieLens 1M dataset and show that it outperforms a number of state-of-the-art recommendation systems. In addition, we show that it performs well in terms of long-tail recommendation, which means that more kinds of cloud services can be recommended to users instead of only hot items.

Weijia Huang, Qianmu Li, Xiaoqian Liu, Shunmei Meng
Efficient Multi-user Computation Scheduling Strategy Based on Clustering for Mobile-Edge Computing

Mobile edge computing (MEC) is a new paradigm that can meet the growing computing needs of mobile applications: terminal devices can transfer tasks to nearby MEC servers to improve the quality of computing. In this paper, we investigate the multi-user computation offloading problem for mobile edge computing. We study two different computation models, local computing and edge computing. First, we derive the expressions for time delay and energy consumption under local and edge computing. Then, we propose a server partitioning algorithm based on clustering, together with a task scheduling and offloading algorithm for a multi-user MEC system. We formulate the task offloading decision problem as a multi-user game, which always has a Nash equilibrium. Our proposed algorithms are verified by numerical results, which show that the clustering-based scheduling strategy can significantly reduce energy consumption and overhead.

Qing-Yan Lin, Guang-Shun Li, Jun-Hua Wu, Ying Zhang, JiaHe Yan
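The multi-user offloading game with a guaranteed Nash equilibrium can be illustrated by a toy congestion game solved via best-response iteration. The costs and congestion function below are invented; the point is only the structure (edge cost grows with the number of offloaders, and repeated best responses reach a stable profile):

```python
def edge_cost(n_offloaders, base=2.0, congestion=1.0):
    """Per-user cost of edge execution grows with how many users offload."""
    return base + congestion * n_offloaders

def best_response_dynamics(local_costs, max_rounds=100):
    """Users start local; each round, every user switches to whichever
    choice is cheaper given the others' current decisions.  For this
    congestion-style game the iteration settles at a pure Nash equilibrium:
    no user can reduce its cost by unilaterally switching."""
    offload = [False] * len(local_costs)
    for _ in range(max_rounds):
        changed = False
        for i, lc in enumerate(local_costs):
            others = sum(offload) - offload[i]
            wants_edge = edge_cost(others + 1) < lc
            if wants_edge != offload[i]:
                offload[i] = wants_edge
                changed = True
        if not changed:
            break
    return offload

# Hypothetical local-execution costs for four users.
decisions = best_response_dynamics([10.0, 6.0, 3.0, 9.0])
```

With these numbers the user with cheap local execution (cost 3.0) stays local while the other three offload; adding a fourth offloader would push the edge cost above that user's local cost, so the profile is stable.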
Grazing Trajectory Statistics and Visualization Platform Based on Cloud GIS

To meet the needs of ranchers and grassland livestock management departments for the visualization of grazing behavior, this study develops a statistics and visualization platform for herd trajectories. Web AppBuilder for ArcGIS and ArcGIS Online were used to implement statistics and visualization of herd trajectories. The walking speed, walking trajectory, and feed intake of the herd were calculated by a GP service on the server, and the results were published to the ArcGIS Online platform. The relevant information was analyzed and displayed by Web AppBuilder for ArcGIS calling the data on ArcGIS Online. The platform achieves visualization of the walking speed, walking trajectory, and feed intake of the herd, and can provide technical and data support for relevant management departments to monitor grazing information and study the living habits of herds.

Dong Li, Chuanjian Wang, Qilei Wang, Tianying Yan, Ju Wang, Wanlong Bing
Cloud-Based AGV Control System

With the development of artificial intelligence technology, the application of mobile robots is becoming increasingly extensive. How to control mobile robots in complex network environments is one of the core problems hampering the promotion and application of AGV clusters. In view of this, this paper studies control technology in the cloud big data environment and realizes the decision-making, planning, and control of AGVs in the cloud. Firstly, cloud-edge data sharing and cross-domain collaboration are used to realize intelligent adaptive association of heterogeneous data; we then establish a collaborative hierarchical information cloud processing model and design a new AGV sensing structure based on devices such as laser radar and ultrasonic sensors. Finally, a set of AGV motion control methods for the cloud environment is proposed. The experimental results show that coordination among network nodes in the heterogeneous AGV system in the cloud environment is stable overall with a low delay rate, which will greatly promote the application of AGVs in various complex network environments.

Xiangnan Zhang, Wenwen Gong, Haolong Xiang, Yifei Chen, Dan Li, Yawei Wang

Cloud Applications

Frontmatter
A Parallel Drone Image Mosaic Method Based on Apache Spark

MapReduce has been widely used to process large-scale data in the past decade. Among such cloud computing applications, we pay special attention to distributed mosaic methods for numerous drone images, which suffer from costly processing time. In this paper, the computing framework Apache Spark is introduced to pursue instant responses to large numbers of drone image mosaic requests. To assure high performance of Spark-based algorithms in a complex cloud computing environment, we specially design a distributed and parallel drone image mosaic method; after being modified for fast parallel execution, all steps of the proposed mosaic method can run in an efficient and parallel manner. We implement the proposed method on the Apache Spark platform and apply it to a few self-collected datasets. Experiments indicate that our Spark-based parallel algorithm is highly efficient and is robust when processing low-quality drone aerial images.

Yirui Wu, Lanbo Ge, Yuchi Luo, Deqiang Teng, Jun Feng
CycleSafe: Safe Route Planning for Urban Cyclists

Cyclist numbers in major cities are constantly increasing whilst traffic conditions continue to worsen. This poses a major issue for cyclists who attempt to share congested roads with motor vehicles. This paper shows that there is not enough work being done to improve the safety of cyclists on the road, and proposes a solution to this problem in the form of a route planning application. Current cyclist route planning applications do not take safety factors like traffic, rain or visibility into account when providing cycle routes. We use Auckland city as a case study to explore our solution. The traffic and weather data in Auckland are acquired by using Google, Bing and Wunderground APIs. An evaluation of our solution shows that our system successfully implements a route planning application that routes users away from unsafe traffic conditions, thus improving cyclist safety.

Mehdi Shah, Tianqi Liu, Sahil Chauhan, Lianyong Qi, Xuyun Zhang
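The core routing idea (steer cyclists away from unsafe conditions rather than just minimizing distance) can be sketched as a shortest-path search over edges weighted by distance times a risk factor. The network, risk values, and place names below are invented for illustration:

```python
import heapq

def safest_route(graph, start, goal):
    """Dijkstra over edges weighted by distance_km * risk, so a longer but
    safer road can beat a short congested one.  graph[u] is a list of
    (v, distance_km, risk) with risk >= 1 for bad traffic or weather."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, km, risk in graph.get(u, []):
            nd = d + km * risk
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], goal
    while node != start:          # walk predecessors back to the start
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path))

# Hypothetical network: the direct road is short but heavily congested.
graph = {
    "home": [("main_rd", 2.0, 3.0), ("cycleway", 3.0, 1.0)],
    "main_rd": [("uni", 1.0, 3.0)],
    "cycleway": [("uni", 2.5, 1.0)],
}
route = safest_route(graph, "home", "uni")
```

In a full system the risk factor per edge would be refreshed from live traffic and weather feeds (as the paper does with Google, Bing, and Wunderground data) before each query.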
Prediction of Future Appearances via Convolutional Recurrent Neural Networks Based on Image Time Series in Cloud Computing

In recent years, cloud computing has become a prevalent platform to run artificial intelligence (AI) and deep learning applications. With cloud services, AI models can be deployed easily for the convenience of users. However, although cloud service providers such as Amazon Web Services (AWS) offer various services to support AI applications, the design of AI models remains the key in many specific applications such as forecasting or prediction: for example, how can we forecast the future appearance of ornamental plants or pets? To deal with this problem, in this paper we develop a convolutional recurrent neural network (CRNN) model to forecast an object's future appearance from its past appearance images. Specifically, we study the problem of using a pine tree's past appearance images to forecast its future appearance images. We use plant simulation software to generate pine tree growing images to train the model. As a result, our model can generate the future appearance image of the pine tree, and the generated images are very similar to the true images. This means our model can work well to forecast future appearance based on an image series.

Zao Zhang, Xiaohua Li
Video Knowledge Discovery Based on Convolutional Neural Network

Against the background of Internet+education, video course resources are becoming more and more abundant; at the same time, the Internet hosts a large number of course videos that are unnamed or inconsistently named. Identifying course names in these abundant video course teaching resources is increasingly important for improving learner efficiency. This study utilizes a deep neural network framework that incorporates a simple-to-implement transformation-invariant pooling operator (TI-pooling). After the audio and image information in a course video is processed by the convolution and pooling layers of the model, the TI-pooling operator further extracts features, capturing the most important information of the course video, from which we identify the course name. The experimental results show that the accuracy of course name recognition obtained by taking both image and audio as input to the CNN model is higher than that obtained with only image, only audio, or image and audio without the TI-pooling operation.
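The TI-pooling operation described in this abstract can be illustrated with a minimal sketch: features are extracted from several transformed copies of the input, and the element-wise maximum across copies is kept, making the pooled representation invariant to those transformations. The function name and data layout below are illustrative, not taken from the paper.

```python
def ti_pool(feature_maps):
    """Transformation-invariant pooling: feature_maps is a list of
    equal-length feature vectors, one per transformed copy of the
    input; keep the element-wise maximum across the copies."""
    return [max(features) for features in zip(*feature_maps)]

# Features extracted from three transformed copies of one input:
pooled = ti_pool([[0.2, 0.9, 0.1],
                  [0.7, 0.4, 0.3],
                  [0.5, 0.8, 0.6]])
print(pooled)  # [0.7, 0.9, 0.6]
```

Because the maximum is taken per feature, the pooled vector responds identically no matter which transformed copy of the input produced the strongest activation.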

JinJiao Lin, ChunFang Liu, LiZhen Cui, WeiYuan Huang, Rui Song, YanZe Zhao
Time-Varying Water Quality Analysis with Semantical Mining Technology

Water is one of the most important natural resources. With the development of industry, water resources are harmed by various types of pollution. However, the water pollution process is affected by many factors with high complexity and uncertainty. How to accurately predict water quality and generate a scheduling plan in time is an urgent problem to be solved. In this paper, we propose a novel method with semantical mining technology to discover knowledge contained in historical water quality data, which can be further used to improve forecast accuracy and achieve early pollution warning, thus effectively avoiding unnecessary economic losses. Specifically, the proposed semantical mining method consists of two stages, namely frequent sequence extraction and association rule mining. In the first stage, we propose the FOFM (Fast One-Off Mining) algorithm to extract frequently occurring sequences from large quantities of water quality data, which serve as the input of the second stage. In the association rule mining stage, we propose the PB-ITM (Prefix-projected Based-InterTransaction Mining) algorithm to find relationships between frequently occurring water pollution events, which can be regarded as knowledge explaining the water pollution process. Through experimental comparisons, we conclude that the proposed method can yield flexible, accurate and diverse patterns of water quality events.

Jun Feng, Qinghan Yu, Yirui Wu
Data-Driven Fast Real-Time Flood Forecasting Model for Processing Concept Drift

The hydrological data of small and medium watersheds evolve with the passage of time. The rainfall-runoff patterns in these data often develop over time, and models established for the analysis of such data soon become inapplicable. To address the limited adaptability and accuracy of existing data-driven real-time flood forecasting models for small and medium watersheds under concept drift, we update the data-driven model using incremental training based on a support vector machine (SVM) model and a gated recurrent unit (GRU) model respectively. According to fast real-time flood forecasting test results for the Tunxi watershed, Anhui Province, China, the incrementally updated data-driven model can more accurately predict the moment when a flood begins to rise and the peak of flood stream-flow, and it is an effective tool for real-time flood forecasting in small and medium watersheds.

Le Yan, Jun Feng, Yirui Wu, Tingting Hang
A Survey on Dimension Reduction Algorithms in Big Data Visualization

In practical applications, the data sets we deal with are typically high dimensional, which not only slows training but also makes the data difficult for people to analyze and understand; this is known as "the curse of dimensionality". Therefore, dimensionality reduction plays a key role in multidimensional data analysis. It can improve the performance of models and assist people in understanding the structure of data. These methods are widely used in fields such as finance and medicine (e.g., adverse drug reaction analysis). In this paper, we present a number of dimension reduction algorithms and compare their strengths and shortcomings. For more details about these algorithms, please visit our Dagoo platform via www.dagoovis.com.
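As a concrete example of the family of algorithms surveyed here, a minimal principal component analysis (PCA) on 2-D data can be written from scratch: center the data, form the 2×2 covariance matrix, and project onto its leading eigenvector. This sketch is only illustrative and is not tied to any specific algorithm compared in the survey.

```python
import math

def pca_1d(points):
    """Project 2-D points onto their first principal component."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix [[sxx, sxy], [sxy, syy]]
    sxx = sum(x * x for x, _ in centered) / n
    syy = sum(y * y for _, y in centered) / n
    sxy = sum(x * y for x, y in centered) / n
    # Leading eigenvalue via the characteristic polynomial
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    # Corresponding eigenvector (handle the axis-aligned case)
    if abs(sxy) > 1e-12:
        vx, vy = lam - syy, sxy
    else:
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    return [x * vx + y * vy for x, y in centered]

# Points lying exactly on the line y = x project with no loss:
scores = pca_1d([(0, 0), (1, 1), (2, 2), (3, 3)])
```

For points on y = x, the component direction is (1/√2, 1/√2), so each score equals the centered coordinate sum divided by √2.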

Zheng Sun, Weiqing Xing, Wenjun Guo, Seungwook Kim, Hongze Li, Wenye Li, Jianru Wu, Yiwen Zhang, Bin Cheng, Shenghui Cheng
Quantum Searchable Encryption for Cloud Data Based on Delegating Quantum Computing

Based on delegating quantum computing (DQC), a DQC model that accommodates multi-qubit states and composite quantum circuits is first given. In this model, a single client with limited quantum ability can give her encrypted data to a powerful but untrusted quantum data server and let the data server compute over the encrypted data without decryption, where the computation is a quantum circuit composed of multiple quantum gates. The client then generates the decryption key to decrypt the computing result according to the circuit of the computation. However, this model cannot handle multiple clients accessing or computing on encrypted cloud data in the cloud environment. To solve this problem, we let the client outsource key generation to a trusted key server, which together with the data server composes the quantum cloud center. The clients only perform X and Z operations according to the encryption or decryption key. Then, combined with the Grover algorithm, a quantum searchable encryption scheme for cloud data based on delegating quantum computing is proposed in this paper. The data server mainly uses the Grover algorithm to perform search computation on the encrypted data. Moreover, a concrete example of our scheme is discussed, in which the data server searches for 2 target items among 8 items of encrypted data. Finally, the security of our proposed scheme is analysed, showing that it protects the security of the data.
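The search step can be illustrated with a toy classical simulation of Grover's algorithm on 8 items with 2 marked targets, matching the example size mentioned in the abstract. With N = 8 and M = 2, a single Grover iteration rotates all amplitude onto the marked states. This sketch simulates amplitudes directly; it is not an implementation of the paper's encrypted protocol.

```python
import math

N = 8                         # database size (3 qubits)
marked = {3, 5}               # indices of the 2 target items
amp = [1 / math.sqrt(N)] * N  # uniform superposition

def oracle(a):
    # Flip the sign of the amplitude of every marked item.
    return [-x if i in marked else x for i, x in enumerate(a)]

def diffusion(a):
    # Inversion about the average: the operator 2|s><s| - I.
    mean = sum(a) / len(a)
    return [2 * mean - x for x in a]

amp = diffusion(oracle(amp))  # one Grover iteration
p_marked = sum(amp[i] ** 2 for i in marked)
print(round(p_marked, 6))  # 1.0 — a measurement yields a target with certainty
```

The number of iterations is about (π/4)√(N/M); here that is ≈ 1.57, and one iteration already reaches success probability 1 because sin θ = √(M/N) = 1/2 gives (2k+1)θ = 90° at k = 1.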

Yinsong Xu, Wenjie Liu, Junxiu Chen, Lian Tong
Quantum Solution for the 3-SAT Problem Based on IBM Q

Quantum computing is currently considered to be a new computing model that will have a disruptive impact on the future. Building on its leading information and communication technology advantages, IBM launched the IBM Q Experience cloud service platform and achieved staged research results in its quantum simulator and programming framework. In this paper, we propose a quantum solution for the 3-SAT problem, which includes three steps: constructing the initial state, computing the unitary $$U_f$$ implementing the black-box function f, and performing the inversion about the average. In addition, the corresponding experimental verification of an instance of the Exactly-1 3-SAT problem with QISKit, which can connect to IBM Q remotely, is depicted. The experimental results not only show the feasibility of the quantum solution, but also serve to evaluate the functionality of IBM Q devices.

Ying Zhang, Yu-xiang Bian, Qiang Fan, Junxiu Chen
Cloud Grazing Management and Decision System Based on WebGIS

In order to improve the informatization level of animal husbandry and solve the problem of unreasonable utilization of grassland resources, this study drew on 3S technology, making full use of the advantages of GIS information processing and cloud computing resources, to develop a cloud grazing management and decision system based on WebGIS. The system took mainstream Web browsers as the client platform and provides functions for displaying the real-time position of a herd, querying historical trajectories, monitoring grassland growth and estimating grassland utilization. On the server side, the spatial data engine ArcSDE together with SQL Server 2012 was applied to store data. Tomcat 7.0 was used as the Web server and ArcGIS Server 10.3 as the GIS server. Data processing was automated by calling the ArcPy package through Python scripts, and the results were published automatically to ArcGIS Server for client display. The system can provide a decision-making basis for ranchers and grassland livestock management departments to manage grazing and grassland. It enables ranchers to make reasonable and effective grazing plans, so as to make balanced utilization of grassland resources and promote the sustainable development of grazing animal husbandry.

Dong Li, Chuanjian Wang, Tianying Yan, Qilei Wang, Ju Wang, Wanlong Bing
Application Design of Provincial Meteorological Service System Based on National Unified Meteorological Data Environment

A unified data environment is established in the China Integrated Meteorological Information Sharing System (CIMISS) for the national meteorological service. This paper discusses the application flow and scheme for establishing a provincial meteorological service system based on this unified data environment. It creates a seamless integration between local systems and CIMISS without changing the business processes and system architecture of existing meteorological service systems. In the design scheme, meteorological data are obtained by multiple services based on the unified data environment and data interface. According to the different data structures, analytical methods for discrete data, gridded data and raster data are discussed. Finally, efficient and rapid visualization of meteorological data is realized. The results show that the application flow and scheme for using CIMISS in a provincial meteorological service system are effective and feasible. It is hoped that this study can provide a reference for meteorological service systems accessing the unified national meteorological data environment.

Qing Chen, Ming Yang, You Zeng, Yefeng Chen, Shucheng Wu, Yun Xiao, Yueying Hong
Moving Vehicle Detection Based on Optical Flow Method and Shadow Removal

Video-based moving vehicle detection is an important prerequisite for vehicle tracking and vehicle counting. However, in natural scenes, the conventional optical flow method cannot accurately detect the boundary of a moving vehicle due to the shadow it casts. In order to solve this problem, this paper proposes an improved moving vehicle detection algorithm based on the optical flow method and shadow removal. The proposed method first uses the optical flow method to roughly detect the moving vehicle, then uses a shadow detection algorithm based on the HSV color space to mark the shadow position after threshold segmentation, and further combines a region-labeling algorithm to realize shadow removal and accurately detect the moving vehicle. Experiments are carried out in complex traffic scenes with shadow interference. The experimental results show that the proposed method effectively eliminates the impact of shadow interference on moving vehicle detection and realizes real-time and accurate detection of moving vehicles.
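The HSV shadow test described in this abstract can be sketched for a single pixel using only the standard library: a foreground pixel is flagged as shadow when its brightness (V) drops relative to the background within a bounded ratio while hue and saturation stay close. The threshold values below are illustrative placeholders, not the paper's tuned parameters.

```python
import colorsys

def is_shadow(fg_rgb, bg_rgb,
              alpha=0.3, beta=0.95, tau_s=0.12, tau_h=0.08):
    """HSV-based shadow test for one pixel (RGB values in [0, 1]).
    alpha/beta bound the brightness ratio, tau_s/tau_h bound the
    saturation and hue differences; all four are illustrative."""
    fh, fs, fv = colorsys.rgb_to_hsv(*fg_rgb)
    bh, bs, bv = colorsys.rgb_to_hsv(*bg_rgb)
    if bv == 0:
        return False
    ratio = fv / bv
    hue_diff = min(abs(fh - bh), 1 - abs(fh - bh))  # hue wraps around
    return (alpha <= ratio <= beta
            and abs(fs - bs) <= tau_s
            and hue_diff <= tau_h)

# A uniformly darkened version of a gray background reads as shadow:
print(is_shadow((0.30, 0.30, 0.30), (0.60, 0.60, 0.60)))  # True
```

A genuinely different-colored foreground pixel (e.g., a red vehicle body over gray road) fails the saturation test and is kept as vehicle rather than removed as shadow.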

Min Sun, Wei Sun, Xiaorui Zhang, Zhengguo Zhu, Mian Li
Refactor Business Process Models for Efficiency Improvement

Since business processes describe the core value chain of enterprises, thousands of business processes are modeled in business process models. A problem is how to improve the efficiency of these models. In this paper, we propose an approach to refactor these models for efficiency improvement. More specifically, we first identify false sequence relations that affect model efficiency based on the sequence relation matrix and the dependency relation matrix. Second, we refactor a business process model by constructing and transforming a dependency graph without altering its output result. After refactoring, the concurrent execution of business tasks in the original models can be maximized such that its efficiency can be improved. Experimental results show the effectiveness of our approach.

Fei Dai, Miao Liu, Qi Mo, Bi Huang, Tong Li
A New Model of Cotton Yield Estimation Based on AWS

Timely and precise yield estimation is of great significance to agricultural management and macro-policy formulation. In order to improve the accuracy and applicability of cotton yield estimation models, this paper proposes a new method called SENP (Seedling Emergence and Number of Peaches) based on Amazon Web Services (AWS). Firstly, using the high-resolution visible-light data obtained by an Unmanned Aerial Vehicle (UAV), the spatial position of each cotton seedling in the region was extracted by the U-Net deep learning model. Secondly, Sentinel-2 data were used to analyze the correlation between the multi-temporal Normalized Difference Vegetation Index (NDVI) and the actual yield, so as to determine the weighting factor of NDVI for each period in the model. Subsequently, to determine the number of bolls, the growth state of the cotton was graded. Finally, combined with cotton boll weight, boll opening rate and other information, the cotton yield in the experimental area was estimated by the SENP model, and the precision was verified against measured yield data. The experimental results reveal that the U-Net model can effectively extract cotton seedling information from the background with high accuracy: the precision rate, recall rate and F1 value reached 93.88%, 97.87% and 95.83% respectively. Time-series NDVI can accurately reflect the growth state of cotton, yielding the predicted boll number of each cotton plant, which greatly improves the accuracy and universality of the yield estimation model. The determination coefficient (R2) of the yield estimation model reached 0.92, indicating that the SENP model is an effective method for cotton yield estimation. This study also demonstrated the potential and advantages of combining the AWS platform with SENP, owing to its powerful cloud computing capacity, especially for deep learning, time-series crop monitoring and large-scale yield estimation. This research can provide reference information for cotton yield estimation and cloud computing platform applications.
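The multi-temporal NDVI weighting step can be sketched as follows: NDVI is computed per period from the red and near-infrared bands, and a growth score is formed as a weighted sum whose weights would come from each period's correlation with measured yield. The band values and weights below are made-up illustrations, not Sentinel-2 data or the paper's fitted coefficients.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one observation."""
    return (nir - red) / (nir + red)

def growth_score(observations, weights):
    """Weighted multi-temporal NDVI; in SENP-like models the weights
    reflect each period's correlation with the actual yield."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * ndvi(nir, red)
               for w, (nir, red) in zip(weights, observations))

# Three hypothetical periods, each with (NIR, Red) reflectances:
obs = [(0.60, 0.20), (0.70, 0.15), (0.65, 0.25)]
score = growth_score(obs, [0.2, 0.5, 0.3])
```

NDVI lies in [-1, 1], with dense green vegetation near the top of that range, so a weighted score preserves that interpretation while emphasizing the most yield-correlated periods.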

Quan Xu, Chuanjian Wang, Jianguo Dai, Peng Guo, Guoshun Zhang, Yan Jiang, Hongwei Shi

Cloud Security and Privacy

Frontmatter
Intelligent System Security Event Description Method

In a cloud environment, the control logic and data forwarding of network devices are separated from each other. The control layer is responsible for the centralized management of network nodes; after it acquires the entire network topology, it can automatically generate a visualized network structure. Security analysts can thus grasp the connection status of all devices in the network within the control domain. Network topology generation based on control layer information is direct and efficient, and can greatly simplify the description of security events in the cloud environment. At the same time, the separated structure also hides the specific details of the underlying network devices. Petri nets, as a formal description tool, can be used to describe such a structure. Based on the cloud environment structure, this paper combines the advantages of CORAS modeling and analysis with object-oriented Petri-net theory, and proposes a COP (CORAS-based Object Oriented Petri-net)-based intelligent system security event description method, which models the complexity and dynamics of cloud environment security events.

Jun Hou, Qianmu Li, Yini Chen, Shunmei Meng, Huaqiu Long, Zhe Sun
Designing a Bit-Based Model to Accelerate Query Processing Over Encrypted Databases in Cloud

Database users have started moving toward the use of cloud computing as a service because it provides computation and storage needs at affordable prices. However, for most users, privacy is a major concern, as they cannot control data access once their data are outsourced, especially if the cloud provider is curious about their data. Data encryption is an effective way to address privacy concerns, but executing queries over encrypted data is a problem that needs attention. In this research, we introduce a bit-based model to execute different relational algebra operators over encrypted databases in the cloud without decrypting the data. To encrypt data, we use the randomized encryption algorithm AES-CBC to provide the maximum security level. The idea is based on classifying attributes as sensitive and non-sensitive, where only sensitive attributes are encrypted. For each sensitive attribute, the table's owner predefines the possible partition domains on which the tuples will be encoded into bit vectors before encryption. We store the bit vectors in an additional column in the encrypted table in the cloud and use those bits to retrieve only the part of the encrypted records that are candidates for a specific query. We implemented and evaluated our model and found that it is practical and succeeds in reducing the range of retrieved encrypted records to less than 30% of the whole set of encrypted records in a table.
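The bit-vector filtering idea can be sketched without the encryption layer: the owner predefines partition domains for a sensitive attribute, each tuple is encoded as a bit vector marking the partition its value falls in, and a range query retrieves only the tuples whose vector intersects the query's partition mask. The attribute, partitions, and values below are illustrative; the actual AES-CBC encryption of the stored values is omitted from this sketch.

```python
# Illustrative predefined partition domains for a sensitive attribute:
PARTITIONS = [(0, 30_000), (30_000, 60_000), (60_000, 100_000)]

def encode(value):
    """Bit vector with a 1 in the partition containing the value."""
    return [1 if lo <= value < hi else 0 for lo, hi in PARTITIONS]

def query_mask(lo, hi):
    """Bit vector marking every partition that overlaps [lo, hi)."""
    return [1 if lo < p_hi and hi > p_lo else 0
            for p_lo, p_hi in PARTITIONS]

def candidates(rows, lo, hi):
    """Ids whose stored bit vector intersects the query mask; only
    these (still-encrypted) rows would be fetched for decryption."""
    mask = query_mask(lo, hi)
    return [rid for rid, bits in rows
            if any(b & m for b, m in zip(bits, mask))]

rows = [(rid, encode(v)) for rid, v in
        [(1, 12_000), (2, 45_000), (3, 80_000), (4, 55_000)]]
print(candidates(rows, 40_000, 70_000))  # [2, 3, 4]
```

Because the server sees only partition-membership bits, it can prune non-matching rows without learning the exact encrypted values; the client then decrypts the candidate set and applies the precise predicate.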

Sultan Almakdi, Brajendra Panda
Review of Research on Network Flow Watermarking Techniques

In a cloud environment, a framework for cross-domain collaborative tracking can find intruders hidden behind autonomous domains by linking these domains effectively. Each autonomous domain in the framework can select an appropriate intrusion tracking technology to implement intra-domain tracking according to its own operating rules and communication characteristics. As an active traffic analysis technology, network flow watermarking can accurately locate the real positions of intruders hidden behind intermediate hosts (stepping stones) and anonymous communication systems. Furthermore, it has many advantages, such as a high precision rate, a low false alarm rate and a short observation time. Owing to these advantages and its high efficiency for intra-domain tracking, it has become a hot spot in academic research in recent years. Therefore, this paper does the following work: (1) surveys network flow watermarking technology; (2) summarizes the implementation framework of network flow watermarking; (3) analyzes the principles and implementation processes of several mainstream network flow watermarking schemes; (4) analyzes threats to network flow watermarking.

Hui Chen, Qianmu Li, Shunmei Meng, Haiyuan Shen, Kunjin Liu, Huaqiu Long
A Multi-objective Virtual Machine Scheduling Algorithm in Fault Tolerance Aware Cloud Environments

In modern cloud datacenters, virtual machine (VM) scheduling is a complex problem, especially when the factor of service reliability is taken into consideration. Failures may occur on physical servers while they are running cloud users' applications. To provide high-reliability service, cloud providers can adopt fault tolerance techniques, which in turn influence performance criteria of VM scheduling, such as the actual execution time and users' expenditure. However, only a few studies consider fault tolerance and its influence. In this paper, we investigate the fault tolerance aware VM scheduling problem and formulate it as a bi-objective optimization model with quality of service (QoS) constraints. The proposed model tries to minimize users' total expenditure and, at the same time, maximize the successful execution rate of their VM requests. Both objectives are important concerns for improving user satisfaction, which offers users sufficient incentive to stay in the cloud and keeps the cloud ecosystem sustainable. Based on a defined cost-efficiency factor, a heuristic algorithm is then developed. Experimental results show that fault tolerance indeed significantly influences several performance criteria of VM scheduling, and that the developed algorithm can decrease users' expenditure, improve the successful execution rate of their VM requests and thus perform better in fault tolerance aware cloud environments.

Heyang Xu, Pengyue Cheng, Yang Liu, Wei Wei, Wenjie Zhang
PSVM: Quantitative Analysis Method of Intelligent System Risk in Independent Host Environment

Quantitative risk analysis of security incidents is a typical non-linear classification problem with limited samples. Having the advantages of strong generalization ability and fast learning speed, the Support Vector Machine (SVM) can solve classification problems with limited samples. To handle multi-classification, the Decision Tree Support Vector Machine (DT-SVM) algorithm is used to construct a multi-classifier, reducing the number of classifiers and eliminating non-partitionable regions. The Particle Swarm Optimization (PSO) algorithm is introduced to cluster training samples and improve the classification accuracy of the constructed multi-classifier. In ubiquitous networks, the cost of information extraction and processing is significantly lower than in traditional networks. This paper presents a quantitative security risk analysis method based on the Particle Swarm Optimization Support Vector Machine (PSO-SVM), and classifies flow data obtained from ubiquitous networks so as to realize quantitative analysis of their security risk. In the experiments, the KDD99 data set is selected to verify the effectiveness of the algorithm. The experimental results show that the proposed PSO-SVM classification method is more accurate than the traditional one. For ubiquitous networks, this paper builds an experimental environment to illustrate the implementation process of the PSO-SVM-based security risk analysis method. The risk analysis results show that the analyzed risk values in the ubiquitous network fit the trend of the actual values well, which means quantitative risk analysis can be achieved.

Shanming Wei, Haiyuan Shen, Qianmu Li, Mahardhika Pratama, Meng Shunmei, Huaqiu Long, Yi Xia
Coordinated Placement of Meteorological Workflows and Data with Privacy Conflict Protection

Cloud computing is adopted by various industries for its powerful computing capability for solving complex calculations. The massive data of meteorological departments has typical big data characteristics, so cloud computing has gradually been applied to handle a large number of meteorological services. Cloud computing increases the computational speed of meteorological services, but data transmission between nodes also incurs additional transmission time. At the same time, since a large number of computing tasks are cooperatively processed by multiple nodes, improving the resource utilization of each node is also an important evaluation indicator. In addition, as data confidentiality requirements increase, privacy conflicts exist between certain data items, so conflicting data should not be placed on the same node. To cope with these challenges, the meteorological application is modeled and a collaborative placement method for tasks and data based on the Differential Evolution algorithm (CPDE) is proposed. The Non-dominated Sorting Differential Evolution (NSDE) algorithm is used to jointly optimize the average data access time, the average resource utilization of nodes and the data conflict degree. Finally, extensive experimental evaluations and comparative analyses verify the efficiency of our proposed CPDE method.

Tao Huang, Shengjun Xue, Yumei Hu, Qing Yang, Yachong Tian, Dan Zeng
Method and Application of Homomorphic Subtraction of the Paillier Cryptosystem in Secure Multi-party Computational Geometry

A secure two-party computation protocol for the distance between two private points is important and can be used as a building block for secure multi-party computation (SMC) problems in the field of geometry. Li's solution to this problem, based on the $$OT_m^1$$ oblivious transfer protocol, is inefficient, and drawbacks remain when it is applied to computing the relationship between a private circle and a private point. Two protocols based on the Paillier cryptosystem were also proposed by Luo et al.; they are more efficient than Li's solution, but some drawbacks remain. In this paper, we propose improving the efficiency of the secure protocol by using homomorphic subtraction in the Paillier cryptosystem, and apply it to the secure two-party computation of the distance between two private points. Using our solution, the SMC protocol for the relationship between a private point and a private circle area is more efficient and more private than Li's solution. In addition, our solution is also more efficient than the BGN-based solution, and much better when the plaintext ranges over a large domain.
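The homomorphic subtraction exploited here follows from Paillier's additive homomorphism: multiplying E(m1) by the modular inverse of E(m2) yields an encryption of m1 − m2. A toy pure-Python sketch (with deliberately small primes — real deployments need primes of 1024 bits or more — and Python 3.9+ for `math.lcm` and `pow(x, -1, m)`) illustrates the property; it is not the paper's full distance protocol, which needs extra interaction because Paillier cannot multiply plaintexts under encryption.

```python
import math
import random

p, q = 293, 433                        # toy primes only
n, lam = p * q, math.lcm(p - 1, q - 1)
n2, g = n * n, p * q + 1               # common g = n + 1 choice
mu = pow(lam, -1, n)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu = lam^-1 mod n.
    m = ((pow(c, lam, n2) - 1) // n * mu) % n
    return m - n if m > n // 2 else m  # map back to a signed range

def h_sub(c1, c2):
    """E(m1) * E(m2)^(-1) mod n^2  ==  E(m1 - m2 mod n)."""
    return (c1 * pow(c2, -1, n2)) % n2

# A coordinate difference computed entirely on ciphertexts:
diff = decrypt(h_sub(encrypt(70), encrypt(25)))
print(diff)  # 45
```

Negative differences come out correctly too, because values above n/2 are interpreted as negative residues.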

Meng Liu, Yun Luo, Chi Yang, Dongliang Xu, Taoran Wu
A Secure Data Access Control Scheme Without Bilinear Pairing in Edge Computing

Edge computing, as an extension of cloud computing, offloads personal private data to edge nodes at the edge of the Internet of Things (IoT) network to decrease transmission delay and network congestion. Thus, a major security concern in edge computing is access control for shared data. In this paper we introduce a scheme without bilinear pairing encryption (Un-BPE) to provide access control in edge and cloud communication. To achieve confidentiality, verifiability and access control, the secret key is generated by the Key Trust Authority (KTA), end users and an edge node together, and saved in the cloud platform; verification is performed by the adjacent edge node; and encryption and decryption are performed by the terminal device. We verify the efficiency of our scheme in terms of the security of the encryption algorithm and the performance of the system. The analysis of the proposed scheme reveals better computational efficiency.

Xiaofei Sheng, Junhua Wu, Guangshun Li, Qingyan Lin, Yonghui Yao
Simulations on the Energy Consumption of WRF on Meteorological Cloud

In this paper, we evaluate the energy consumption of meteorological applications on a meteorological cloud with different kinds of processors, taking the WRF (Weather Research and Forecasting) model as the typical workload. Three major factors are included in the evaluation: energy consumption, execution time, and parallelism. Moldable parallel tasks accept a range of parallelisms, but once a job enters the execution state, its parallelism cannot be changed. Unlike most past research, our system supports slotted time, and every job needs a number of time slots to execute. We give a detailed analysis of the DVFS (Dynamic Voltage and Frequency Scaling) model for WRF and evaluate the performance of three kinds of CPUs in different aspects. Finally, based on the analysis of the attributes of the three CPUs and the nonlinear speedup of WRF under different amounts of resources, simulation results are given to address the energy consumption of WRF under different environments. We hope our research can help enhance the scheduling of parallel tasks.

Junwen Lu, Yongsheng Hao, Xianmei Hua
A Survey of Information Intelligent System Security Risk Assessment Models, Standards and Methods

This paper describes the theoretical hierarchy of information security risk assessment, which includes models, standards and methods. Firstly, this paper generalizes and analyzes security risk assessment models on the macro scale and proposes a common security risk assessment model by reviewing the development history of such models. Secondly, on the meso scale, this paper compares different security risk assessment standards and classifies them into information security risk assessment standards, information security risk assessment management standards and information security risk assessment management implementation guidelines. Then, on the micro scale, this paper generalizes security risk assessment methods and analyzes the security risk assessment implementation standards, i.e., the specific implementation methods of security assessment work. Finally, this paper proposes a cloud security event description and risk assessment analysis framework based on the cloud environment and the proposed common security risk assessment model.

Zijian Ying, Qianmu Li, Shunmei Meng, Zhen Ni, Zhe Sun

Smart Grid and Innovative Frontiers in Telecommunications

Frontmatter
Multiple Time Blocks Energy Harvesting Relay Optimizing with Time-Switching Structure and Decoding Cost

Energy harvesting (EH) is of prime importance for enabling Internet of Things (IoT) networks. Although energy harvesting relays have been considered in the literature, most studies do not account for processing costs, such as the decoding cost in a decode-and-forward (DF) relay. However, it is known that the decoding cost amounts to a significant fraction of the circuit power required for receiving a codeword. Hence, in this work, we consider an EH-DF relay with decoding cost and maximize the average number of bits relayed by it with a time-switching architecture. To achieve this, we first propose a time-switching frame structure consisting of three phases: (i) an energy harvesting phase, (ii) a reception phase and (iii) a transmission phase. We obtain the optimal length of each of the above phases and the communication rates that maximize the average number of bits relayed. We consider the radio frequency (RF) energy harvested by the relay to come from a dedicated transmitter, and study the multiple-block case, in which energy is allowed to flow among blocks, in contrast to the single-block case, in which it is not. By exploiting the convexity of the optimization problem, we derive analytical optimum solutions under the EH scenario. One of the optimal receiving rates for the relay is the same as in the single-block case. We also provide numerical simulations to verify our theoretical analysis.
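The three-phase time-switching trade-off can be illustrated with a coarse numeric grid search under a simplified, normalized model: a block of unit length is split into harvesting, reception, and transmission phases; the harvested energy must cover both the per-bit decoding cost and the transmit energy, and the bits relayed are limited by both the reception and the transmission phase. All constants, and the model itself, are illustrative stand-ins rather than the paper's formulation, which solves the problem analytically via convexity.

```python
def best_split(eta_p=2.0, c_dec=0.4, n0=1.0, rate=1.0, steps=100):
    """Grid-search the harvesting/reception/transmission split
    (t1, t2, t3) with t1 + t2 + t3 = 1 at a fixed rate (bits/s).
    Energy budget: harvested eta_p * t1 must cover decoding
    (c_dec per received bit) plus transmit energy n0*(2**rate - 1)*t3."""
    best_bits, best_split_found = 0.0, None
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            t1, t2 = i / steps, j / steps
            t3 = 1.0 - t1 - t2
            bits = rate * min(t2, t3)  # relay forwards what it decodes
            energy = c_dec * rate * t2 + n0 * (2 ** rate - 1) * t3
            if eta_p * t1 >= energy and bits > best_bits:
                best_bits, best_split_found = bits, (t1, t2, t3)
    return best_bits, best_split_found

bits, (t1, t2, t3) = best_split()
```

Even this crude search reproduces the qualitative structure of the problem: the reception and transmission phases balance each other, and the harvesting phase shrinks to the minimum that keeps the energy constraint tight.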

Chenxu Wang, Yanxin Yao, Zhengwei Ni, Rajshekhar V. Bhat, Mehul Motani
Spectrum Sharing in Cognitive Radio Enabled Smart Grid: A Survey

Smart grid is viewed as the next-generation electric power system, meeting the demands of communication and power delivery in an intelligent manner. With the large-scale deployment of electric power systems, smart grid faces challenges from large data volumes and high spectrum needs. To realize efficient spectrum utilization in the face of spectrum scarcity, cognitive radio (CR) is incorporated into the smart grid, yielding the cognitive radio enabled smart grid. A cognitive radio enabled smart grid coexists with the primary network by employing CR technologies including spectrum sensing, sharing and access. Spectrum sharing is an important CR technology that realizes network coexistence without harmful interference through radio resource allocation. In this paper, a comprehensive survey is provided to review state-of-the-art research on spectrum sharing in the cognitive radio enabled smart grid. We identify the network architecture and communication technology issues of the cognitive radio enabled smart grid, and illustrate the investigation of spectrum sharing in different radio resource dimensions to highlight its superiority in efficient spectrum utilization.

Shuo Chen
Colorization of Characters Based on the Generative Adversarial Network

With the development of the economy, global demand for electricity is increasing, and the requirements for the stability of the power grid are correspondingly higher. Making the power grid intelligent is an inevitable choice in the research and development of power systems. Aiming at the security of the smart grid operating environment, this paper proposes a gray-scale image colorization method based on a Generative Adversarial Network, which is used for intelligent monitoring of network equipment at night and realizes efficient monitoring of people and the environment in different scenarios. Building on the original Generative Adversarial Network, the method uses a network improved with residual (ResNet) blocks to improve the integrity of the generated image information, and adds a least squares loss to the generative network to narrow the distance between the samples and the decision boundary. Comparison experiments on the self-built CASIA-Plus-Colors high-quality character dataset verify that the proposed method performs better at colorizing images with different backgrounds.

Changtong Liu, Lin Cao, Kangning Du
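
The least squares loss mentioned in the abstract follows the LSGAN formulation, which replaces the cross-entropy GAN objective with squared distances to target labels. A minimal sketch is below; the label values a, b, c are the usual LSGAN conventions and the NumPy formulation is an illustration, not the paper's implementation:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    # Least-squares discriminator loss: push scores of real samples
    # toward label b and scores of generated samples toward label a.
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake, c=1.0):
    # Generator loss: pull generated-sample scores toward c, i.e. toward
    # the decision boundary, penalizing samples that sit far from it.
    return 0.5 * np.mean((d_fake - c) ** 2)
```

Unlike the sigmoid cross-entropy loss, these quadratic terms keep providing a gradient even for samples the discriminator already classifies confidently, which is what "narrowing the distance between the sample and the decision boundary" refers to.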
Enhanced LSTM Model for Short-Term Load Forecasting in Smart Grids

With the rapid development of smart grids, significant research has been devoted to methodologies for short-term load forecasting (STLF) due to its significance in forecasting electric power demand. In this paper, an enhanced LSTM model is proposed to upgrade the state-of-the-art LSTM network by exploiting the long periodic information of the load, which is missed by the standard LSTM model due to its constraint on input length. In order to distill information from a long load sequence while keeping the input sequence short enough for an LSTM, the long load sequence is reshaped into a two-dimensional matrix whose dimensions accord with the periodicity of the load. Accordingly, two LSTM networks are run in parallel: one takes the rows as input to extract the short-term temporal pattern of the load, while the other takes the columns as input to distill the periodicity information. A multi-layer perceptron combines the two outputs for more accurate load forecasting. This model can exploit information from a much longer load sequence with only linear growth in complexity, and the experimental results verify its considerable improvement in accuracy over the standard LSTM model.

Jianing Guo, Yuexing Peng, Qingguo Zhou, Qingquan Lv
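
The reshaping step described in the abstract can be sketched as follows. This is a minimal NumPy illustration: the 24-hour period, the 7-day window, and the synthetic load values are assumptions for demonstration, not taken from the paper:

```python
import numpy as np

# Hypothetical hourly load series covering 7 days (period P = 24 hours).
P, D = 24, 7                          # period length and number of periods
load = np.arange(P * D, dtype=float)  # stand-in for real load measurements

# Reshape the long 1-D sequence into a (D, P) matrix: one row per day.
matrix = load.reshape(D, P)

# Branch 1 ("row" LSTM input): the most recent day, capturing the
# short-term temporal pattern within one period.
rows_input = matrix[-1]               # shape (P,)

# Branch 2 ("column" LSTM input): the same hour across all days,
# capturing the long periodic pattern the standard LSTM misses.
cols_input = matrix[:, -1]            # shape (D,)

assert rows_input.shape == (P,) and cols_input.shape == (D,)
```

Because each branch only ever sees a sequence of length P or D rather than P*D, the total input length covered grows with the product of the two while each LSTM's unrolled length stays short, which is the source of the claimed linear growth in complexity.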
Time-Switching Energy Harvesting Relay Optimizing Considering Decoding Cost

Energy harvesting (EH) from natural and man-made sources is of prime importance for enabling Internet of Things (IoT) networks. Although energy harvesting relays in a relay network, which form building blocks of an IoT network, have been considered in the literature, most studies do not account for processing costs, such as the decoding cost in a decode-and-forward (DF) relay. However, it is known that the decoding cost amounts to a significant fraction of the circuit power required for receiving a codeword. Hence, in this work, we are motivated to consider an EH-DF relay with the decoding cost and maximize the average number of bits relayed by it with a time-switching architecture. To achieve this, we first propose a time-switching frame structure consisting of three phases: (i) an energy harvesting phase, (ii) a reception phase, and (iii) a transmission phase. We obtain the optimal lengths of each of the above phases and the communication rates that maximize the average number of bits relayed. We consider two EH scenarios: (a) when the radio frequency (RF) energy to be harvested by the relay is transmitted from a dedicated transmitter, and (b) when the energy is harvested at the relay from the ambient environment. By exploiting the convexity of the optimization problem, we derive analytical optimum solutions under the above two scenarios and provide numerical simulations to verify our theoretical analysis.

Wanqiu Hu, Yanxin Yao, Zhengwei Ni, Rajshekhar V. Bhat, Mehul Motani
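
The three-phase trade-off described in the abstract can be illustrated numerically. The sketch below brute-forces the two free phase lengths; all parameters, the fixed unit receiving rate, and the linear decoding-cost model are illustrative assumptions, not the paper's formulation (which derives the optimum analytically from convexity):

```python
import numpy as np

# Illustrative parameters (all hypothetical): frame length T in seconds,
# harvested power rho, decoding cost kappa in joules per bit, gain g.
T, rho, kappa, g, noise = 1.0, 2.0, 0.1, 1.0, 1.0

def bits_relayed(tau_e, tau_r):
    tau_t = T - tau_e - tau_r             # transmission phase gets the rest
    if tau_t <= 0:
        return 0.0
    energy = rho * tau_e                  # energy harvested in phase (i)
    rx_bits = tau_r * 1.0                 # receive at a fixed unit rate
    e_dec = kappa * rx_bits               # decoding cost at the DF relay
    e_tx = energy - e_dec                 # energy left for transmission
    if e_tx <= 0:
        return 0.0
    tx_rate = np.log2(1 + g * e_tx / (tau_t * noise))
    return min(rx_bits, tau_t * tx_rate)  # cannot forward more than decoded

# Brute-force search over the two free phase lengths.
grid = np.linspace(0.01, 0.98, 98)
best = max((bits_relayed(a, b), a, b) for a in grid for b in grid if a + b < T)
```

The `min` reflects that relayed bits are capped by both what was decoded in phase (ii) and what the remaining energy can push through in phase (iii); subtracting `e_dec` is where the decoding cost changes the optimal split.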
A Resource Allocation Scheme for 5G C-RAN Based on Improved Adaptive Genetic Algorithm

Cloud-Radio Access Network (C-RAN) is a novel mobile network architecture in which baseband resources are pooled, helping operators deal with the challenges caused by non-uniform traffic and fast-growing user demands. The main idea of C-RAN is to divide the base station into a baseband unit (BBU) and a remote radio head (RRH), and then centralize the BBUs to form a BBU pool. The BBU pool is virtualized and shared among the RRHs, improving statistical multiplexing gains by allocating baseband and radio resources dynamically. In this paper, aiming at the problem of dynamic resource allocation and optimization in 5G C-RAN, a resource allocation strategy based on an improved adaptive genetic algorithm (IAGA) is proposed, in which the crossover rate and mutation rate of the genetic algorithm are optimized. Simulation results show that the proposed resource allocation strategy outperforms both a common frequency reuse algorithm and the traditional genetic algorithm (GA).

Xinyan Ma, Yingteng Ma, Dongtang Ma
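
One common way to make crossover and mutation rates adaptive is to scale them by an individual's fitness relative to the population, in the spirit of Srinivas and Patnaik's adaptive GA. The sketch below is illustrative; the rate bounds are assumptions and the paper's exact IAGA scheme may differ:

```python
def adaptive_rates(f, f_avg, f_max, pc_max=0.9, pc_min=0.6,
                   pm_max=0.1, pm_min=0.01):
    # Above-average individuals get lower crossover/mutation rates
    # (good solutions are preserved); below-average individuals get the
    # maximum rates (poor solutions are disrupted to explore more).
    if f_max == f_avg:                 # degenerate population: keep exploring
        return pc_max, pm_max
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return (pc_min + (pc_max - pc_min) * scale,
                pm_min + (pm_max - pm_min) * scale)
    return pc_max, pm_max
```

With fixed rates, a GA either converges prematurely (rates too low) or destroys good individuals (rates too high); adapting the rates per individual is what lets an IAGA-style allocator outperform the plain GA baseline mentioned above.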
A Novel Adaptive Multiple-Access Scheme for Wireless Chip Area Network in the Smart Grid System

The design and construction of the smart grid system in 5G ultra-dense networks needs to be effectively integrated with mobile communication technology. The wireless chip area network (WCAN), as an application of the smart grid, has promising research potential. Focusing on the issue of multi-user network communication, this paper proposes an adaptive time-hopping pulse position modulation (TH-PPM) multiple-access scheme applicable to WCAN. Combined with the specific applications of WCAN, the wireless channel characteristics of intra/inter-chip communication are investigated and the bit error rate (BER) performance of the TH-PPM multiple-access system is analyzed. Then, based on these results, an adaptive TH-PPM multiple-access distribution mechanism is proposed and an intelligent transmission mechanism is designed to appropriately select the monopulse signal-to-noise ratio, BER, and transmission rate of intra/inter-chip links in WCAN. Finally, the performance is analyzed through simulation and compared with fixed multiple-access technology. The results show that, on the premise of ensuring the wireless interconnection quality of service of intra/inter-chip links, this scheme can allocate system rate and power resources properly, strengthen transmission performance, and address the limitations of fixed multiple-access technology. The findings presented in this paper provide a reference for large-capacity multi-user multiple-access communication.

Xin-Yue Luo, Hao Gao, Xue-Hua Li
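
The basic TH-PPM idea behind the scheme above is that each user's pulse position within a frame is set by a user-specific time-hopping code (providing multiple access), while a small additional shift encodes the data bit. A minimal sketch follows; the frame, chip, and shift durations and the example codes are illustrative assumptions:

```python
# Minimal TH-PPM pulse-position sketch (all parameters illustrative):
# each user sends one pulse per frame; its position is the frame start,
# plus a chip offset from the user's time-hopping code (multiple access),
# plus a PPM shift delta carrying the data bit.
T_frame, T_chip, delta = 100.0, 10.0, 2.0   # durations in ns

def pulse_times(bits, th_code):
    # Pulse time of the j-th bit: frame start + hop chip + bit-dependent shift.
    return [j * T_frame + th_code[j % len(th_code)] * T_chip + b * delta
            for j, b in enumerate(bits)]

user1 = pulse_times([0, 1, 0], th_code=[2, 5, 1])
user2 = pulse_times([1, 1, 0], th_code=[7, 0, 4])
# Distinct hopping codes keep the two users' pulses from colliding.
assert set(user1).isdisjoint(user2)
```

An adaptive variant, as proposed in the paper, would additionally tune parameters such as the code assignment and transmission rate to the measured channel conditions rather than keeping them fixed.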
Backmatter
Metadata
Title
Cloud Computing, Smart Grid and Innovative Frontiers in Telecommunications
Editors
Xuyun Zhang
Guanfeng Liu
Meikang Qiu
Wei Xiang
Tao Huang
Copyright Year
2020
Electronic ISBN
978-3-030-48513-9
Print ISBN
978-3-030-48512-2
DOI
https://doi.org/10.1007/978-3-030-48513-9
