
2018 | Book

Smart Computing and Communication

Third International Conference, SmartCom 2018, Tokyo, Japan, December 10–12, 2018, Proceedings


About this book

This book constitutes the refereed proceedings of the Third International Conference on Smart Computing and Communication, SmartCom 2018, held in Tokyo, Japan, in December 2018.

The 45 papers presented in this volume were carefully reviewed and selected from 305 submissions. They focus on topics from smart data to smart communications, as well as smart cloud computing to smart security.

Table of contents

Frontmatter
A Two-Way Identity Authentication Scheme Based on Dynamic Password

To address the security issues that have emerged in RFID authentication in recent years, a mutual identity authentication scheme based on dynamic passwords is proposed after describing and analyzing the problems that RFID authentication technology encounters; the scheme counters replay attacks, man-in-the-middle attacks, and other security threats. In addition, this paper surveys the underlying authentication technology. The proposed method uses the tag privacy level negotiated between Tag and Reader to achieve mutual authentication: it not only enhances the privacy protection of the tag carrier and protects the identity privacy of the Reader holder, but also offers a measurable efficiency advantage.

Baohua Zhao, Ningyu An, Xiao Liang, Chunhui Ren, Zhihao Wang
Adaptive Quality Control Scheme to Improve QoE of Video Streaming in Wireless Networks

Recently, with the spread of smart devices and the development of networks, the demand for video streaming has increased, and HTTP adaptive streaming has been gaining attention. HTTP adaptive streaming can guarantee QoE (Quality of Experience) because it selects the video quality according to the network state. However, in wireless networks, delay and packet loss rates are high and the available bandwidth fluctuates sharply, so QoE is degraded when the quality is selected on the basis of the measured bandwidth alone. In this paper, we propose an adaptive quality control scheme to improve the QoE of video streaming in wireless networks. The proposed scheme calculates two factors, the buffer underflow probability and the instability, by considering the buffer state and the changes of quality level. Using these factors, the proposed scheme defines a quality control region that consists of four sub-regions, and the video quality is determined by applying a different control strategy to each sub-region. Experimental results show that the proposed scheme improves QoE compared to existing quality control schemes by minimizing buffer underflow and unnecessary quality changes while maximizing the average video quality.

Minsu Kim, Kwangsue Chung
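The region-based quality selection described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the thresholds, the instability measure, and all function names are hypothetical assumptions.

```python
# Hypothetical sketch of region-based quality control for HTTP adaptive
# streaming. Thresholds and the instability measure are illustrative
# assumptions, not values from the paper.

def instability(recent_levels):
    """Fraction of consecutive segment requests whose quality level changed."""
    if len(recent_levels) < 2:
        return 0.0
    switches = sum(1 for a, b in zip(recent_levels, recent_levels[1:]) if a != b)
    return switches / (len(recent_levels) - 1)

def choose_quality(buffer_s, current_level, max_level, recent_levels,
                   low_buf=5.0, high_buf=15.0, unstable=0.5):
    """Pick the next quality level from four sub-regions of (buffer, instability)."""
    inst = instability(recent_levels)
    if buffer_s < low_buf:                       # risky region: avoid underflow
        return max(0, current_level - 1)
    if buffer_s > high_buf and inst < unstable:  # safe and stable: upgrade
        return min(max_level, current_level + 1)
    if inst >= unstable:                         # unstable: hold the level
        return current_level
    return current_level                         # intermediate region: hold

# Example: a low buffer forces a downgrade.
print(choose_quality(3.0, 4, 7, [4, 4, 4]))  # → 3
```

The four branches stand in for the four sub-regions; a real controller would also fold in the measured bandwidth.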
Travel Review Analysis System with Big Data (TRAS)

This paper introduces a process for online travel review analysis in the Thai language, employed in a recommender system supporting travelers (TRAS). The process covers three main categories: attractions, accommodation, and gastronomy. The filtering and queuing results obtained with MapReduce form the input for three main steps: (1) the analysis process for element scores, (2) the analysis process for the total scores of the reviews, and (3) the travel guidance system based on users' selections. Extensive tests revealed that the system operates properly with respect to functional and non-functional requirements. We employed 60,000 travel reviews covering all categories to test the analysis process for steps (1) and (2) and found that the number of adjectives and modifiers in each review affects the analysis time. In contrast to previous recommender systems, TRAS applies a more diverse and transparent rating and ranking approach: travelers can select the features they are interested in and get personalized results, so a given location might achieve different rankings for different travelers.

Chakkrit Snae Namahoot, Sopon Pinijkitcharoenkul, Michael Brückner
Vulnerability Assessment for PMU Communication Networks

The smart grid introduces many salient features such as wide-area situational awareness, precise demand response, and substation automation. These features are enabled by data communication networks that facilitate the collection, transfer, and processing of a wide variety of data regarding different components of the smart grid. As a result, the smart grid's heavy dependence on data inevitably poses a great challenge to ensuring data integrity and authenticity. Even with defense mechanisms such as firewalls deployed, the internal network can no longer be deemed physically isolated. Additionally, experience with information security in conventional computer networks reveals that flawed designs, implementations, and configurations of the communication network introduce vulnerabilities. These vulnerabilities open opportunities for attackers to launch cyber attacks. In this paper, we attempt to gain more insight into the cyber security of current PMU network technologies by exploring, validating, and demonstrating vulnerabilities.

Xiangyu Niu, Yue Tong, Jinyuan Sun
Proposal of Parallel Processing Area Extraction and Data Transfer Number Reduction for Automatic GPU Offloading of IoT Applications

Recently, IoT (Internet of Things) technologies have progressed rapidly. To overcome the high cost of developing IoT services by vertically integrating devices and services, Open IoT enables various IoT services to be developed by integrating horizontally separated devices and services. For Open IoT, we have proposed Tacit Computing, a technology that discovers, on demand, the devices holding the data users need and uses them dynamically, together with an automatic GPU (graphics processing unit) offloading technology as an elementary technology of Tacit Computing. However, the offloading technology improves only a limited range of applications because it merely extracts parallelizable loop statements. Therefore, in this paper, to improve the performance of more applications automatically, we propose an improved method that also reduces data transfer between the CPU and GPU. This can improve the performance of many IoT applications. We evaluate our proposed GPU offloading method by applying it to Darknet, a large general-purpose CPU application, and find that it runs 3 times as fast as a CPU-only version within 10 h of tuning time.

Yoji Yamato, Hirofumi Noguchi, Misao Kataoka, Takuma Isoda, Tatsuya Demizu
Performance Evaluation of Industrial OPC UA Gateway with Energy Cost-Saving

OPC UA is an international standard that defines communication technology and data processing methods for smart factories. Many industry standards and protocols accept the OPC UA specification; OPC UA therefore plays an important role in the smart factory by supporting high interoperability among various protocols. ARM processors are typical CPUs used in many embedded systems due to their structural simplicity and low power consumption. However, existing plants still use x86 processors with high power consumption and price. Today, migrating smart factories to new processors faces high entry barriers due to the huge cost and the lack of experiments that take realistic indicators into account. Therefore, in this paper, we propose an OPC UA gateway that implements the OPC UA specification on an industrial device platform based on an ARM processor. We evaluate performance using indicators such as publish interval, sampling interval, subscription restrictions, encryption, and security guidelines. Our experimental results show about a 66% reduction in operating costs compared to x86 processors.

Hanjun Cho, Jongpil Jeong
Travel-Time Prediction Methods: A Review

Near-future travel-time information is helpful for implementing Intelligent Transportation Systems (ITS). Travel-time prediction refers to predicting future travel times; researchers have developed various methods for it over the past decades. This paper reviews the literature, including recently proposed techniques. The methods are categorized as model-based and data-driven. We elaborate on two common model-based methods, namely queuing theory and the cell transmission model. Data-driven methods are categorized as parametric models (linear regression, the autoregressive integrated moving average model, and the Kalman filter) and non-parametric models (neural networks, support vector regression, nearest neighbors, and ensemble learning). The methods are compared in terms of data requirements, prediction range, and accuracy. In addition, we discuss several solutions that overcome shortcomings of existing methods and highlight significant future research challenges.

Mengting Bai, Yangxin Lin, Meng Ma, Ping Wang
The Influence of Sleep on Opportunistic Relay in Linear Wireless Sensor Networks

In linear wireless sensor networks, if the nodes can listen and receive packets at any time, a better balance between energy consumption and delay can be achieved by using opportunistic relay (such as TE-OR). To further reduce network power consumption, nodes need to spend part of their time in a dormant state, which affects the performance of opportunistic relay. Taking the TE-OR algorithm as an example, this paper studies the effect of sleep on opportunistic relay. The simulation results show that, in order to match the delay performance of the TE-OR algorithm without sleep, the duty cycle must exceed 60%. In addition, it is difficult to optimize energy consumption and delay performance simultaneously for opportunistic relay with sleep.

Haibo Luo, Zhiqiang Ruan, Fanyong Chen
Reconfigurable Hardware Generation for Tensor Flow Models of CNN Algorithms on a Heterogeneous Acceleration Platform

Convolutional Neural Networks (CNNs) have been used to improve the state of the art in many fields such as object detection, image classification, and segmentation. With their high computation and storage complexity, CNNs are good candidates for hardware acceleration with FPGA (Field Programmable Gate Array) technology. However, considerable FPGA design experience is needed to develop such hardware acceleration. This paper proposes a novel tool for design automation of FPGA-based CNN accelerators to reduce the development effort. Based on the Rainman hardware architecture and parameterized FPGA modules from Corerain Technology, we introduce a design tool that allows application developers to implement their specified CNN models on an FPGA. Our tool supports model files generated by TensorFlow and produces the required control flow and data layout to simplify the procedure of mapping diverse CNN models onto FPGA technology. A real-time face-detection design based on the SSD algorithm is adopted to evaluate the proposed approach. This design, using 16-bit quantization, can support up to 15 frames per second for 256×256×3 images, with a power consumption of only 4.6 W.

Jiajun Gao, Yongxin Zhu, Meikang Qiu, Kuen Hung Tsoi, Xinyu Niu, Wayne Luk, Ruizhe Zhao, Zhiqiang Que, Wei Mao, Can Feng, Xiaowen Zha, Guobao Deng, Jiayi Chen, Tao Liu
Image Segmentation Algorithm Based on Spatial Pyramid and Visual Salience

An image segmentation algorithm based on spatial pyramids and visual salience is proposed in this paper. The segmentation algorithm consists of five steps. The first step extracts the global features of the images to be processed. The second step divides the image into sub-blocks at different scales. The third step extracts the sub-block features at each scale and concatenates the features sequentially. The fourth step calculates the salience of each sub-block. The last step segments the salient objects from the source image. The algorithm detects salient parts of the image by means of both the color histogram and the spatial pyramid; the significance of pixels is calculated from color and pattern, and different weights are assigned to different pixels and sub-blocks. According to the experimental results, the segmentation algorithm proposed in this paper outperforms other segmentation algorithms in precision, recall, and time complexity.

Jingxiu Ni, Xu Qian, Guoying Zhang, Aihua Liang, Huimin Ju
Fast Implementation for SM4 Cipher Algorithm Based on Bit-Slice Technology

The SM4 block cipher algorithm, used in the IEEE 802.11i standard, was released by the China National Cryptographic Authority and is one of the most important symmetric cryptographic algorithms in China. However, both the round encryption and the key expansion phases of SM4 require a large number of bit operations on registers (e.g., circular shifting). These operations make encryption inefficient in scenarios with large-scale data. In traditional implementations of SM4, different operands are assigned to different words and processed serially, which introduces redundant operations during encryption and decryption. Bit-slice technology places the same bit of multiple operands into one word, which facilitates bit-level operations in parallel. Bit-slicing is in effect a single-instruction, multiple-data processing technique, so it can be accelerated by the CPU's multimedia instructions. In this paper, we propose a fast implementation of the SM4 algorithm using bit-slice techniques. Experiments show that the bit-sliced SM4 is more efficient than the original version: it increases the encryption and decryption speed by an average of 80%–120% compared with the original approach.

Jingbin Zhang, Meng Ma, Ping Wang
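The core idea of bit-slicing, packing the same bit of many operands into one word so that a per-operand rotation becomes a cheap reindexing of slice words, can be illustrated with a small sketch. This is plain Python for clarity, not the paper's SIMD implementation; all names are hypothetical.

```python
# Illustrative bit-slicing sketch (not the paper's SM4 code): the i-th bit of
# every operand is packed into slice word i, so rotating all operands left by
# r bits reduces to rotating the list of slice words.

WIDTH = 32  # operand width in bits

def to_slices(operands):
    """slices[i] holds bit i of every operand; operand j sits in bit position j."""
    slices = [0] * WIDTH
    for j, x in enumerate(operands):
        for i in range(WIDTH):
            slices[i] |= ((x >> i) & 1) << j
    return slices

def from_slices(slices, count):
    """Inverse of to_slices: unpack `count` operands from the slice words."""
    ops = [0] * count
    for i in range(WIDTH):
        for j in range(count):
            ops[j] |= ((slices[i] >> j) & 1) << i
    return ops

def rotl_sliced(slices, r):
    """Rotate every packed operand left by r bits at once: reindex the slices."""
    return slices[-r:] + slices[:-r]  # bit i of result = bit (i - r) mod WIDTH

def rotl(x, r):
    """Ordinary per-operand 32-bit rotate-left, for comparison."""
    return ((x << r) | (x >> (WIDTH - r))) & 0xFFFFFFFF

ops = [0x12345678, 0x9ABCDEF0, 0x0F0F0F0F]
assert from_slices(rotl_sliced(to_slices(ops), 13), 3) == [rotl(x, 13) for x in ops]
```

In a real bit-sliced cipher the slice words are machine registers and the rotation costs nothing at runtime, which is where the speedup over serial word-wise shifting comes from.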
Static Analysis of Android Apps Interaction with Automotive CAN

Modern car infotainment systems allow users to connect an Android device to the vehicle. The device then interacts with the hardware of the car, hence providing new interaction mechanisms to the driver. However, this can be misused and become a major security breach into the car, with subsequent security concerns: the Android device can both read sensitive data (speed, model, airbag status) and send dangerous commands (brake, lock, airbag explosion). Moreover, this scenario is unsettling since Android devices connect to the cloud, opening the door to remote attacks by malicious users or the cyberspace. The OpenXC platform is an open-source API that allows Android apps to interact with the car’s hardware. This article studies this library and shows how it can be used to create injection attacks. Moreover, it introduces a novel static analysis that identifies such attacks before they actually occur. It has been implemented in the Julia static analyzer and finds injection vulnerabilities in actual apps from the Google Play marketplace.

Federica Panarotto, Agostino Cortesi, Pietro Ferrara, Amit Kr Mandal, Fausto Spoto
Opportunistic Content Offloading for Mobile Edge Computing

It has been envisioned that the future Mobile Edge Computing (MEC) paradigm will be enabled with caching ability. Since some content requests are highly concentrated, popular contents are requested repeatedly. To prevent frequent redundant content requests that burden the network backhaul, a base station (BS) with caching ability, or even a mobile user that already holds the requested content, can provide flexible content offloading on demand. In this paper, we propose an opportunistic content offloading scheme that predicts the opportunistic content providers among mobile users and edge nodes for the MEC paradigm. First, we predict the opportunistic mobile content providers holding popular contents according to historical data records. We then propose an opportunistic content offloading algorithm modeled as a Stackelberg game, in which the mobile content consumer and the content providers, including mobile users and the MEC server (e.g., the BS), form a leader-followers relationship. Based on the prediction of opportunistic connections with neighboring content providers, we design an iterative algorithm that reaches the optimal equilibrium pricing with fast convergence. Our simulations are based on a real dataset provided by China Mobile Communications Corporation. The simulation results show that our scheme can efficiently relieve the network backhaul: during peak hours, the content offloaded by our method accounts for 34.5% of the original total content load, effectively reducing the content overload pressure on the BS.

Hao Jiang, Bingqing Liu, Yuanyuan Zeng, Qian Li, Qimei Chen
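The leader-followers structure of a Stackelberg pricing game, as used in the scheme above, can be made concrete with a toy sketch: the provider (leader) posts a unit price, the consumer (follower) best-responds with a demand, and the leader searches for the revenue-maximizing price. The linear-quadratic utility and every constant here are illustrative assumptions, not the paper's model.

```python
# Toy Stackelberg pricing sketch. Follower utility: a*d - 0.5*b*d^2 - price*d,
# so the follower's best response is d* = max(0, (a - price)/b). The leader
# grid-searches the price that maximizes price * d*(price).

def follower_demand(price, a=10.0, b=2.0):
    """Follower's best response to the posted unit price."""
    return max(0.0, (a - price) / b)

def leader_best_price(a=10.0, b=2.0, grid=1000):
    """Leader's revenue-maximizing price via a simple grid search."""
    best_p, best_rev = 0.0, 0.0
    for k in range(grid + 1):
        p = a * k / grid
        rev = p * follower_demand(p, a, b)
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p, best_rev

p, rev = leader_best_price()
print(p, rev)  # analytic optimum for this utility: p = a/2 = 5.0, revenue 12.5
```

The paper's iterative algorithm plays the same leader-follower rounds but over its own utility functions and multiple followers, converging to the equilibrium price rather than grid-searching.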
Image Annotation Algorithm Based on Semantic Similarity and Multi-features

This paper proposes an image annotation algorithm based on semantic similarity and multi-feature fusion. The annotation algorithm draws on methods of semantic extraction from natural language processing and establishes corresponding semantic trees for some common scenes. The scene semantic tree is constructed from the visual features of the specific scene in the image set. Firstly, the visual features of the scene images are extracted and then clustered by fuzzy clustering. According to the clustering results, the images are grouped, clustered at different nodes according to their visual features, and further subdivided. After the scene semantic tree is constructed, the algorithm extracts the visual features of the image to be annotated. The image then moves from the root node to a leaf node of the scene semantic tree according to its visual features, and the semantic keywords that appear along the route constitute the tags of the image.

Jingxiu Ni, Dongxing Wang, Guoying Zhang, Yanchao Sun, Xinkai Xu
One Secure IoT Scheme for Protection of True Nodes

As a new generation of information technology, IoT has been applied in many industrial fields and has made great contributions to everyday life. However, the vulnerability of IoT constrains its application, especially when a node used in an IoT system is a malicious one that may break the system and leak vital data (for example, nodes used by patients to transfer medical condition data). To tackle these security concerns, we propose a secure IoT framework to protect true nodes and ensure secure operation. Firstly, we introduce two classic IoT architectures and summarize their security challenges; then we give an introduction to and comparison of several IoT security frameworks. Finally, we propose our scheme for protecting the safety of IoT nodes in the perception layer.

Yongkai Fan, Guanqun Zhao, Xiaodong Lin, Xiaofeng Sun, Dandan Zhu, Jing Lei
Predicted Mobile Data Offloading for Mobile Edge Computing Systems

Mobile Edge Computing (MEC) has emerged as a promising technology to meet the high data-rate, real-time transmission, and huge computation requirements of ever-growing future wireless terminals, such as virtual reality devices, augmented reality, and the Internet of Vehicles. Due to the limitation of licensed bandwidth resources, mobile data offloading should be considered. On the other hand, WiFi APs, which work on the abundant unlicensed spectrum, can provide good wireless service in lightly loaded areas. Therefore, in this paper we leverage WiFi APs to offload some devices from the small base station (SBS). To perform the offloading process effectively, we build a multi-LSTM deep-learning algorithm to predict the traffic of the SBS. Based on the prediction results, an offline mobile data offloading strategy is proposed, obtained through the cross-entropy method. Simulation results demonstrate the efficiency of our prediction model and offloading strategy.

Hao Jiang, Duo Peng, Kexin Yang, Yuanyuan Zeng, Qimei Chen
A Monte Carlo Based Computation Offloading Algorithm for Feeding Robot IoT System

Ageing is becoming an increasingly serious problem in European and Japanese societies. We have so far mainly focused on improving the eating experience for both the frail elderly and caregivers by introducing and developing the eating-aid robot Bestic, made to bring food from plate to mouth for frail elderly people or persons with disabilities. We expand the functionality of Bestic to create food intake reports automatically, so as to reduce undernutrition among the frail elderly and the workload of caregivers, by collecting data via a vision system connected to an Internet of Things (IoT) system. Since the computation capability of Bestic is very limited, computation offloading, in which resource-intensive computational tasks are transferred from Bestic to an external cloud server, is proposed to overcome this limitation. In this paper, after proving that the target optimization problem is NP-hard, we propose a Monte Carlo based heuristic computation offloading algorithm to minimize the total overhead of all Bestic users. Numerical results show that the proposed algorithm is effective in terms of system-wide overhead.

Cheng Zhang, Takumi Ohashi, Miki Saijo, Jorge Solis, Yukio Takeda, Ann-Louise Lindborg, Ryuta Takeda, Yoshiaki Tanaka
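The Monte Carlo flavor of the offloading decision can be sketched minimally: sample random assignments of tasks to {local, cloud} and keep the lowest-overhead one. The cost model and all names below are illustrative assumptions; the paper's overhead model and heuristic are more detailed.

```python
# Monte Carlo sketch of a computation-offloading decision. Each task runs
# either locally (on the robot) or remotely (offloaded to the cloud, including
# transfer overhead); we sample assignments and keep the cheapest.

import random

def total_overhead(assign, local_cost, remote_cost):
    """Sum per-task overhead: remote cost if offloaded, local cost otherwise."""
    return sum(remote_cost[i] if a else local_cost[i] for i, a in enumerate(assign))

def monte_carlo_offload(local_cost, remote_cost, samples=2000, seed=0):
    """Sample random offloading assignments; return the best one found."""
    rng = random.Random(seed)
    n = len(local_cost)
    best, best_cost = None, float("inf")
    for _ in range(samples):
        assign = [rng.random() < 0.5 for _ in range(n)]  # True = offload
        cost = total_overhead(assign, local_cost, remote_cost)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

local = [5.0, 1.0, 8.0]    # hypothetical per-task overhead on the robot
remote = [2.0, 3.0, 2.5]   # hypothetical per-task overhead when offloaded
assign, cost = monte_carlo_offload(local, remote)
print(assign, cost)  # → [True, False, True] 5.5  (offload tasks 0 and 2)
```

With independent per-task costs the optimum is trivially separable; the NP-hardness in the paper arises from shared resources coupling the users, which is exactly where random sampling becomes a useful heuristic.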
Collective Behavior Aware Collaborative Caching for Mobile Edge Computing

In the Mobile Edge Computing (MEC) paradigm, popular and repetitive content can be cached at and offloaded from a nearby MEC server in order to reduce backhaul overload. Due to the hardware limitations of MEC devices, collaboration among MEC servers can greatly improve cache performance. In this paper, we propose a Collective Behavior aware Collaborative Caching (CBCC) method. First, we discover the collective behavior of users using a content-location similarity network fusion algorithm. Our analysis is based on a real dataset of usage detail records and explores the heterogeneity and predictability of collective behavior during content access. On this basis, we propose a collaborative relationship model that relies on the collective behavior. Then, the collaborative cache placement is formulated as a multi-objective optimization problem. Our simulations are based on a real dataset from cellular systems. The numerical results show that the proposed method achieves performance gains in terms of both hit rate and transmission cost.

Hao Jiang, Hehe Huang, Ying Jiang, Yuan Wang, Yuanyuan Zeng, Chen Zhou
Implementation of Distributed Multi-Agent Scheduling Algorithm Based on Pi-calculus

Efficient use of distributed resources is currently a research hotspot. Considering that the structure of a distributed communication system is prone to change and that many distributed algorithms are still based on a serial underlying model, this paper proposes a distributed multi-agent model based on Pi-calculus. The model exploits Pi-calculus parallel computing, including the use of channels to transfer information. Besides this, the model incorporates multi-agent technology to further improve parallelism, enabling distributed resources to be used more efficiently. As an example, we apply the model to a classic algorithm for heterogeneous scheduling in distributed environments, the heterogeneous earliest finish time (HEFT) algorithm, by creating different topologies of the task scheduling graph. We then implement the model in Nomadic Pict, using channels to transmit information and assigning tasks to multiple agents. We show that the distributed multi-agent model based on Pi-calculus can use distributed resources more efficiently than a traditional C++ implementation that combines multithreading and socket communication to assign tasks to multiple clients.

Bairun Li, Hui Kang, Fang Mei
SmartDetect: A Smart Detection Scheme for Malicious Web Shell Codes via Ensemble Learning

The rapid global spread of web technology has led to an increase in unauthorized intrusions into computers and networks. Malicious web shell codes used by hackers can often cause extremely harmful consequences. However, existing detection methods cannot precisely distinguish malicious code from benign code. To solve this problem, we first detected malicious web shell codes by applying traditional data mining algorithms: Support Vector Machine, K-Nearest Neighbor, Naive Bayes, Decision Tree, and Convolutional Neural Network. Then, we designed an ensemble learning classifier to further improve the accuracy. Our experimental analysis showed that the accuracy of SmartDetect, our proposed smart detection scheme for malicious web shell codes, was higher than that of Shell Detector and NeoPI on a dataset collected from GitHub. Also, the equal error rate of SmartDetect's detection result was lower than those of Shell Detector and NeoPI.

Zijian Zhang, Meng Li, Liehuang Zhu, Xinyi Li
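The ensemble step described above amounts to combining the votes of several base classifiers. A minimal majority-vote sketch, with stand-in rule-based classifiers rather than the paper's trained models (all names and heuristics here are hypothetical):

```python
# Minimal majority-vote ensemble sketch in the spirit of SmartDetect. The
# base classifiers are toy substring heuristics, not the paper's models.

from collections import Counter

def ensemble_predict(classifiers, sample):
    """Return the label predicted by the majority of base classifiers."""
    votes = Counter(clf(sample) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Stand-in base classifiers flagging suspicious features of a "script".
clf_eval = lambda s: "malicious" if "eval(" in s else "benign"
clf_b64  = lambda s: "malicious" if "base64_decode" in s else "benign"
clf_len  = lambda s: "malicious" if len(s) > 200 else "benign"

sample = "<?php eval(base64_decode($_POST['x'])); ?>"
print(ensemble_predict([clf_eval, clf_b64, clf_len], sample))  # → malicious
```

A real ensemble would weight votes by each model's validation accuracy; plain majority voting is the simplest instance of the idea.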
An Optimized MBE Algorithm on Sparse Bipartite Graphs

The maximal biclique enumeration (MBE) problem is that of identifying all maximal bicliques in a bipartite graph. Once enumerated, maximal bicliques can be used to solve problems in areas such as purchase prediction, statistical analysis of social networks, discovery of interesting structures in protein-protein interaction networks, identification of common gene-set associations, and integration of diverse functional genomics data. In this paper, we develop an optimized sequential MBE algorithm called sMBEA for the sparse bipartite graphs that appear frequently in real life. The results of extensive experiments on several real-life data sets demonstrate that sMBEA outperforms the state-of-the-art sequential algorithm iMBEA.

Yu He, Ronghua Li, Rui Mao
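To make the MBE problem concrete, here is a brute-force enumerator for tiny bipartite graphs; sMBEA itself is far more sophisticated and this sketch is only a definition-by-example (the graph encoding and names are my assumptions).

```python
# Brute-force maximal-biclique enumeration for tiny bipartite graphs.
# adj maps each left-side vertex to the set of its right-side neighbours.

from itertools import chain, combinations

def maximal_bicliques(adj):
    left = list(adj)
    found = set()
    # For every non-empty subset L of left vertices, the widest biclique whose
    # left side contains L has right side R = intersection of their neighbours.
    subsets = chain.from_iterable(
        combinations(left, k) for k in range(1, len(left) + 1))
    for L in subsets:
        R = set.intersection(*(adj[u] for u in L))
        if not R:
            continue
        # Extend L to every left vertex adjacent to all of R, ensuring maximality.
        Lmax = frozenset(u for u in left if R <= adj[u])
        found.add((Lmax, frozenset(R)))
    return found

adj = {"a": {1, 2}, "b": {1, 2, 3}, "c": {3}}
for L, R in sorted(maximal_bicliques(adj), key=lambda p: sorted(p[0])):
    print(sorted(L), sorted(R))
```

This enumerates 2^|left| subsets, which is exactly the exponential blow-up that specialized algorithms such as iMBEA and sMBEA prune away on sparse graphs.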
IoT Framework for Interworking and Autonomous Interaction Between Heterogeneous IoT Platforms

The IoT (Internet of Things) connects various devices over the Internet and provides high-level services through interaction between devices. However, heterogeneous characteristics such as communication protocols and message types limit the scope of interaction between IoT platforms. To solve the problem of the fragmented IoT market, the oneM2M global initiative defined a horizontal M2M service layer. Since conventional platforms work on syntactic data alone, servers and gateways do not understand the meaning of exchanged data. Semantic Web technologies, which give meaning to information, have been studied in many fields to let platforms understand the data. In this paper, we propose an IoT framework based on the oneM2M standard that ensures the interoperability of heterogeneous IoT platforms. In addition, the proposed framework interacts autonomously with all objects by applying Semantic Web technologies. All data generated by connected devices and sensors are described as semantic data; the platform understands the data based on the ontology and performs autonomous interactions. Users manage rules and devices via the Web client's management interface. Finally, interoperability is evaluated through an experiment on interworking and rule-based interaction between heterogeneous platform devices in a real IoT environment.

Seongju Kang, Kwangsue Chung
Improvement of TextRank Based on Co-occurrence Word Pairs and Context Information

TextRank, a widely used keyword extraction algorithm, considers the relationships between words based on a graph model. However, words with high frequency have more opportunities to co-occur with other words, so extracting keywords from co-occurrence relationships alone overlooks some words, and TextRank constructs its graph model from a single document only. This makes it less effective on collections of related documents, because context information across the collection is missed. In this paper, an improved TextRank algorithm is proposed. Firstly, to introduce external document features and account for the relationships between documents, all co-occurring word pairs in the document collection are extracted by association rule mining. Then the co-occurrence frequency in the TextRank score formula is replaced with the mutual information between the co-occurring word pairs, which gives weight to rarely co-occurring pairs. Moreover, the context entropy of the words in the collection is calculated. Finally, a new TextRank score formula is constructed in which the context entropy is added to the modified score formula with different weights. To test its effectiveness, an experiment considering five scoring-weight combinations compares the improved algorithm with the original TextRank and TF-IDF on two different types of datasets (a public Chinese dataset and a financial dataset crawled from the Internet). The experimental results show that, with equal weights for the two parts, the improved TextRank algorithm is superior to the others.

Yang Wang, Hua Yin, Minwei He
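The baseline TextRank iteration that the improvement above modifies can be sketched compactly. Here the edge weights are plain co-occurrence counts; the improved algorithm would substitute mutual information and add a context-entropy term. The toy graph and all names are illustrative assumptions.

```python
# Sketch of the standard weighted TextRank iteration on a word co-occurrence
# graph: score(w) = (1 - d) + d * sum over in-neighbours v of
#                   score(v) * weight(v, w) / total out-weight of v.

def textrank(graph, d=0.85, iters=50):
    """graph[w] = {neighbour: edge_weight}; returns a centrality score per word."""
    score = {w: 1.0 for w in graph}
    for _ in range(iters):
        new = {}
        for w in graph:
            rank = sum(score[v] * graph[v][w] / sum(graph[v].values())
                       for v in graph if w in graph[v])
            new[w] = (1 - d) + d * rank
        score = new
    return score

# Toy co-occurrence graph with counts as weights.
graph = {
    "smart":     {"computing": 3, "city": 1},
    "computing": {"smart": 3, "cloud": 2},
    "cloud":     {"computing": 2},
    "city":      {"smart": 1},
}
scores = textrank(graph)
print(max(scores, key=scores.get))  # the most central word in this toy graph
```

The paper's change is local to the `graph[v][w]` term (mutual information instead of counts) plus a weighted context-entropy term added to each `new[w]`.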
Augmenting Embedding with Domain Knowledge for Oral Disease Diagnosis Prediction

In this paper, we propose to add domain knowledge from the most comprehensive biomedical ontology, SNOMED CT, to facilitate the embedding of EMR symptoms and diagnoses for oral disease prediction. We first learn embeddings of SNOMED CT concepts by applying the TransE algorithm, which is prevalent for representation learning on knowledge bases. Secondly, the mapping from symptoms/diagnoses to biomedical concepts and the corresponding semantic relations defined in SNOMED CT are modeled mathematically. We design a neural network to train embeddings of EMR symptoms and diagnoses together with ontological concepts in a coherent way, using the TransE-learned vectors as initial values for the latter. The evaluation on real-world EMR datasets from Peking University School and Hospital of Stomatology demonstrates a prediction performance improvement over embeddings based solely on EMRs. This study is a first attempt to learn distributed representations of EMR symptoms and diagnoses under the constraint of embeddings of biomedical concepts from a comprehensive clinical ontology. Incorporating domain knowledge can augment embedding because it reveals intrinsic correlations among symptoms and diagnoses that cannot be discovered from EMR data alone.

Guangkai Li, Songmao Zhang, Jie Liang, Zhanqiang Cao, Chuanbin Guo
Regional Estimation Prior Network for Crowd Analyzing

Crowd analysis from images or videos is an important technology for public safety. CNN-based multi-column methods are widely used in this area. Multi-column methods enhance a network's ability to extract features at various scales, but they may introduce the drawbacks of structural complexity and functional redundancy. To deal with this problem, we propose a multi-task, multi-column network. With the support of a regional estimation prior task, the components of the network can each concentrate on their own target functions. In this way, functional redundancy is reduced and network performance is enhanced. Finally, we evaluate our method on public datasets and monitoring videos.

Ping He, Meng Ma, Ping Wang
RBD: A Reference Railway Big Data System Model

A subway line is complex and involves many departments, resulting in unstandardized storage of the relevant data across metro departments, and the data systems of different departments cannot cooperate. In this paper, we propose the Railway Big Data platform (RBD) to standardize rail transit big data. A big data platform system is designed to store the complex data of rail transit and to cope with complex scenarios. Taking the construction of the rail transit platform in Chongqing as an example, we present a systematic case study.

Weilan Lin, Fanhua Xu, Meng Ma, Ping Wang
Artificial Intelligence Platform for Heterogeneous Computing

Since the birth of artificial intelligence, its theory and technology have matured and its application fields keep expanding. In this paper, we build an artificial intelligence platform for heterogeneous computing that supports deep learning frameworks such as TensorFlow and Caffe. We describe the overall architecture of the AI platform for a GPU cluster. In the scheduling layer of the GPU cluster, we combine YARN with the Slurm scheduler: we improve the distributed TensorFlow plug-in for the Slurm scheduling layer and extend YARN to manage and schedule GPUs. The front end of the high-performance AI platform offers availability, scalability, and efficiency. Finally, we verify the convenience, scalability, and effectiveness of the AI platform by comparing the performance of single-node and distributed versions for the TensorFlow, Caffe, and YARN systems.

Haikuo Zhang, Zhonghua Lu, Ke Xu, Yuchen Pang, Fang Liu, Liandong Chen, Jue Wang
Review on Application of Artificial Intelligence in Photovoltaic Output Prediction

With the development of photovoltaics, distributed power generation has begun large-scale grid interconnection, which affects network stability. Distributed photovoltaic output is intermittent and stochastic, affected by climate and environmental conditions such as sunlight, season, geography and time, so it is difficult to accurately model and analyze its characteristics. More and more artificial intelligence methods are being applied to photovoltaic output prediction, with good results. This paper introduces the importance of photovoltaic prediction in photovoltaic power generation, briefly introduces artificial intelligence, and surveys a large number of applications of artificial intelligence methods in photovoltaic power prediction. Finally, directions for future research on photovoltaic power generation are proposed.

Dianling Huang, Xiaoguang Wang, Boyao Zhang
Abnormal Flow Detection Technology in GPU Network Based on Statistical Classification Method

The Domain Name System (DNS), as the "hub system" of basic Internet resource services, mainly provides the basic service of mapping between domain names and IP addresses. Abnormal flow detection plays an important role in the security and service quality of basic Internet services and is one of the important topics of Internet security research. Existing research mainly focuses on analyzing network flows and related techniques at the data level, but under network attacks, especially DDoS attacks, accuracy and detection performance need to be improved. In this paper, we propose a high-performance, statistics-based abnormal flow detection technique: flow data are fitted statistically in real time and compared against statistics from historical log data. GPU parallelism is used to improve detection performance, which improves both accuracy and detection performance when the network is under DDoS attack.
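The core idea of comparing real-time flow statistics against a historical baseline can be illustrated with a minimal, hypothetical sketch (the function name, the z-score criterion, and the threshold are illustrative assumptions, not the paper's actual detector):

```python
import statistics

def detect_anomaly(history, current_rate, threshold=3.0):
    """Flag the current flow rate as anomalous if it deviates from the
    historical baseline by more than `threshold` standard deviations.
    Illustrative sketch only; the paper's fitting method may differ."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    if std == 0:  # flat history: any deviation is suspicious
        return current_rate != mean
    z = abs(current_rate - mean) / std
    return z > threshold

# Hypothetical per-second DNS query counts vs. a sudden DDoS-like spike:
baseline = [100, 104, 98, 102, 101, 99, 103, 97]
assert detect_anomaly(baseline, 101) is False
assert detect_anomaly(baseline, 500) is True
```

In a GPU-parallel setting, the same per-window statistic would be computed for many flows at once; the sketch shows only the single-flow decision rule.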

Huifeng Yang, Liandong Chen, Boyao Zhang, Haikuo Zhang, Peng Zuo, Ningming Nie
Research on Data Forwarding Algorithm Based on Link Quality in Vehicular Ad Hoc Networks

Vehicular ad hoc networks (VANETs) realize remote data transmission via multi-hop communications. However, high relative vehicle mobility and frequent changes of the network topology make timely data forwarding difficult, so link robustness is crucial to VANETs. In this paper, we present an efficient routing algorithm based on link quality, named DFLQ. First, we determine the forwarding range according to traffic density and vehicle routes. Then, we compute the link maintenance time from the position, speed and direction of the nodes, and estimate the quality of the wireless channel with the expected transmission count. Finally, the node with the longest link maintenance time is chosen as the relay node to forward the data. Simulation results validate that DFLQ improves the packet delivery rate and reduces end-to-end delay and network overhead to a certain extent.
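Computing link maintenance time from position, speed and direction is a standard geometric calculation; a minimal sketch under simplifying assumptions (constant velocities, a circular radio range, and hypothetical function and parameter names) might look like:

```python
import math

def link_lifetime(p1, v1, p2, v2, radio_range):
    """Time until two vehicles with 2-D positions (m) and constant
    velocity vectors (m/s) move out of radio range of each other.
    Solves |p_rel + v_rel * t| = radio_range for the exit time t."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    a = dvx * dvx + dvy * dvy
    b = 2 * (dx * dvx + dy * dvy)
    c = dx * dx + dy * dy - radio_range ** 2
    if a == 0:  # identical velocities: link never changes state
        return math.inf if c <= 0 else 0.0
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0.0  # the vehicles are never within range
    t = (-b + math.sqrt(disc)) / (2 * a)  # larger root = exit time
    return max(t, 0.0)

# One car overtaking another on a straight road, 250 m radio range:
t = link_lifetime((0, 0), (30, 0), (100, 0), (20, 0), 250)
assert t == 35.0
```

DFLQ would evaluate this lifetime for each candidate neighbor and prefer the relay whose link lasts longest.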

Xiumei Fan, Hanyu Cai, Tian Tian
The Knowledge Map Analysis of User Profile Research Based on CiteSpace

With the development of big data technology, the user profile, as an effective way to delineate user characteristics, has attracted extensive attention from researchers and practitioners, and a rich body of related literature has accumulated. Finding the key factors and new directions in such a large body of work is difficult for a researcher entering the field. A knowledge map can visualize the development trends, frontier fields and overall knowledge structure of this research. Therefore, we choose the Web of Science database as the literature search engine and use CiteSpace to construct the user profile knowledge map. Through these maps, we analyze the important authors and countries, perform co-word and co-citation analysis, and study the hot spots and important literature. The time distribution shows that some foundational theories in user profiling were produced in the second stage, from 2004 to 2013. Moreover, the geographical distribution shows that the user profile, as an abstract concept, has no unified framework; each country focuses on different research points. From the knowledge map of keywords, we find that the top three algorithmic techniques used in constructing user profiles are clustering, classification, and collaborative filtering. User profiles are also used in specific applications such as anomaly detection, behavior analysis, and information retrieval.

Danbei Pan, Hua Yin, Yang Wang, Zhijian Wang, Zhensheng Hu
The Accuracy of Fuzzy C-Means in Lower-Dimensional Space for Topic Detection

Topic detection is an automatic method to discover topics in textual data. The standard topic detection methods are nonnegative matrix factorization (NMF) and latent Dirichlet allocation (LDA). An alternative is a clustering approach such as k-means or fuzzy c-means (FCM). FCM extends k-means in the sense that a text may belong to more than one topic. However, FCM works well for low-dimensional textual data and fails for high-dimensional textual data. One way to overcome this problem is to transform the textual data into a lower-dimensional space, i.e., an eigenspace; the resulting method is called Eigenspace-based FCM (EFCM). First, the textual data are transformed into an eigenspace using truncated singular value decomposition. FCM is then performed on the eigenspace data to identify the memberships of the texts in clusters. Using these memberships, we generate topics from the high-dimensional textual data in the original space. In this paper, we examine the accuracy of EFCM for topic detection. Our simulations show that EFCM achieves accuracies between those of LDA and NMF regarding both topic interpretation and topic recall.
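The FCM step at the heart of EFCM alternates between updating memberships and cluster centers; a minimal pure-Python sketch (illustrative function names, toy 2-D points standing in for eigenspace coordinates, and the common fuzzifier m = 2) is:

```python
import random

def dist(a, b):
    """Euclidean distance between two coordinate tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fcm(points, n_clusters, m=2.0, n_iter=50, seed=0):
    """Fuzzy c-means on low-dimensional points (e.g. the eigenspace
    coordinates produced by a truncated SVD of the term-document
    matrix). Returns membership[i][j] = degree of point i in cluster j."""
    rng = random.Random(seed)
    centers = rng.sample(points, n_clusters)
    for _ in range(n_iter):
        # Membership update: inverse-distance weighting between centers.
        u = []
        for p in points:
            d = [max(dist(p, c), 1e-12) for c in centers]
            row = []
            for j in range(n_clusters):
                s = sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(n_clusters))
                row.append(1.0 / s)
            u.append(row)
        # Center update: membership-weighted mean of the points.
        for j in range(n_clusters):
            w = [u[i][j] ** m for i in range(len(points))]
            total = sum(w)
            centers[j] = tuple(
                sum(w[i] * points[i][k] for i in range(len(points))) / total
                for k in range(len(points[0]))
            )
    return u

docs = [(0.1, 0.0), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9)]
u = fcm(docs, 2)
assert all(abs(sum(row) - 1.0) < 1e-6 for row in u)  # memberships sum to 1
assert (u[0][0] > 0.5) == (u[1][0] > 0.5)  # nearby docs share a dominant topic
```

EFCM would run this on the truncated-SVD coordinates rather than on raw term vectors, then map the soft memberships back to terms in the original space.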

Hendri Murfi
Information-Centric Fog Computing for Disaster Relief

Natural disasters like earthquakes and typhoons bring huge casualties and losses to modern society every year. As the main foundation of the information age, host-centric network infrastructure is easily disrupted during disasters. In this paper, we focus on combining Information-Centric Networking (ICN) and fog computing to solve the problem of emergency networking and fast communication. We draw on the six degrees of separation theory (SDST) to achieve Information-Centric Fog Computing (ICFC) for disaster relief. Our goal is to model the relationships of network nodes and design a novel name-based routing strategy using SDST. In the simulation part, we evaluate and compare our work with existing routing methods in ICN. The results show that our strategy improves the efficiency of name-based routing under the limitations of post-disaster scenarios.

Jianwen Xu, Kaoru Ota, Mianxiong Dong
Smartly Deploying WeChat Mobile Application on Cloud Foundry PaaS

WeChat has become the mainstream social networking mobile application in China, and many attached mobile applications have been built on the WeChat API. Cloud Foundry is a lightweight mainstream PaaS platform. In this paper, we study how to develop and deploy WeChat applications on the Cloud Foundry platform to achieve lightweight development and deployment. The implementation of a product maintenance system demonstrates the effectiveness of our deployment method with WeChat and Cloud Foundry.

Zhihui Lu, Xiaoli Wan, Meikang Qiu, Lijun Zu, Shih-Chia Huang, Jie Wu, Meiqin Liu
Research on Arm Motion Capture of Virtual Reality Based on Kinematics

Virtual reality needs to simulate interaction scenes that are as consistent as possible with reality, and motion capture is the key to meeting this need. In this paper, a kinematics-based virtual reality arm motion capture scheme is designed on the HTC VIVE platform to achieve low-cost and high-precision motion capture. Based on the human skeleton model, an arm kinematic chain model suitable for the VR environment is designed. The captured body-structure data are retargeted to the VR arm to drive its movement in the virtual environment. Compared with existing motion capture solutions, the experimental results and user survey show that the proposed method can restore actual arm movements in virtual reality with higher accuracy, and average user satisfaction reaches 85%.
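The kinematic chain idea can be illustrated with planar forward kinematics for a two-segment arm (shoulder and elbow joints); the function name, segment lengths and 2-D simplification are illustrative assumptions, not the paper's actual model:

```python
import math

def forward_kinematics(shoulder, upper_len, fore_len,
                       shoulder_angle, elbow_angle):
    """Planar two-link forward kinematics: given joint angles in
    radians, return the elbow and wrist positions along the chain."""
    ex = shoulder[0] + upper_len * math.cos(shoulder_angle)
    ey = shoulder[1] + upper_len * math.sin(shoulder_angle)
    # The forearm direction is the shoulder angle plus the elbow bend.
    wx = ex + fore_len * math.cos(shoulder_angle + elbow_angle)
    wy = ey + fore_len * math.sin(shoulder_angle + elbow_angle)
    return (ex, ey), (wx, wy)

# Arm stretched straight along the x-axis (0.3 m upper arm, 0.25 m forearm):
elbow, wrist = forward_kinematics((0, 0), 0.3, 0.25, 0.0, 0.0)
assert abs(wrist[0] - 0.55) < 1e-9 and abs(wrist[1]) < 1e-9
```

A real VR rig extends this to 3-D rotations and retargets tracker data onto the chain, but the chained-transform structure is the same.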

Shubin Cai, Dihui Deng, Jinchun Wen, Chaoqin Chen, Zhong Ming, Zhiguang Shan
Financial News Quantization and Stock Market Forecast Research Based on CNN and LSTM

Changes in the stock market and predictions of stock prices have become hot topics. Since the emergence of machine learning, it has been used in stock market forecasting research, and in recent years its development has led to deep learning. This paper therefore proposes and implements a CNN and LSTM forecasting model based on financial news and historical stock market data, which uses deep learning to quantify text, mine the laws of stock market changes, and analyze whether those changes can be predicted. According to our results, this method achieves a certain accuracy in predicting future changes of the stock market, which helps in studying the inherent laws of stock market changes.

Shubin Cai, Xiaogang Feng, Ziwei Deng, Zhong Ming, Zhiguang Shan
Correlation Coefficient Based Cluster Data Preprocessing and LSTM Prediction Model for Time Series Data in Large Aircraft Test Flights

The Long Short-Term Memory (LSTM) model has been applied in recent years to handle time series data in multiple application domains, such as speech recognition and financial prediction. While the LSTM prediction model has shown promise in anomaly detection in previous research, uncorrelated features can lead to unsatisfactory analysis results and can complicate the prediction model due to the curse of dimensionality. This paper proposes a novel method of clustering and predicting multidimensional aircraft time series. The purpose is to detect anomalies in flight vibration in the form of high-dimensional data series, which are collected by dozens of sensors during test flights of large aircraft. The new method is based on calculating the Spearman's rank correlation coefficient between two series, and on a hierarchical clustering method to cluster related time series. Monotonically similar series are gathered together and each cluster of series is trained to predict independently. Thus series which are uncorrelated or of low relevance do not influence each other in the LSTM prediction model. The experimental results on COMAC's (Commercial Aircraft Corporation of China Ltd) C919 flight test data show that our method of combining clustering and LSTM model significantly reduces the root mean square error of predicted results.
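The clustering step is driven by Spearman's rank correlation, i.e. the Pearson correlation of the series' ranks; a minimal sketch with illustrative series (not COMAC data) is:

```python
def rank(xs):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    ra, rb = rank(a), rank(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)

# Two monotonically similar sensor series and one unrelated series:
s1 = [1, 2, 3, 5, 8]
s2 = [2, 4, 5, 9, 12]   # same ordering, so rho = 1 despite different values
s3 = [5, 1, 4, 2, 3]
assert abs(spearman(s1, s2) - 1.0) < 1e-9
assert spearman(s1, s3) < 0.5
```

Hierarchical clustering would then merge series whose pairwise rho exceeds a chosen threshold, giving each cluster its own LSTM predictor.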

Hanlin Zhu, Yongxin Zhu, Di Wu, Hui Wang, Li Tian, Wei Mao, Can Feng, Xiaowen Zha, Guobao Deng, Jiayi Chen, Tao Liu, Xinyu Niu, Kuen Hung Tsoi, Wayne Luk
PSPChord - A Novel Fault Tolerance Approach for P2P Overlay Network

In this paper, we propose a novel approach called PSPChord to provide an efficient fault tolerance solution for Chord-based P2P overlay networks. In our proposal, the successor list is removed; instead, we design partition-based data replication and modify the finger tables. The partition strategy distributes data replicas evenly on the Chord ring to reduce and balance the cost of lookup requests, while links to the successors and predecessors of neighboring nodes are added to the finger table to pass over faulty nodes. Our simulation experiments show the performance of PSPChord compared with the original Chord in resolving the fault tolerance problem on a P2P overlay network.
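The partition-based placement idea, spreading a key's replicas at evenly spaced positions around the identifier ring, can be sketched as follows (the ring size, hash truncation and function names are illustrative assumptions, not PSPChord's actual parameters):

```python
import hashlib

RING_BITS = 16
RING_SIZE = 2 ** RING_BITS

def ring_id(key):
    """Hash a key onto the Chord identifier ring."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:2], "big") % RING_SIZE

def replica_ids(key, n_replicas):
    """Place replicas at evenly spaced offsets around the ring, so each
    partition of the ring holds one copy and lookup load is balanced."""
    base = ring_id(key)
    step = RING_SIZE // n_replicas
    return [(base + i * step) % RING_SIZE for i in range(n_replicas)]

ids = replica_ids("sensor-42", 4)
assert len(set(ids)) == 4                   # one replica per quarter of the ring
assert all(0 <= i < RING_SIZE for i in ids)
```

A lookup can then be served from whichever partition's replica is closest, and a faulty successor is bypassed via the extra neighbor links in the finger table.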

Dan Nguyen, Nhat Hoang, Binh Minh Nguyen, Viet Tran
Studying Weariness Prediction Using SMOTE and Random Forests

This article addresses the low accuracy of student weariness prediction in education and the poor performance of traditional prediction models. We combine the SMOTE (Synthetic Minority Oversampling Technique) algorithm with random forest prediction models: the SMOTE oversampling method balances the data set, and the random forest algorithm then trains the classifier. Comparing common single classifiers with ensemble learning classifiers shows that the SMOTE and random forest method performs most prominently, and we analyze why the AUC value increases after applying SMOTE. We use synthesized student datasets from Massive Open Online Courses (MOOCs), whose features mainly include the length of class, whether the mouse has moved, whether an assignment was submitted, whether the student participated in discussions, and the accuracy of completed assignments. The results prove that this method can significantly improve classifier performance, so that teachers can choose appropriate teaching interventions to improve students' learning outcomes.
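SMOTE balances the classes by interpolating new minority samples between a minority point and one of its nearest minority neighbors; a minimal sketch on toy 2-D feature vectors (hypothetical function names and data, not the MOOC features) is:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """For each synthetic sample: pick a minority point p, pick one of
    its k nearest minority neighbors q, and return a random point on
    the segment between p and q. Brute-force neighbors for clarity."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neighbors = sorted(
            (q for q in minority if q is not p),
            key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)),
        )[:k]
        q = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(p, q)))
    return synthetic

# A small minority class (e.g. "weary" students) oversampled to 6 extras:
weary = [(0.9, 0.1), (0.8, 0.2), (0.95, 0.15), (0.85, 0.05)]
new_samples = smote(weary, 6)
assert len(new_samples) == 6
# Interpolation keeps samples inside the minority class's bounding box.
assert all(0.8 <= x <= 0.95 and 0.05 <= y <= 0.2 for x, y in new_samples)
```

The balanced set (originals plus synthetics) would then be fed to the random forest classifier.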

Yu Weng, Fengming Deng, Guosheng Yang, Liandong Chen, Jie Yuan, Xinkai Gui, Jue Wang
Design of Heterogeneous Evaluation Method for Redundant Circuits

Fault-tolerant mechanisms have been an essential part of electronic equipment in extreme environments such as high voltage, extreme temperature and strong electromagnetic interference. Accordingly, how to improve the robustness and disturbance rejection of circuits has become a key problem in recent years. In this paper, a heterogeneous evaluation method based on relational analysis is proposed. It uses a genetic algorithm and evolvable hardware to obtain the required sub-circuit structures, and a relational strategy to evaluate the heterogeneous degree of the redundant circuit system. Finally, the sub-structures with a large heterogeneous degree are selected to build the redundant circuit system. In the experiments, we designed short-circuit fault and parameter drift fault scenarios to validate the heterogeneous evaluation method. The experimental results show that this method can not only enhance the heterogeneous degree but also maintain high robustness. Compared with a random heterogeneous redundant system and a homogeneous redundant system, the average fault-free probability of the redundant fault-tolerant circuit system based on the relational method is 8.9% and 21.7% higher, respectively, in short-circuit fault experiments, and 9.1% and 23.9% higher, respectively, in parameter drift fault experiments.

Huicong Wu, Jie Yu, Yangang Wang, Xiaoguang Wang
Senior2Local: A Machine Learning Based Intrusion Detection Method for VANETs

A Vehicular Ad-hoc Network (VANET) is a heterogeneous network of resource-constrained nodes such as smart vehicles and Road Side Units (RSUs) communicating in a high-mobility environment. Because of potential malicious misbehavior in VANETs, real-time and robust intrusion detection methods are required. In this paper, we present a novel Machine Learning (ML) based intrusion detection method to automatically detect intruders in VANETs, both globally and locally. Compared to previous intrusion detection methods, ours is more robust to the environmental changes typical of VANETs, especially when intruders take over senior units like RSUs and Cluster Heads (CHs). The experimental results show that our approach significantly outperforms previous work when vulnerable RSUs exist.

Yi Zeng, Meikang Qiu, Zhong Ming, Meiqin Liu
Depth Prediction from Monocular Images with CGAN

Depth prediction from monocular images is an important task in many computer vision fields, as monocular cameras currently make up the majority of image acquisition equipment and are used in areas such as stereo scene understanding and Simultaneous Localization and Mapping (SLAM). In this paper, we regard depth prediction as an image generation task and propose a new method for monocular depth prediction using Conditional Generative Adversarial Nets (CGAN). We transform the depth images corresponding to RGB images into relative depth images by dividing by the maximum value. We then use an encoder-decoder as the generator of the CGAN to produce depth images for input RGB images; the discriminator, an encoder, judges whether its inputs are real or fake by evaluating the difference between them. By learning the correspondence between pixels of RGB images and depth images, our CGAN model can produce depth images for test RGB images. We evaluate the model with different objective functions on the TUM RGB-D and NYU V2 datasets, and the results show excellent performance.

Wei Zhang, Guoying Zhang, Qiran Zou
Anomaly Detection for Power Grid Based on Network Flow

As an important part of the national infrastructure, the power grid faces more and more network security threats as it shifts from traditional relative isolation toward informatization and networking. Therefore, effective anomaly detection methods are needed to resist various threats. However, current methods mostly use individual packets as the detection object, ignore the overall temporal pattern of the network, and cannot detect some advanced behavioral attacks. In this paper, we introduce the concept of a network flow, which groups packets sharing the same end-to-end endpoints; network flow fragmentation further divides a flow into pieces at regular intervals. We also propose a network flow anomaly detection method based on density clustering, which uses bidirectional flow statistics as features. The experimental results demonstrate that the methodology has an excellent detection effect on large-scale malicious traffic and injection attacks.
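A density-based detector treats flow-statistics vectors that fall in no dense region as anomalies; a simplified sketch of that criterion (a DBSCAN-style neighborhood count with hypothetical feature vectors and parameters, not the paper's exact clustering) is:

```python
def density_outliers(points, eps=1.0, min_pts=3):
    """Flag a flow-statistics vector as anomalous if fewer than
    `min_pts` flows (itself included) lie within distance `eps`.
    Simplified density criterion for illustration only."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    outliers = []
    for i, p in enumerate(points):
        neighbors = sum(1 for q in points if dist(p, q) <= eps)
        if neighbors < min_pts:
            outliers.append(i)
    return outliers

# Hypothetical per-flow features (packets/s, bytes/packet), one injected flood:
flows = [(10, 60), (11, 62), (10.5, 61), (9.8, 59), (900, 1400)]
assert density_outliers(flows, eps=3.0) == [4]
```

Real bidirectional flow statistics would have more dimensions (durations, packet counts in each direction, inter-arrival times), but the dense-normal / sparse-anomalous decision is the same.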

Lizong Zhang, Xiang Shen, Fengming Zhang, Minghui Ren, Bo Li
A K-Anonymous Full Domain Generalization Algorithm Based on Heap Sort

K-anonymity algorithms are essential methods for protecting end users' data privacy. However, state-of-the-art K-anonymity algorithms have shortcomings, such as the lack of a priority standard for generalization and suppression values. Moreover, the complexity of these algorithms is usually high, so a more robust and efficient K-anonymity algorithm is needed for practical use. In this paper, a novel K-anonymous full-domain generalization algorithm based on heap sort is presented. We first establish a K-anonymous generalization priority standard for information. Our simulation results then show that users' data privacy can be effectively protected while generalization efficiency is also improved.

Xuyang Zhou, Meikang Qiu
Correction to: Design of Heterogeneous Evaluation Method for Redundant Circuits

In the original version of this chapter, a wrong project number was stated in the Acknowledgements Section. This has now been corrected.

Huicong Wu, Jie Yu, Yangang Wang, Xiaoguang Wang
Correction to: Studying Weariness Prediction Using SMOTE and Random Forests

In the original version of this chapter, a wrong project number was stated in the Acknowledgements Section. This has now been corrected.

Yu Weng, Fengming Deng, Guosheng Yang, Liandong Chen, Jie Yuan, Xinkai Gui, Jue Wang
Backmatter
Metadata
Title
Smart Computing and Communication
Edited by
Prof. Meikang Qiu
Copyright Year
2018
Electronic ISBN
978-3-030-05755-8
Print ISBN
978-3-030-05754-1
DOI
https://doi.org/10.1007/978-3-030-05755-8