
2022 | Book

Smart Computing and Communication

6th International Conference, SmartCom 2021, New York City, NY, USA, December 29–31, 2021, Proceedings

About this book

This book constitutes the proceedings of the 6th International Conference on Smart Computing and Communication, SmartCom 2021, which took place in New York City, USA, during December 29–31, 2021.*

The 44 papers included in this book were carefully reviewed and selected from 165 submissions. The scope of SmartCom 2021 was broad, ranging from smart data to smart communications and from smart cloud computing to smart security. The conference gathered high-quality research and industrial papers related to smart computing and communication and aimed to provide a reference guideline for further research.

* Conference was held online due to the COVID-19 pandemic.

Table of Contents

Frontmatter

Regular Papers

Frontmatter
Efficient Online Service Based on Go-Tensorflow in the Middle-Station Scenario of Grid Service

Machine learning and deep learning are widely used in power grid business. However, grid business is complicated, and the online serving of deep learning models faces significant performance challenges. To solve this problem, this paper proposes EOSP, an online service based on Go-TensorFlow. The EOSP service is divided into three modules: a model configuration module, an execution engine module, and a model management module. The model configuration module provides functions such as online model configuration and synchronization of model configuration information. The execution engine executes graph-based model calls and has been performance-optimized around the characteristics of Go coroutines. The model management module is responsible for model registration, update, unloading, and version management. Experiments show that the EOSP service is highly stable and greatly reduces the time consumption of online services.

Peng Liu, Yiming Lu, Guoqing Wang, Wang Zhou
Resource Modeling of Power Communication Packet Optical Transport Network

With the explosive growth of information, data traffic has gradually become the main occupant of the communication network, and the packetization of services and carriers is the general trend. The industry has put forward POTN (Packet Optical Transport Network), a new power communication network that can perform unified scheduling and management across different layers. After years of development, the POTN network has gradually matured, but problems such as a weak communication network structure and insufficient transmission capacity remain. This paper studies POTN resource modeling, develops a POTN-based simulation system, realizes the mapping and multiplexing of client services, improves the POTN protection mechanism, and completes reasonable planning and optimization of the transmission network topology. Aiming at the convergence problem of multiple services based on POTN technology, an aggregation algorithm is designed to improve the bandwidth and port resource utilization of devices. Finally, the simulation analysis provides important theoretical support for the construction of equipment models and network topology.

ZhiXin Lu, LianYu Fu, YiZhao Liu, XiYang Yin
Energy-Efficient Federated Learning in IoT Networks

This paper investigates the problem of energy-efficient transmission resource allocation for federated learning (FL). By processing data on heterogeneous devices and uploading model updates to the server, federated learning facilitates large-scale model training with lower latency. Although FL has many promising advantages, the number of devices that can be involved is limited by communication and device battery resources. Therefore, it is rational to select devices according to their importance. The joint device selection and resource allocation problem is formulated to minimize the total energy cost while maximizing the training accuracy. This optimization problem is a mixed-integer nonlinear program (MINLP), which is solved by a penalty dual decomposition (PDD) method. The closed-form solution shows that devices with higher importance and lower energy cost are more likely to be selected. The experiments show that the proposed algorithm achieves the desired performance and outperforms the random-selection and full-selection benchmarks.
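As a rough illustration of the selection rule described above (devices with more importance and less energy cost are favored), the following Python sketch greedily ranks devices by importance per unit of energy under a fixed energy budget; the scoring rule, field names, and budget are assumptions for illustration, not the paper's PDD-based solution.

```python
# Minimal sketch of importance/energy-aware device selection for FL.
# The score definition and the energy budget are illustrative assumptions,
# not the paper's penalty dual decomposition (PDD) solution.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    importance: float   # e.g., gradient norm or data relevance
    energy_cost: float  # estimated Joules for one local round

def select_devices(devices, energy_budget):
    # Rank by importance per unit of energy, then greedily fill the budget.
    ranked = sorted(devices, key=lambda d: d.importance / d.energy_cost, reverse=True)
    chosen, spent = [], 0.0
    for d in ranked:
        if spent + d.energy_cost <= energy_budget:
            chosen.append(d)
            spent += d.energy_cost
    return chosen

if __name__ == "__main__":
    pool = [Device("ue1", 0.9, 2.0), Device("ue2", 0.4, 0.5), Device("ue3", 0.8, 3.5)]
    print([d.name for d in select_devices(pool, energy_budget=3.0)])
```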

Deyi Kong, Zehua You, Qimei Chen, Juanjuan Wang, Jiwei Hu, Yunfei Xiong, Jing Wu
Chinese Fine-Grained Sentiment Classification Based on Pre-trained Language Model and Attention Mechanism

With the development of technology and the popularization of the Internet, online platforms are used in all walks of life. People use these platforms and post comments, and the resulting information interaction affects other people's views on the same matters in the future. The analysis of such subjective evaluation information is therefore particularly important. Sentiment analysis research has gradually developed toward judging sentiment on specific aspects, known as fine-grained sentiment classification. China has a large population of potential customers, and Chinese fine-grained sentiment classification has become a research hotspot. Aiming at the low accuracy and poor classification effect of existing deep learning models, this paper conducts experiments on the merchant review dataset of Dianping. The BERT-ftfl-SA model is proposed, and an attention mechanism is integrated to further strengthen the data characteristics. Compared with traditional models such as SVM and FastText, its classification effect is significantly improved. We conclude that the improved BERT-ftfl-SA fine-grained sentiment classification model can achieve efficient sentiment classification of Chinese text.

Faguo Zhou, Jing Zhang, Yanan Song
Link-Efficiency Multi-channel Transmission Protocol for Data Collection in UASNs

With the burgeoning of underwater Internet of things, the amount of data generated by sensor nodes increases dramatically in UASNs, requiring efficient data collection schemes. The data collection process mainly considers MAC and routing design to ensure the link efficiency of sensing data to the sink, which refers to collision avoidance and high bandwidth utilization at low signaling overhead. In recent underwater MAC designs, multi-channel schemes have been adopted as an effective way to eliminate collisions. However, existing multi-channel schemes face the problem of low bandwidth utilization, resulting in a reduction in the performance of network throughput and end-to-end delay. Besides, in the routing design, the unbalanced transfer load may lead to partial transmission congestion, which also decreases the overall bandwidth utilization. To solve the above problems, in this paper, we propose a Link-Efficiency Transmission Protocol (LETP). In the routing layer, we propose a forwarding node probabilistic selection approach to solve the unbalanced transfer load problem at low signaling overhead. In the MAC layer, with the assistance of routing information, a Link Efficiency Channel Allocation (LECA) algorithm with low signaling overhead is applied on receiver sides to allocate dedicated communication channels to senders and optimize the allocation decision based on channel characteristics. Simulation results verify that LETP achieves a better network performance in comparison with existing protocols.

Xiaohui Wei, Xiaonan Wang, Haixiao Xu, Xingwang Wang, Hao Guo
A Multi-attribute Decision Handover Strategy for Giant LEO Mobile Satellite Networks

Due to the characteristics of low orbital height and large number of satellites, the giant Low Earth Orbit (LEO) satellite network has more frequent handovers than ordinary LEO satellite communication systems. There are a large number of satellites available for handover, and the handover problem is more prominent. In this study, a handover strategy suitable for all users is designed for the inter-satellite handover in the giant LEO satellite network. This strategy designs two different handover methods for different handover users and new callers. For users who can predict the handover time, a multi-attribute decision handover strategy based on a directed graph is adopted. It uses the satellite service timetable (including satellite coverage at each time and each index value) to construct a satellite handover directed graph, and combine the handover path to reserve channel resources for users. For handover users and new callers who cannot predict the handover time, a multi-attribute decision handover strategy based on combined weights is adopted. The best candidate satellites are obtained by weighting each index combination, distinguishing user types and service types, and using channel queuing strategies to access satellites according to different priorities. The simulation results prove that this strategy can reduce the number of handovers while taking into account the signal strength and system load balance, and can effectively reduce the handover blocking rate.
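The combined-weight decision for users whose handover time cannot be predicted can be pictured with a small sketch like the one below; the attributes, normalizations, and weights are illustrative assumptions rather than the paper's actual index set.

```python
# Minimal sketch of multi-attribute satellite scoring with combined weights.
# Attribute names, normalization, and weights are illustrative assumptions,
# not the paper's directed-graph or combined-weight formulation.
def score(candidate, weights):
    # Normalize each attribute to [0, 1]; higher is better for all of them here.
    norm = {
        "signal": candidate["signal"] / 100.0,                          # relative signal strength
        "elev":   candidate["elevation_deg"] / 90.0,                    # elevation angle
        "idle":   candidate["idle_channels"] / candidate["total_channels"],  # load headroom
    }
    return sum(weights[k] * norm[k] for k in weights)

def pick_satellite(candidates, weights):
    return max(candidates, key=lambda c: score(c, weights))

sats = [
    {"id": "LEO-101", "signal": 72, "elevation_deg": 55, "idle_channels": 12, "total_channels": 40},
    {"id": "LEO-207", "signal": 64, "elevation_deg": 80, "idle_channels": 30, "total_channels": 40},
]
print(pick_satellite(sats, weights={"signal": 0.4, "elev": 0.3, "idle": 0.3})["id"])
```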

TingTing Zhang, LinTao Yang, Tao Dong, Jie Yin, ZhiHui Liu, ZhanWei Wang
ML-ECN: Multilayer Emergency Communication Network Based on the Combination of Space and Earth

In recent years, major geological disasters have occurred frequently, often accompanied by massive destruction of communication infrastructure. Quickly building an Emergency Communication Network (ECN) after a disaster is therefore a key problem. At present, there are a variety of emergency communication network solutions, including Unmanned Aerial Vehicle (UAV) networks, satellite networks, wireless sensor networks, ground MESH networks, and so on. However, existing research rarely considers the comprehensive application of these networks and the collaborative work between them. In this paper, an emergency communication network is proposed that includes a wireless sensor network, a ground MESH network, a UAV MESH network, and a satellite network. The multilayer network works together to ensure the smooth progress of post-disaster rescue work. We also propose a location algorithm based on reverse verification, which relies mainly on the wireless sensor network. For network service quality, we propose an improved scheduling algorithm based on Weighted Deficit Round Robin (WDRR), and multiple measures are taken to ensure network Quality of Service (QoS). Numerical results show the superiority of our scheme.
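For readers unfamiliar with WDRR, the following minimal sketch shows one round of weighted deficit round robin dequeuing; the queues, weights, and quantum are assumptions, and the paper's improved variant and other QoS measures are not reproduced here.

```python
# Minimal sketch of Weighted Deficit Round Robin (WDRR) dequeuing.
# Queue contents, quantum size, and weights are illustrative assumptions;
# the paper's improved variant adds further QoS measures not shown here.
from collections import deque

def wdrr_round(queues, weights, quantum, deficits):
    """Serve each queue once; deficits persist across rounds for backlogged queues."""
    sent = []
    for name, q in queues.items():
        deficits[name] += quantum * weights[name]
        while q and q[0]["size"] <= deficits[name]:
            pkt = q.popleft()
            deficits[name] -= pkt["size"]
            sent.append((name, pkt["id"]))
        if not q:
            deficits[name] = 0  # no backlog: drop the unused deficit
    return sent

queues = {
    "video":  deque([{"id": "v1", "size": 1200}, {"id": "v2", "size": 1200}]),
    "sensor": deque([{"id": "s1", "size": 200}]),
}
weights = {"video": 3, "sensor": 1}
deficits = {"video": 0, "sensor": 0}
print(wdrr_round(queues, weights, quantum=500, deficits=deficits))
```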

Liang Zhou, Hao Li, Jianguo Zhou, Changjia Zhou, Tianzhu Shi
Multi-attribute Authentication Method Based on Continuous Trust Evaluation

To solve the data security problems faced by the power Internet of Things, this article combines trust evaluation technology with token-based authentication and proposes a multi-attribute identity authentication method based on continuous trust evaluation. This method embeds the trust evaluation value into the token and authenticates the user's identity through real-time updates of the token, thereby granting the corresponding authority. At the same time, the authentication process is encrypted and decrypted by mixing a random generation matrix encryption algorithm based on chaotic mapping with the RSA digital signature algorithm to improve the security of authentication. The experimental results show that the trust evaluation values obtained by the continuous trust evaluation algorithm differ for users with different identities, so the resource permissions obtained also differ. Therefore, the multi-attribute authentication method combined with trust evaluation in this paper has higher security and confidentiality than previous authentication methods.
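The idea of carrying a continuously updated trust value inside a signed token can be sketched as follows; the HMAC-based token format and the permission thresholds are illustrative assumptions, whereas the paper itself uses a chaotic-map matrix cipher combined with RSA signatures.

```python
# Minimal sketch of embedding a trust score in a signed token and deciding
# permissions from it. The HMAC token format and thresholds are illustrative
# assumptions; they stand in for the paper's chaotic-map cipher plus RSA signing.
import hmac, hashlib, json, base64, time

SECRET = b"demo-key"  # placeholder key for the sketch only

def issue_token(user, trust):
    payload = json.dumps({"user": user, "trust": round(trust, 3), "ts": time.time()}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token):
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body.encode())
    if not hmac.compare_digest(sig, hmac.new(SECRET, payload, hashlib.sha256).hexdigest()):
        raise ValueError("tampered token")
    return json.loads(payload)

def permitted_resources(trust):
    # Higher continuous-trust scores unlock more sensitive resources.
    if trust >= 0.8:
        return ["read", "write", "control"]
    if trust >= 0.5:
        return ["read", "write"]
    return ["read"]

tok = issue_token("meter-017", trust=0.62)
print(permitted_resources(verify_token(tok)["trust"]))
```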

Jing Guo, Bingsen Li, Ping Du, Ziyi Xin, Jianjun Zhang, Jiawei Chen
Defects Detection System of Medical Gloves Based on Deep Learning

In industrial production, medical gloves with tears, stains, and other defects are inevitably produced. In the traditional manual mode, the efficiency and accuracy of defect detection depend on the proficiency of spot-check workers, which results in uneven glove quality. In this paper, a deep learning based surface defect detection system for medical gloves is designed for automatic detection with high efficiency and accuracy. To meet the industrial requirement of high real-time performance, the system adopts a cache scheme to improve data reading and writing speed and uses the Open Neural Network Exchange (ONNX) to effectively improve model inference speed. To meet the demand for high detection accuracy, the system adopts a dual-model detection strategy that divides texture detection and edge detection into two steps. The advantage of this strategy is that it removes most useless information while preserving the effective information of the image. Furthermore, two auxiliary models are used to improve detection accuracy based on classification methods. Finally, experiments are conducted to verify the functional indicators of the system. After on-site testing on production lines in a medical glove factory, the system is able to inspect gloves from two production lines in real time. The missed detection rate is less than 2%, and the false rejection rate is less than 5 in 10,000. Verified by the glove industry, the system can be put into production.

Jing Wang, Meng Wan, Jue Wang, Xiaoguang Wang, Yangang Wang, Fang Liu, Weixiao Min, He Lei, Lihua Wang
Mobile Terminal Identity Authentication Method Based on IBC

With the rapid development of the power mobile Internet, traditional identity authentication modes still suffer from identity information theft and are unable to prevent illegal behaviors by internal users, so they can no longer meet the security requirements of identity authentication in mobile Internet services. In response to this problem, this paper proposes a mobile terminal identity authentication method based on Identity-Based Cryptography (IBC). During registration, the user's voice is collected through the mobile terminal to establish an identity vector (i-vector) voiceprint model, and features are extracted and added to the identity mark to generate a user ID and a corresponding identity-based cryptosystem. During authentication, the legitimacy of the user is judged through voice recognition to prevent illegal user intrusion, and the identity-based keys are then combined with the symmetric encryption algorithm AES to encrypt and decrypt data, achieving terminal identity authentication that resists common attacks in the mobile Internet. Finally, the experimental analysis shows that the method effectively improves the security of the power mobile Internet identity authentication process and has the advantages of low cost and high efficiency. In power mobile Internet business scenarios, this method greatly improves the ability to identify illegal users and resist malicious network attacks.

Xuqiu Chen, Wei Wang, Wei Gan, Yi Yang, Su Yuan, Meng Li
Secure Shell Remote Access for Virtualized Computing Environment

Information processing in the era of big data is inseparable from the effective support of scientific computing. Complex scientific computing requires cloud computing to provide computing resources, and one of the core foundations of cloud computing is secure and reliable remote access technology. Users often log in to remote servers to perform scientific calculations; however, public-key login involves cumbersome steps. Therefore, this project develops a secure shell remote access information system for virtualized computing environments, called SSHRA for short, which enables users to log in to remote servers more conveniently. The system can generate a certificate from the public key provided by the user, and users use the certificate for remote login. Users can obtain certificates through the web or by email. In addition, the system designs an intelligent connection across multi-hop servers. The system improves the security of remote login by limiting the IP addresses, validity period, and available commands of the certificate. After users log in to the remote server with the certificate provided by the system, they can use commands to perform related operations. The system is developed based on open-source software, so it has good scalability.

He Li, Rongqiang Cao, Hanwen Xiu, Meng Wan, Kai Li, Xiaoguang Wang, Yangang Wang, Jue Wang
A Survey of Machine Learning and Deep Learning Based DGA Detection Techniques

Botnets are the most commonly used mechanisms for current cyberattacks such as DDoS, ransomware, email spamming, phishing data, etc. Botnets deploy the Domain Generation Algorithm (DGA) to conceal domain names of Command & Control (C&C) servers by generating several fake domain names. A sophisticated DGA can circumvent the traditional detection methods and successfully communicate with the C&C. Several detection methods like DNS sinkhole, DNS filtering and DNS logs analysis have been intensively studied to neutralize DGA. However, these methods have a high noise rate and require a massive amount of computational resources. To tackle this issue, several researchers leveraged Machine learning (ML) and Deep Learning (DL) algorithms to develop lightweight and cost-effective detection methods. The purpose of this paper is to investigate and evaluate the DGA detection methods based on ML/DL published in the last three years. After analyzing the relevant literature strengths and limitations, we conclude that low detection speed, encrypted DNS sensitivity, data imbalance sensitivity, and low detection accuracy with variant or unknown DGA are most likely the current research trends and opportunities. As far as we know, this survey is the first of its kind to discuss DGA detection techniques based on ML/DL in-depth, as well as analysis of their limitations and future trends.

Amr M. H. Saeed, Danghui Wang, Hamas A. M. Alnedhari, Kuizhi Mei, Jihe Wang
Sci-Base: A Resource Aggregation and Sharing Ecology for Software on Discovery Science

In the process of open science and open research, scientific software affects all aspects of scientific research. Open-source scientific software is a cost-effective solution for many universities and research institutions. By exploring the ecological model of scientific software, this paper puts forward a design scheme for a safe and controllable support platform for open-source scientific software, built around the core mechanisms of "sharing" and "continuous evaluation". China's science and technology community has made breakthroughs at the four levels of theory and technology, support platform, ecosystem, and operation system, and has created a proven scientific software ecosystem based on this design scheme, including the service mode of "four platforms and one competition". The progress of research transparency and the gradual maturation of scientific software promote cooperation among community developers. In the course of constructing this scientific open-source ecology, open-source culture has been integrated into scientific research, excellent open-source scientific software has been continuously gathered, and a favorable environment for scientific research talent has gradually been created. This can not only actively promote breakthroughs against technological monopoly and improve talent training strategies for independent innovation, but also benefit the public and promote the innovation capability of the whole society.

Meng Wan, Jiaheng Wang, Jue Wang, Rongqiang Cao, Yangang Wang, He Li
Joint Accuracy and Resource Allocation for Green Federated Learning Networks

This paper studies energy and time resource optimization for federated learning (FL) in wireless communication networks. In the considered network model, each client uses local data for model training and then sends the trained FL model to the central server. However, the energy budget for local computing and transmission is limited. Therefore, reducing energy consumption should be given priority when we consider FL efficiency and accuracy. We investigate a green-communication federated learning problem and express it as an optimization problem. To minimize the energy consumption under the constraint that the overall FL time is bounded, we propose an iterative algorithm based on Lyapunov optimization. Our algorithm selects the clients participating in each round and allocates different bandwidth to each client. At the same time, the connection between local training and the communication process is considered so that the optimal local computation capacity for each client can be obtained.

Xu Chu, Xiaoyang Liu, Qimei Chen, Yunfei Xiong, Juanjuan Wang, Han Yu, Xiang Hu
Trust Evaluation Method Based on the Degree of Code Obfuscation

In the power grid environment, device authentication is a key practice of zero-trust security. Through the trust evaluation of devices, the power grid environment can be effectively protected. This paper introduces identity authentication and behavior authentication to evaluate the security of devices from multiple aspects and proposes a trust evaluation scheme for terminal devices based on the degree of code obfuscation. Through identity-based cryptography and a digital signature mechanism, the identity of terminal devices is verified to prevent incomplete files caused by virus infection, Trojan horses, back doors, man-made tampering, transmission failures, and other causes, thereby completing identity authentication. Through an improved trust evaluation model based on supervised learning, the effectiveness of code obfuscation is evaluated to determine the code's ability to resist malicious attacks, analyze whether the device will exhibit threatening behavior, indirectly verify device behavior trust, and thus prove the security of the terminal or software to complete behavior authentication.

Lu Chen, Zaojian Dai, Nige Li, Yong Li
An SG-CIM Model Table Classification Method Based on Multi Feature Semantic Recognition Technology

In this paper, based on the SG-CIM model knowledge graph, we introduce semantic recognition technology from Natural Language Processing to model the information contained in the graph and mine semantic information with multiple features according to the characteristics of the data itself. For the problem of completing domain-name-related attributes in the graph, this paper adopts a multi-feature semantic recognition approach to classify a given SG-CIM model table by domain name. We propose an ATT-ALE-TextRNN model for the descriptive features of the table, adding N second-level domain-name embeddings to the basic TextRNN and computing the attention scores jointly to capture the tendency of different contextual information for a given category. With reference to the multidimensional discrete feature classification problem of recommender systems, an improved DeepFM model is proposed for the tables' discrete, categorical features. It facilitates the discovery of semantic dependencies between categorical features, makes the feature distribution more diverse, and avoids the problems of low repetition among multidimensional features and the low performance of combined computation. By combining the above two models, this paper achieves more accurate mining of multi-feature semantics and accurate classification of topic domains.

Pengyu Zhang, Chunmei Wang, Baocong Hao, Wenhui Hu, Xueyang Liu, Lizhuang Sun
BBCT: A Smart Blockchain-Based Bulk Commodity Trade System

The bulk commodity is a major strategic resource of the country, and related trade markets are booming. However, due to channel isolation and information barriers, the trade platforms also have risks and industry chaos, which make the bulk commodity trade process suffer from information asymmetry, difficulty in choosing a suitable commodity, difficulty in commodity pricing, and lack of trust in the trade platform. To cope with these challenges, we propose BBCT, a Blockchain-based Bulk Commodity Trade system, to achieve credible and fair bulk commodity trade. The process of multi-party trade is divided into four stages: matching, negotiation based on time-constrained Bayesian learning, decision-making based on the technique for order preference by similarity to ideal solution and signing. We design a “negotiation-signing” dual-channel blockchain architecture, so that buyers and sellers can complete the negotiation and signing process coordination on the blockchain. We finally verify the efficiency and reliability of BBCT through experiments.

Jian Yang, Yawen Lu, Zhihui Lu, Jie Wu, Hui Zhao
Research on Data Fault-Tolerance Method Based on Disk Bad Track Isolation

The disk is not only an important carrier for saving data, but also a key component of the storage system, so improving disk reliability and data availability is of great significance. In the big data environment, the number of disks in a storage system is huge and distributed, making maintenance for disk failures inefficient and costly. Bad tracks are among the most common disk faults. To solve this problem, this paper proposes a disk bad-track isolation algorithm. This approach uses a data fault-tolerance mechanism to isolate physical bad tracks, effectively reducing the disk failure rate, improving service stability, and reducing system maintenance costs. The results show that the bad-track isolation method can reduce disk I/O exceptions by 47% and significantly reduce the disk failure rate. It is effective in improving system stability and reducing maintenance cost.

Xu Zhang, Li Zheng, Sujuan Zhang
Charge Prediction for Criminal Law with Semantic Attributes

Most of the existing machine learning methods for charge prediction adopt a supervised learning training mechanism. These algorithms require a large number of training samples for each charge. However, in real scenarios some charges have few or no corresponding cases, which leads to the poor performance of these models in practice. To alleviate this problem, we propose a novel Zero-Shot Learning (ZSL) based method for legal charge prediction. Specifically, we define a set of semantic attributes to represent the domain knowledge of charges, which enables the model to transfer knowledge from seen charges to unseen charges. In this way, with the help of the ZSL mechanism, unseen charges and charges with a small number of training samples can be predicted relatively accurately. We evaluate the performance of the proposed method on a dataset collected from China Judgements Online, and the experimental results show that our method obtains 32.4% accuracy on the unseen charges while largely retaining the predictive power for the seen charges.
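The attribute-based zero-shot mechanism can be illustrated with a minimal sketch: map a case into a hand-defined attribute space and pick the nearest charge prototype. The attributes, charges, and encoder below are assumptions for illustration, not the paper's model or dataset.

```python
# Minimal sketch of attribute-based zero-shot charge prediction: map an input
# to the attribute space, then pick the nearest charge prototype. Attribute
# definitions, charges, and the encoder are illustrative assumptions.
import numpy as np

# Hand-defined semantic attributes per charge: [violence, property_loss, public_official]
CHARGE_ATTRIBUTES = {
    "theft":        np.array([0.0, 1.0, 0.0]),
    "robbery":      np.array([1.0, 1.0, 0.0]),
    "embezzlement": np.array([0.0, 1.0, 1.0]),  # "unseen": no training cases needed
}

def predict_charge(case_attribute_scores):
    """case_attribute_scores: attribute vector produced by some trained encoder."""
    x = np.asarray(case_attribute_scores, dtype=float)
    dists = {c: np.linalg.norm(x - a) for c, a in CHARGE_ATTRIBUTES.items()}
    return min(dists, key=dists.get)

# A case whose facts score high on property loss and official involvement.
print(predict_charge([0.1, 0.9, 0.8]))   # -> embezzlement
```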

Cong Zhou, Weipeng Cao, Zhiwu Xu
Research on Enterprise Financial Accounting Based on Modern Information Technology

In the continuous development of the market economy, the market has brought development opportunities for enterprises. If an enterprise wants to maximize its economic benefits, it must continuously optimize and reform its financial accounting work so as to promote its rapid and steady development. For modern enterprises, information technology has become a main factor of competition, which has greatly promoted the wide application of modern information technology by enterprises. Modern information technology is highly integrated with enterprise financial management, which improves the efficiency and quality of financial accounting. As an indispensable department in the process of enterprise development, enterprise financial accounting must keep up with the trend of the big data era and apply information technology to improve enterprise management. This paper analyzes the impact of modern information technology on enterprise financial accounting and puts forward corresponding measures to promote its positive impact.

Jie Wan
Financial Information Management Under the Background of Big Data

With the constant renewal of the knowledge age, economic globalization is intensifying. Big data is the inevitable outcome of the development of information technology to a certain stage; it integrates various Internet technologies and brings many conveniences to people's lives and work. Massive data not only brings difficulties to management and maintenance, but also provides great potential value. As an important field of social and economic construction, finance plays an important role in promoting the progress of Chinese society and the improvement of people's material standards. The application of big data technology in financial management is an inevitable trend and need of modern social development, and it brings new development and breakthroughs to the financial field. Based on big data and its characteristics, this paper discusses important ways to apply big data in the financial industry. With the application of science and technology, financial information management based on big data has been realized, which helps to promote the continuous progress of the financial industry.

Dansheng Rao, Jie Wan
AreaTransfer: A Cross-City Crowd Flow Prediction Framework Based on Transfer Learning

Urban transfer learning transfers knowledge from the data-rich city to the data-scarce city, effectively solving the cold-start crowd flow prediction problem. In urban transfer learning, the selection of source cities mainly focuses on the experimental evaluation, which lacks methods for assessing the transferability of source cities. Besides, the complex regional matching relationships between source-target cities have not been fully addressed. To resolve these challenges, we propose a cross-city crowd flow prediction framework based on transfer learning, called AreaTransfer. AreaTransfer aims to select the appropriate source city from multi-source candidate cities and establish effective area matching relationships to improve crowd flow prediction accuracy. First, we design a source city selection algorithm based on the city’s layout characteristics to select the final source city. Then, we propose a modified deep residual neural network to allow area-level prediction. Finally, we optimize the pre-trained model by integrating the area matching results during the city selection process. Experimental results exhibit that AreaTransfer can improve the prediction accuracy by 15%–17% compared with other state-of-the-art models.

Xiaohui Wei, Tao Guo, Hongmei Yu, Zijian Li, Hao Guo, Xiang Li
Parallel Improved Quantum Evolutionary Algorithm for Complex Optimization Problems

To address the problems of premature convergence, slow convergence, and long computing time when traditional quantum evolutionary algorithms solve complex continuous function optimization, a dynamic parallel quantum evolutionary algorithm is proposed in this paper. Multi-population co-evolution is adopted, and each sub-population evolves in a different search area according to its own evolution objective, forming a parallel search mode that speeds up convergence and avoids premature convergence. Quantum computation is introduced into the differential evolution algorithm: the probability amplitude representation of qubits is applied to the real-number coding of chromosomes, chromosome positions are updated by quantum mutation, quantum crossover, and quantum selection operations, the two probability amplitudes of a qubit are exchanged by the quantum NOT gate, and an adaptive operator is introduced to improve population diversity. This not only prevents premature convergence but also makes the algorithm converge faster and improves its problem-solving ability. Taking function extremum problems as examples, the effectiveness of the algorithm is verified.
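A minimal sketch of the qubit probability-amplitude encoding that such quantum-inspired evolutionary algorithms rely on is given below; the rotation step, decoding, and test function are illustrative assumptions rather than the paper's operators.

```python
# Minimal sketch of the probability-amplitude encoding used by quantum-inspired
# evolutionary algorithms: each gene is an amplitude angle, and a rotation-like
# "mutation" nudges the population toward the current best solution.
# The rotation step, decoding, and fitness are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def decode(theta, low, high):
    # Map the alpha = cos(theta) amplitude into the real search interval.
    return low + (np.cos(theta) ** 2) * (high - low)

def fitness(x):
    return -np.sum((x - 1.0) ** 2)     # maximum at x = 1 in every dimension

dim, pop_size, low, high = 4, 20, -5.0, 5.0
theta = rng.uniform(0, np.pi / 2, size=(pop_size, dim))   # amplitude angles

for generation in range(100):
    xs = decode(theta, low, high)
    best = theta[np.argmax([fitness(x) for x in xs])]
    # Rotate each individual's angles toward the best one, plus small random mutation.
    theta += 0.05 * np.sign(best - theta) + rng.normal(0, 0.01, theta.shape)
    theta = np.clip(theta, 0, np.pi / 2)

print(decode(theta, low, high).mean(axis=0))   # population mean drifts toward the optimum near 1.0
```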

Yapeng Sun
A Privacy-Preserving Auditable Approach Using Threshold Tag-Based Encryption in Consortium Blockchain

The rising attention to deploying consortium blockchains in industry has facilitated a wide scope of enterprise-level applications. In a consortium blockchain, each participant's identity needs to be verified before it joins the blockchain network, which means that both identity leakage and the linkability between entities are targets for privacy attackers. In this paper, we propose a novel approach to hide both identities and the linking relations between participants. An auditor role is introduced in our solution to trace suspicious transactions and identify malicious participants in a decentralized manner. Security analysis and experimental evaluations are given to demonstrate the effectiveness of our approach.

Yunwei Guo, Haokun Tang, Aidi Tan, Lei Xu, Keke Gai, Xiongwei Jia
A Hop-Parity-Involved Task Schedule for Lightweight Racetrack-Buffer in Energy-Efficient NoCs

Traditional NoC’s buffer design mainly bases on SRAM that could not break through the high static power consumption characteristics by itself, which could be solved by emerging NVMs, such as energy-efficient RTM (Racetrack Memory). Using RTM instead of SRAM for NoC buffer design can directly reduce the static energy to near-zero level. However, RTM is not friendly to random access due to its port alignment operation, called invalid shift. This paper proposes to replace random FIFO-buffer with sequential LIFO-buffer for lightweight transmission in NoC, which can overcome the expense of invalid shift. However, the LIFO design incurs flits flipping during transmission and leads to extra endianess-correction cost in odd-path. Therefore, this paper designs a hop-parity-involved task schedule that avoids those odd-path during the communications among tasks, by which the extra endianess-correction can be totally removed. Our experiments show that RTM-LIFO buffer design can achieve over $$50\%$$ 50 % energy saving than SRAM-FIFO buffer.

Wanhao Cao, Jihe Wang, Danghui Wang, Kuizhi Mei
Analysis and Discussion on Standard Cost Allocation Model in State Grid

With the rapid development of the smart grid, the traditional financial system gradually no longer meets the needs of power grid financial decision-making. As new data processing means, data analysis and statistical regression can make up for some shortcomings of the traditional financial system. This paper discusses the application of big data analysis and artificial intelligence in the activity-based transformation of power grid standard cost, in order to provide new ideas for power grid cost management. First, stepwise regression analysis and a forward driver selection method are used to select the most significant driver variables from many candidates; on this basis, the selected variables are visually analyzed to further eliminate high correlations between variables. Finally, machine learning models and boosting algorithms are used to optimize the cost allocation model. The experimental results show that the average absolute deviation between the total cost calculated by the model and the actual total cost is 6%. Big data analysis and AI can process power grid financial data more efficiently, which is of great help in improving the accuracy of the calculation.

Shaojun Jin, Jun Pan, Qian Chen, Bo Li
A Novel Deception Defense-Based Honeypot System for Power Grid Network

In recent years, cyber-attacks have become more and more rampant, and power grid networks face more and more security threats, gradually becoming a focus of attackers' attention. Traditional defense methods, represented by intrusion detection systems and firewalls, mainly aim to keep attackers out. However, with increasingly diverse, concealed, and complex attack methods, traditional defenses usually struggle to cope with the endless stream of attacks. To this end, this paper proposes a new honeypot system based on deception defense technology. While retaining the nature of a honeypot, it adopts a dynamic deception approach to actively collect unused IP addresses in the power grid network. These unused IP addresses are then used to construct dynamic virtual hosts. When an attacker initiates network access to these dynamic virtual hosts, they proactively respond to the attacker or redirect the attack traffic to the honeypot in the background, thereby deceiving and trapping the attacker. The experimental results show that the proposed honeypot system effectively expands the monitoring range of traditional honeypots and has a good defense effect against unknown attacks, thus effectively making up for the shortcomings of traditional defense methods.

Mingjun Feng, Buqiong Xiao, Bo Yu, Jianguo Qian, Xinxin Zhang, Peidong Chen, Bo Li
Seamless Group Pre-handover Authentication Scheme for 5G High-Speed Rail Network

The rapid development of global wireless networks has promoted the innovation of 5G application scenarios and the improvement of infrastructure. The deployment of 5G networks in high-speed railway systems has important practical application prospects, and data security and user experience are significant factors to consider for a 5G high-speed rail network. However, existing 5G handover authentication mechanisms still have challenging issues in terms of security and efficiency. Considering the complexity and diversity of the future 5G high-speed rail network, this paper proposes a secure and efficient group pre-handover authentication scheme based on a Mobile Relay Node (MRN). The MRN, with a trusted execution environment, assists 5G User Equipment (UE) on the high-speed train in completing the handover process before reaching the next 5G base station during the authentication and handover execution phase, improving the security and handover efficiency of the UE. Therefore, this scheme can reduce handover delay and realize fast, secure group handover of high-speed UEs under densely deployed 5G base stations. Comprehensive performance evaluation shows that its security and communication efficiency are superior to other schemes.

Zongxiao Li, Di Liu, Peiran Li, Dawei Li, Yu Sun, Zhenyu Guan, Jianwei Liu, Jie Gao
Anomaly Detection System of Controller Area Network (CAN) Bus Based on Time Series Prediction

With the development of intelligent connected vehicles, research on the safety of in-vehicle networks has gradually become a hot spot. The CAN (Controller Area Network) is the most widely used in-vehicle network bus, and its security has become the most critical problem to be solved in the development of intelligent connected vehicles. This paper analyzes the communication characteristics and security problems of the in-vehicle CAN bus network used in intelligent connected vehicles. Meanwhile, with the increasing demand for in-vehicle network communication applications, the corresponding attacks have also increased year by year. Therefore, this paper adds anomaly detection of in-vehicle application logs to CAN bus anomaly detection, aiming to detect abnormal vehicle behavior in an all-around way. To address the lack of field data, we collect a dataset containing several types of data from multiple channels, including different types of attack CAN bus messages. Because different elements in a message have different effects on the classification result, an attention mechanism is introduced to assign different weights to different messages and log data segments, which improves the classification and detection effect.

Xiangtian Tan, Chen Zhang, Bo Li, Binbin Ge, Chen Liu
High-Performance and Customizable Vector Retrieval Service Based on Faiss in Power Grid Scenarios

With the rapid development of machine learning and deep learning, more and more services in the power grid have introduced these technologies, and related application scenarios and services have grown accordingly, especially vector retrieval services. With the growth of data volume and demand, higher requirements are placed on vector retrieval service performance and service management. To solve this problem, this paper designs a high-performance and customizable vector retrieval service, referred to as HCVRS. The HCVRS service uses Faiss as the underlying framework of the vector retrieval service and supports functions such as service registration, service unloading, resource allocation, load balancing, and data management. This paper verifies whether the HCVRS service meets the service design requirements from the three aspects of functional testing, accuracy testing, and performance testing. The experimental results show that the HCVRS service is fully functional and performs well, essentially solving the difficulties encountered by the State Grid in vector retrieval services.

Pengyu Zhang
APT Attack Heuristic Induction Honeypot Platform Based on Snort and OpenFlow

A honeypot can record attackers' aggressive behavior and analyze attack methods in order to develop more intelligent protection policies for the system. Traditional honeypot technology has evolved from static configuration to dynamic deployment, greatly reducing the possibility of an attacker identifying a honeypot. However, most prior technology still suffers from passive listening, high maintenance costs, weak controllability, low monitoring coverage, easy identification, and other issues. In this paper, based on the T-Pot multi-platform honeypot together with Snort and OpenFlow technology, we propose to redirect attack traffic into the multi-platform honeypot to prevent further harm to the system. To reduce system load and improve system performance, we perform feature extraction and modeling analysis on the attack dataset, and based on the ATT&CK model, a multi-honeypot platform for APT attack recognition is implemented.

Bo Dai, Zhenhai Zhang, Ling Wang, Yuan Liu
An Automatic Design Method of Similarity Fusion Neural Network Based on SG-CIM Model

The State Grid Enterprise Public Data Model (SG-CIM 4.0) is a semantically unified data model that can be provided for smart grid business applications. Consistency checking is based on finding entities in the physical model that are similar to a given standard table. Both the standard table entities and the physical model entities contain continuous and discrete attributes. How to accurately calculate the similarity of these different attributes and fuse them into a single similarity, which can then be used to efficiently and accurately mine the entity pairs with the highest similarity, is the problem to be solved. To solve this problem and make the similarity calculation and fusion method scalable, this paper computes the syntactic similarity and semantic similarity of continuous attributes and the similarity of discrete attributes over the content of the different attributes in an entity, and introduces a Neural Architecture Search (NAS) based method, called SFNAS (Similarity Fusion NAS), that automatically designs a similarity fusion neural network over these similarities. The fused similarity computed with SFNAS outperforms the traditional linearly weighted average similarity in terms of entity-pair matching hit rate. This paper can provide useful references for subsequent research on SG-CIM models.

Xiaoqi Liao, Xinliang Ge, Yufei Li, Wenhui Hu, Xin He, Shijie Gao, Xiaoming Chen, Xueyang Liu
A Detection Method for I-CIFA Attack in NDN Network

In recent years, Named Data Networking (NDN) has become a hot research topic as an emerging network architecture. In particular, the security of NDN has received widespread attention, because NDN networks can resist most of the denial-of-service attacks possible under the current TCP/IP architecture. However, with the continuous evolution of attack models, the security of NDN networks is greatly threatened; at present, a new type of attack, I-CIFA, poses a threat to NDN networks. This paper proposes new detection features and sets up sample sets with different detection granularities to improve detection accuracy, and combines them with the Random Forest algorithm for real-time detection of I-CIFA attacks. Experiments show that the scheme detects I-CIFA attacks more rapidly and with better detection performance than other schemes: the detection probability is 97.5%, the false negative probability is 1.2%, and the error rate is 3.0%.
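As a generic illustration of feature-based detection with a Random Forest (not the paper's detection features or data), the sketch below trains scikit-learn's RandomForestClassifier on synthetic per-interval traffic features:

```python
# Minimal sketch of training a Random Forest on per-interval traffic features
# for interest-flooding detection. The feature names and the synthetic data are
# illustrative assumptions, not the paper's detection features or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
# features per interval: [interest_rate, satisfaction_ratio, pit_occupancy]
benign = np.column_stack([rng.normal(100, 10, n), rng.normal(0.95, 0.02, n), rng.normal(0.3, 0.05, n)])
attack = np.column_stack([rng.normal(160, 20, n), rng.normal(0.60, 0.10, n), rng.normal(0.8, 0.08, n)])
X = np.vstack([benign, attack])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("detection accuracy:", clf.score(X_te, y_te))
```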

Meng Yue, Han Zheng, Wenzhi Feng, Zhijun Wu
InterGridSim: A Broker-Overlay Based Inter-Grid Simulator

Large scale Grid computing systems are often organized as an inter-Grid architecture, where multiple Grid domains are interconnected through their local broker. In this context, the main challenge is to devise appropriate job scheduling policies that can satisfy goals such as global load balancing together with maintaining the local policies of the different Grids. This paper presents InterGridSim, a simulator for scalable resource discovery and job scheduling technique in broker based interconnected Grid domains. Inter-Grid scheduling decisions are handled jointly by brokers in a broker overlay network. A Broker periodically exchanges its local domain’s resource information with its neighboring brokers. InterGridSim offers several network structures and workload allocation techniques for Tier-1 and Tier-0 networks and large workload capacity. The paper presents sample simulations for throughput, utilisation, and load balancing in a network of 512 brokers and 50k nodes.
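The broker-overlay idea, in which each broker advertises its domain's free capacity to its neighbors and forwards jobs it cannot place locally, can be sketched as follows; the class layout and forwarding policy are illustrative assumptions, not InterGridSim's implementation.

```python
# Minimal sketch of the broker-overlay idea: each broker periodically shares a
# summary of its domain's free resources with its neighbors and forwards jobs
# it cannot place locally. Class layout and policy are illustrative assumptions.
class Broker:
    def __init__(self, name, free_cpus):
        self.name = name
        self.free_cpus = free_cpus
        self.neighbors = []          # other Broker objects in the overlay
        self.neighbor_view = {}      # last advertised free capacity per neighbor

    def advertise(self):
        # Periodic exchange of local resource information with neighboring brokers.
        for nb in self.neighbors:
            nb.neighbor_view[self.name] = self.free_cpus

    def submit(self, job_cpus):
        if job_cpus <= self.free_cpus:          # place in the local Grid domain
            self.free_cpus -= job_cpus
            return self.name
        # otherwise forward to the neighbor advertising the most free capacity
        best = max(self.neighbors, key=lambda nb: self.neighbor_view.get(nb.name, 0))
        return best.submit(job_cpus)

a, b, c = Broker("grid-A", 4), Broker("grid-B", 64), Broker("grid-C", 16)
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
for broker in (a, b, c):
    broker.advertise()
print(a.submit(32))   # -> grid-B
```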

Abdulrahman Azab
Vertical Handover of Satellite-Ground Fusion Network Based on Time and Location Under Early Access Strategy

Satellites have been integrated into the terrestrial mobile network to meet the 5G requirements of providing connectivity regardless of time and location. The user terminal needs to switch between the satellite and base station, known as vertical handover, to achieve continuous and high-quality communication. The credit method avoids frequent network state measurement. However, it requires the status of all networks for credit calculation, which may lead to incomplete and delayed data acquisition. Therefore, a handover decision method based on time-space attribute reputation is proposed in this study. The proposed method takes the user’s location and time factors into account. Considering the changes of the network topology caused by satellite and user mobility, the target network is selected according to the overall reputation of candidates and the user’s current location. The early untrusted network information is eliminated in time to obtain an effective and accurate result. Moreover, since uplink vertical handover has an undesirably high propagation delay during protocol implementation, this study optimizes the existing execution process and proposes an early access strategy to reduce the handover delay. Both designs have been tested and are proved to be effective.

Yun Liu, Shenghao Ding, Jiaxin Huang, Hao Jiang, Jing Wu, Ruiliang Song, Ningning Lu, Zhiqun Song
Research on Graph Structure Data Adversarial Examples Based on Graph Theory Metrics

Graph neural networks can learn directly from graph-structured data and mine its information, and can be used in drug research and development, financial fraud prevention, and other fields. Existing research shows that graph neural networks lack robustness and are vulnerable to adversarial examples. At present, there are two problems in generating adversarial examples for graph neural networks: first, the properties of the graph structure are not fully used to characterize the adversarial examples; second, the gradient calculation is tied to the loss function rather than directly to the properties of the graph structure, which leads to an excessive search space. To solve these two problems, this paper proposes an adversarial example generation scheme for graph-structured data based on graph-theoretic metrics. The average distance and clustering coefficient are used as the basis for each perturbation step, and adversarial examples are generated while preserving the data characteristics. Experimental results on small-world networks and random graphs show that, compared with previous methods, the proposed method makes full use of the nature of the graph structure, does not need complex derivation, and takes less time to generate adversarial examples, which can meet the needs of iterative development.

Wenyong He, Mingming Lu, Yiji Zheng, Neal N. Xiong
Computation Offloading and Resource Allocation Based on Multi-agent Federated Learning

Mobile edge computing has been provisioned as a promising paradigm to provide User Equipment (UE) with powerful computing capabilities while maintaining low task latency, by offloading computing tasks from UE to the Edge Server (ES) deployed in the edge of networks. Due to ESs’ limited computing resources, dynamic network conditions and various UEs task requirements, computation offloading should be carefully designed, so that satisfactory task performances and low UE energy consumption can both be achieved. Since the scheduling objective function and constraints are typically non-linear, the scheduling of computation offloading is generally NP-hard and difficult to obtain optimal solutions. To address the issue, this paper combines deep learning and reinforcement learning, namely deep reinforcement learning, to approximate the computation offloading policy using neural networks and without the need of labeling data. In addition, we integrate Multi-agent Deep Deterministic Policy Gradient (MADDPG) with the federated learning algorithm to improve the generalization performance of the trained neural network model. According to our simulation results, the proposed approach can converge within 10 thousand steps, which is equivalent to the method based on MADDPG. In addition, the proposed approach can obtain lower cost and better QoS performance than the approach based only on MADDPG.
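The federated step layered on top of multi-agent training can be pictured as a weighted average of the agents' actor parameters; in the sketch below, plain numpy arrays stand in for network weights, the MADDPG training itself is omitted, and all names are illustrative assumptions.

```python
# Minimal sketch of the federated step layered on multi-agent RL: after local
# training, agents' actor parameters are averaged (weighted by experience count)
# and broadcast back. Plain numpy arrays stand in for network weights.
import numpy as np

def federated_average(agent_weights, sample_counts):
    """agent_weights: list of per-agent weight lists (one numpy array per layer)."""
    total = float(sum(sample_counts))
    layers = len(agent_weights[0])
    averaged = []
    for layer in range(layers):
        acc = sum(w[layer] * (n / total) for w, n in zip(agent_weights, sample_counts))
        averaged.append(acc)
    return averaged

# Three agents, each with a two-layer "actor" of toy parameters.
rng = np.random.default_rng(7)
agents = [[rng.normal(size=(4, 2)), rng.normal(size=(2,))] for _ in range(3)]
global_actor = federated_average(agents, sample_counts=[500, 1200, 800])
agents = [[layer.copy() for layer in global_actor] for _ in agents]   # broadcast back
print([layer.shape for layer in global_actor])
```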

Yiming Yao, Tao Ren, Yuan Qiu, Zheyuan Hu, Yanqi Li
Multi-agent Computation Offloading in UAV Assisted MEC via Deep Reinforcement Learning

Due to their high maneuverability and flexibility, Unmanned Aerial Vehicles (UAVs) have become increasingly popular in Mobile Edge Computing (MEC), serving as edge platforms in infrastructure-unavailable scenarios such as disaster rescue and field operations. Owing to their limited payload, UAVs are typically equipped with limited computing and energy resources. Hence, it is crucial to design efficient edge computation offloading algorithms that achieve high edge computing performance while keeping energy consumption low. A variety of UAV-assisted computation offloading algorithms have been proposed, most of which schedule computation offloading in a centralized way and become infeasible when the network size increases greatly. To address this issue, we propose a semi-distributed computation offloading framework based on Multi-Agent Twin Delayed deep deterministic policy gradient (MATD3) to minimize the average system cost of the MEC network. We adopt the actor-critic reinforcement learning framework to learn an offloading decision model for each User Equipment (UE), so that each UE can make near-optimal computation offloading decisions on its own and does not suffer from the growth of the network size. Extensive experiments are carried out via numerical simulation, and the experimental results verify the effectiveness of the proposed algorithm.

Hang He, Tao Ren, Yuan Qiu, Zheyuan Hu, Yanqi Li
OPN-DTSP: Optimized Pointer Networks for Approximate Solution of Dynamic Traveling Salesman Problem

The Traveling Salesman Problem (TSP), a classic problem in combinatorial optimization, is a well-known NP-hard problem with a wide range of real-world applications. Dynamic TSP is a further extension of TSP whose dynamically changing information makes the problem even more complex. Over the years, researchers have proposed numerous excellent algorithms for this problem, from early exact algorithms to approximate algorithms, heuristics, and more recently, machine learning algorithms. However, these algorithms either only work on static TSP or have an unacceptable time consumption. To this end, we propose an optimized pointer network for approximately solving dynamic TSP, which produces high-quality approximate solutions with very low time consumption. We introduce an attention mechanism into our model to fuse the dynamically changing edge information with the statically invariant node coordinate information, and we use reinforcement learning to strengthen the model's decision-making. Finally, the superior performance for dynamic TSP at low time cost is verified in comparison experiments.

Zhixiang Xiao, Mingming Lu, Wenyong He, Jiawen Cai, Neal N. Xiong
A Novel Client Sampling Scheme for Unbalanced Data Distribution Under Federated Learning

Federated learning is a computation paradigm used to address privacy preservation and efficient collaborative computing. Especially in environments where edge devices face different data scenarios, enhancing prediction model accuracy is a challenge. Since the data distributions on different edge devices might not be independent and identically distributed, and because of the communication obstacles in the modern complicated wireless world, deciding which client devices should be sampled to contribute to the server's learning model is an essential problem. In this paper, instead of assuming uniformly distributed data sources, we adopt an agnostic data distribution assumption. An indicator called the client reward is defined and applied in the proposed client sampling algorithm. Combining it with loss functions redefined for the agnostic data distribution, a novel client sampling scheme is proposed and tested on real-world datasets. The experimental results show that the client sampling scheme improves prediction accuracy on unbalanced data sources from different edge devices and achieves reasonable computing efficiency.
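A minimal sketch of reward-weighted client sampling is given below: clients whose recent updates helped more are sampled more often. The reward update rule and sampling probabilities are assumptions for illustration, not the paper's client-reward definition or its agnostic-distribution loss.

```python
# Minimal sketch of reward-weighted client sampling under unbalanced data:
# clients whose updates were recently more useful are sampled more often.
# The reward update rule and probabilities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

class Sampler:
    def __init__(self, n_clients):
        self.rewards = np.ones(n_clients)          # optimistic initial rewards

    def sample(self, k):
        probs = self.rewards / self.rewards.sum()
        return rng.choice(len(self.rewards), size=k, replace=False, p=probs)

    def update(self, client_id, loss_drop):
        # Exponential moving average of how much this client's update helped.
        self.rewards[client_id] = 0.9 * self.rewards[client_id] + 0.1 * max(loss_drop, 0.0)

sampler = Sampler(n_clients=10)
for round_id in range(5):
    chosen = sampler.sample(k=3)
    for cid in chosen:
        simulated_loss_drop = rng.uniform(0, 2)    # stand-in for a real evaluation
        sampler.update(cid, simulated_loss_drop)
    print(round_id, chosen, np.round(sampler.rewards, 2))
```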

Bo Chen, Xiaoying Zheng, Yongxin Zhu, Meikang Qiu
A Novel Secure Speech Biometric Protection Method

In recent years, automatic speaker verification (ASV) has been widely used in speech biometrics. ASV systems are vulnerable to various spoofing attacks, such as synthesized speech (SS), voice conversion (VC), replay attacks, twin attacks, and impersonation attacks. Research on securing voice biometric systems across various security fields has attracted more and more interest, and the combination of trustworthiness and voice is particularly important now that biometric systems are widely used. We propose a novel secure speech biometric protection method. This article also summarizes previous research on spoofing attacks, focusing on SS, VC, and replay, as well as recent efforts to improve security and develop countermeasures for spoofing speech detection (SSD) tasks. At the same time, it points out the limitations and challenges of SSD tasks.

Lan Yang, Zongbo Wu, Jun Guo
Thunderstorm Recognition Based on Neural Network PRDsNET Models

Thunderstorms are a kind of severe weather with strong suddenness and destructive power, and accurate warning and forecasting remain difficult for the meteorological industry. In this paper, a neural network model with an encoder-decoder structure, PRDsNET, is constructed, which includes Causal LSTM units (an LSTM variant), Gradient Highway Units (GHU) serving as high-speed feature channels, and densely connected DenseBlock modules. Data from 11 SA/SB Doppler radars and lightning data from 2017 to 2020 in Hunan Province were used to verify the thunderstorm recognition effect. In addition, multiple network architectures with different encoders and decoders, and multiple loss functions and optimizers, were selected for cross-comparison experiments. Experimental results show that the model has an average hit rate of 95% and an average false alarm rate of 5%. The results are satisfactory and have broad application prospects in future meteorological automation work.

ShengChun Wang, DanYi Hu, ChangQing Zhou, JingYu Xu
The Development and Trend of Vehicle Functional Safety

In the development stage of vehicle applications, ensuring vehicle safety is always a key condition. However, current automotive functional safety design is challenged by a variety of factors, such as the complexity of the new generation of automotive electrical and electronic (E/E) architectures, the continuous updating of the automotive functional safety standard ISO 26262, the new AUTOSAR Adaptive Platform standard, and the resulting cost increases of different types. This paper summarizes the latest developments in vehicle functional safety design methods across the analysis, design, optimization, and operation stages: 1) functional safety analysis; 2) functional safety assurance; 3) safety-aware cost optimization; 4) safety-critical multi-functional scheduling. It then discusses the future trends of functional safety design methods, which will be oriented directly toward autonomous vehicles.

Jun Guo, Gejing Xu, Junjie Wu, Lan Yang, Han Deng
Design and Development of Simulation Software Based on AR-Based Torricelli Experiment

This paper proposes using AR technology to design and develop simulation software based on the Torricelli experiment. In physics experiment teaching, the Torricelli experiment clearly confirms the existence of atmospheric pressure with simple devices and easy-to-understand principles, and measures that one standard atmosphere is about 760 mm Hg. However, because the experiment may cause mercury hazards and other problems, actually performing the Torricelli experiment is not recommended. In this system, a smartphone scans pictures of the apparatus to obtain a virtual three-dimensional model, and interactive learning is conducted on the phone screen, ensuring the personal safety of teachers and students as well as environmental safety, while improving students' learning motivation and mastery of physics knowledge. In the development of this system, Unity3D is used as the main development tool, Vuforia is used to realize AR recognition, and the mainstream development language C# is selected for programming.

Yan Hui, Jie Zhan, Libin Jiao, Xiu Liang

Short Papers

Frontmatter
Study on the Organization and Governance of Bigdata for Lifelong Education

The open university carries out various types of academic and non-academic education to serve the city's citizens and has usually gained a good reputation locally. Previously, however, due to its inadequate hardware systems, fragmented information systems, and incomplete data collection, it carried out operations largely by empirical judgment and lacked objective data support, making it impossible to promote the development of a lifelong education system precisely. By collecting, integrating, and analyzing these lifelong education projects, converting them into lifelong education big data, and establishing various lifelong education data visualization and analysis models, we can promote the transformation of lifelong education from a discrete, demand-driven mode to a precision development mode driven by education big data, provide support for supply-side structural reform of lifelong education, and at the same time strengthen the leading role of the open university in the construction of a lifelong education system, thus promoting the high-quality development of lifelong education for citizens.

Li Ma, Zexian Yang, Wenyin Yang, Huihong Yang, Qidi Lao
Fault Location Technique of Distribution Power Network Based on Traveling Wave Measurement

The distribution network is the most important part of the power system. Distribution network fault location technology can help power companies find and eliminate faults quickly, and it plays a very important role in providing a reliable and continuous power supply. Based on the traveling wave detection principle, this paper puts forward a corresponding solution for a distribution network fault location system and presents different types of traveling-wave fault location methods.

Chunyi Lu, Kaili Yan, Mi Zhou
Unikernel and Advanced Container Support in the Socker Tool

Linux containers, with their build-once-run-anywhere approach, are becoming popular among scientific communities for software packaging and sharing. Docker is the most popular and user-friendly platform for running and managing Linux containers. Unikernels are single-application, fully virtualised lightweight packages designed to run as virtual machines; for some applications, unikernels can be an alternative to containers because of the performance and security benefits they provide. This paper presents an update to Socker, a wrapper for running Docker containers on Slurm that enforces running unprivileged containers within Slurm jobs. The update includes improved security, MPI support, and support for OSv unikernels.

Abdulrahman Azab
Backmatter
Metadata
Title
Smart Computing and Communication
Edited by
Prof. Dr. Meikang Qiu
Keke Gai
Han Qiu
Copyright Year
2022
Electronic ISBN
978-3-030-97774-0
Print ISBN
978-3-030-97773-3
DOI
https://doi.org/10.1007/978-3-030-97774-0