
2024 | Book

Parallel and Distributed Computing, Applications and Technologies

Proceedings of PDCAT 2023

Edited by: Ji Su Park, Hiroyuki Takizawa, Hong Shen, James J. Park

Publisher: Springer Nature Singapore

Book series: Lecture Notes in Electrical Engineering


About this book

This book constitutes the refereed proceedings of the International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), which was held in Jeju, Korea, in August 2023. The papers in this volume are organized in topical sections on wired and wireless communication systems, high-dimensional data representation and processing, networks and information security, computing techniques for efficient network design, and electronic circuits for communication systems.

Table of Contents

Frontmatter
A Blockchain System for Fake News Detection
Abstract
The media greatly shape how the world is perceived and what is understood to be happening in it. Nowadays, the Internet is the medium where information from all over the world converges. Every year, more and more news websites are created from which we learn about the reality surrounding us. The problem is that many of these websites do not use primary, reliable sources, but only sources that have already been processed and duplicated. Very often, these portals not only omit facts important to a given event but also reproduce false information. There are also situations where fake news is deliberately created and disseminated. The phenomenon called “fake news” has long been known as disinformation or propaganda. Contemporary scientific analyses focus primarily on its new dimension, i.e. the dissemination of untrue or incorrect information on the Internet, the reasons for the popularity of fake news online, and the speed and manner of its dissemination. The most important problem, however, is preventing the spread of fake news. In this article, the authors propose a method of authorising content using blockchain technology. The presented method is resistant to manipulation attempts, as confirmed by our tests.
Janusz Bobulski
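To make the mechanism concrete, here is a minimal, illustrative sketch (not the authors' system) of how hash-chained blocks can authorise published content: each block commits to an article digest and to the previous block's hash, so any later edit to a published article breaks the chain.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class ContentChain:
    """Append-only chain: each block commits to an article and the previous hash."""
    def __init__(self):
        genesis = {"index": 0, "timestamp": 0.0, "content_digest": "", "prev_hash": ""}
        self.blocks = [genesis]

    def publish(self, article_text: str) -> dict:
        block = {
            "index": len(self.blocks),
            "timestamp": time.time(),
            "content_digest": hashlib.sha256(article_text.encode()).hexdigest(),
            "prev_hash": block_hash(self.blocks[-1]),
        }
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Any tampering with a published article breaks the hash links."""
        return all(
            self.blocks[i]["prev_hash"] == block_hash(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )
```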
Formalization and Verification of the Zab Protocol Using CSP
Abstract
ZooKeeper Atomic Broadcast (Zab) is a high-performance atomic broadcast protocol and a key component of Apache ZooKeeper. By ensuring strong consistency and fault tolerance, the Zab protocol plays a crucial role in building robust and resilient distributed systems. However, the correctness and reliability of the Zab protocol have received limited attention in research. Thus, we employ Communicating Sequential Processes (CSP) to analyze and evaluate the Zab protocol's properties and behavior. We utilize the Process Analysis Toolkit (PAT) to verify six important properties: Deadlock Freedom, Divergence Freedom, Data Reachability, Consistency, Sequentiality and Atomicity. The verification results demonstrate that the Zab protocol provides assurance of correctness and reliability.
Wenting Dong, Jiaqi Yin, Sini Chen, Huibiao Zhu
Using MPI's Non-Blocking Allreduce for Health Checks in Dynamic Simulations
Abstract
Large-scale simulations often require frequent checks on global conditions that are not directly needed for the computation itself. While such health checks are not an integral part of the numerical algorithm, they serve an important role in controlling and coordinating the simulation. In distributed parallel computations, their required communication may negatively impact the actual parallel computation and introduce unnecessary synchronization points. We show that by using a non-blocking reduction these synchronization requirements can be loosened and the impact on the actual computation minimized. Further, this enables us to shift the communication into the background and progress it during the MPI calls that are made during the computation anyway. We demonstrate that a sufficient number of MPI calls in between is required to allow this progress to happen. The presented approach delays the decision made in response to those health checks, but as that decision is not vital for the correctness of the computation itself, such a delay is usually tolerable and could offer more robust scaling to large process counts.
Jana Gericke, Harald Klimach, Neda Ebrahimi Pour, Sabine Roller
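The pattern described here maps naturally onto MPI_Iallreduce. Below is a hedged mpi4py sketch of that pattern, not the authors' code: the reduction runs in the background and is only tested, never waited on, inside the time loop (do_compute_step and local_condition_ok are stand-ins).

```python
# Sketch of the pattern with mpi4py; run e.g.: mpiexec -n 4 python health.py
import numpy as np
from mpi4py import MPI

def do_compute_step():          # stand-in for the real numerical kernel
    pass

def local_condition_ok():       # stand-in health probe (always healthy here)
    return True

comm = MPI.COMM_WORLD
local_ok = np.array([1], dtype="i")
global_ok = np.empty(1, dtype="i")
req = None

for step in range(1000):
    do_compute_step()
    if req is None:                        # start a new health-check round
        local_ok[0] = int(local_condition_ok())
        req = comm.Iallreduce(local_ok, global_ok, op=MPI.LAND)
    elif req.Test():                       # progress check, never a blocking wait
        if not global_ok[0]:
            pass                           # react to failure (delayed, but tolerable)
        req = None
```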
Parallelizable Loop Detection using Pre-trained Transformer Models for Code Understanding
Abstract
Parallel programming is essential to utilize multi-core processors but remains challenging because it requires extensive knowledge of both software and hardware. Various automatic parallelization tools based on static analysis have been developed to ease the development of parallel programs. However, hand-parallelized code still outperforms auto-parallelized code. Meanwhile, transformer-based large language models have made ground-breaking progress in code understanding and generation tasks. In this paper, we fine-tune a transformer-based code understanding model, CodeT5, to create a model for automatically identifying parallelizable for-loops. The trained model helps developers identify independent for-loops that can potentially be parallelized using tools such as OpenMP to improve program performance. Our model is trained on 90,908 for-loops collected from 9 million C/C++ source files in public GitHub repositories, and achieves a 0.895 F1 score in identifying parallelizable for-loops in public GitHub projects and a 0.713 F1 score on the NAS Parallel Benchmarks suite.
Soratouch Pornmaneerattanatri, Keichi Takahashi, Yutaro Kashiwa, Kohei Ichikawa, Hajimu Iida
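A rough sketch of what one fine-tuning step on such data could look like, using the Hugging Face transformers API and the public Salesforce/codet5-base checkpoint; the seq2seq label scheme and the loop snippet are illustrative assumptions, not the authors' pipeline.

```python
# pip install torch transformers
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)

loop_src = "for (int i = 0; i < n; i++) { c[i] = a[i] + b[i]; }"
label = "parallelizable"   # assumed target vocabulary: parallelizable / sequential

inputs = tok(loop_src, return_tensors="pt", truncation=True, max_length=512)
targets = tok(label, return_tensors="pt").input_ids

loss = model(**inputs, labels=targets).loss   # seq2seq-style classification loss
loss.backward()
optim.step()
```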
SAHF-LightPoseResNet: Spatially-Aware Attention-Based Hierarchical Features Enabled Lightweight PoseResNet for 2D Human Pose Estimation
Abstract
In recent years, 2D human pose estimation (HPE) has become increasingly important in complex computer vision tasks, including understanding human behavior and interaction. Despite challenges like occlusion, unfavorable lighting, and motion blur, deep learning techniques have revolutionized 2D HPE by allowing automatic feature learning from data and improving generalization. We propose a new model called Spatially-aware Attention-based Hierarchical Features Enabled Lightweight PoseResNet (SAHF-LightPoseResNet) for 2D HPE. This model extends the simple baseline network with spatially-aware attention-based hierarchical features to enhance accuracy while minimizing parameters. The proposed model efficiently captures finer details by incorporating ResNet18, Global Context Blocks, and a novel SAHF module. Our SAHF-LightPoseResNet demonstrates superior performance compared to existing state-of-the-art methods, achieving a PCKh@0.5 of 90.8 and a Mean@0.1 of 41.1, highlighting its enhanced accuracy and efficiency. The model has important practical applications in robotics, gaming, and human-computer interaction, where accurate and efficient 2D HPE is essential.
Ali Zakir, Sartaj Ahmed Salman, Hiroki Takahashi
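The paper's SAHF module is not specified here, but the Global Context Block it builds on follows a well-known pattern (Cao et al.): gather a global descriptor with a softmax attention map, transform it, and add it back. A generic PyTorch sketch, with the reduction ratio as an assumed parameter:

```python
import torch
import torch.nn as nn

class GlobalContextBlock(nn.Module):
    """Generic global-context block: softmax-pooled context + bottleneck transform."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)
        hidden = channels // reduction
        self.transform = nn.Sequential(
            nn.Conv2d(channels, hidden, 1),
            nn.LayerNorm([hidden, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        weights = torch.softmax(self.attn(x).view(b, 1, h * w), dim=-1)    # (b,1,hw)
        context = torch.bmm(weights, x.view(b, c, h * w).transpose(1, 2))  # (b,1,c)
        context = context.transpose(1, 2).view(b, c, 1, 1)
        return x + self.transform(context)                                 # residual add

feat = torch.randn(2, 64, 32, 32)          # stand-in ResNet18 feature map
print(GlobalContextBlock(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```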
List-Based Workflow Scheduling Utilizing Deep Reinforcement Learning
Abstract
Workflow scheduling is a well-known NP-complete research problem with wide applications and increasing importance. Traditionally, heuristics and guided random search methods, e.g. genetic algorithms, are the two major categories of scheduling approaches developed to tackle this challenging problem. With the rise of deep reinforcement learning (DRL), this paper applies it to the workflow scheduling problem in two different ways. The first uses DRL as an iterative optimization method to find the best schedule for a specific workflow. In the second, DRL is used to train a neural network that can schedule new workflows not in the training set. Our DRL-based workflow scheduling method is based on the policy gradient (PG) reinforcement learning algorithm and utilizes a convolutional neural network (CNN). Experimental results show that our DRL-based method produces more efficient workflow execution schedules than state-of-the-art heuristic-based scheduling algorithms. The superior performance of the DRL-based method indicates a promising direction for future research on workflow scheduling.
Wei-Cheng Tseng, Kuo-Chan Huang
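A toy REINFORCE (policy-gradient) step for task-to-processor assignment, to illustrate the training signal: the negative makespan acts as the reward. The paper uses a CNN policy; this sketch substitutes a small MLP and a stand-in simulate_schedule environment, so it is a shape of the method, not the authors' implementation.

```python
import torch
import torch.nn as nn

def simulate_schedule(action) -> float:    # stand-in environment: returns makespan
    return float(action) + 1.0

n_tasks, n_procs = 8, 4
policy = nn.Sequential(nn.Linear(n_tasks, 64), nn.ReLU(), nn.Linear(64, n_procs))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

state = torch.randn(n_tasks)               # stand-in workflow features
dist = torch.distributions.Categorical(logits=policy(state))
action = dist.sample()                     # processor chosen for the next task

reward = -simulate_schedule(action)        # shorter makespan = higher reward
loss = -dist.log_prob(action) * reward     # REINFORCE gradient estimator
opt.zero_grad(); loss.backward(); opt.step()
```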
Federated Learning for Skin Cancer Classification
Abstract
Cancer is one of the deadliest diseases globally, and early detection is crucial for effective treatment. This paper proposes a new method that adopts federated learning for skin cancer classification. Experimental results demonstrate the potential of the proposed method for enhancing the accuracy of skin cancer classification: it achieves 98% accuracy on the HAM10000 dataset [1].
Zhe-Kai Xu, Yen-Wen Lin
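The aggregation step of federated learning can be illustrated with the classic FedAvg rule; a minimal sketch assuming PyTorch state_dicts of float parameters (the paper's exact aggregation may differ). The key privacy property: only model parameters travel, never patient images.

```python
import copy
import torch
import torch.nn as nn

def fed_avg(client_states, weights):
    """Weighted FedAvg aggregation over client state_dicts (float tensors)."""
    total = sum(weights)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(w * s[key] for s, w in zip(client_states, weights)) / total
    return avg

# Three clinics train local copies; the server only ever sees parameters.
model = nn.Linear(16, 7)                   # stand-in for the real classifier
clients = [copy.deepcopy(model) for _ in range(3)]
# ... each client trains locally on its own skin-lesion images here ...
global_state = fed_avg([c.state_dict() for c in clients],
                       weights=[200, 350, 120])   # e.g. local dataset sizes
model.load_state_dict(global_state)
```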
A Task Offloading and Content Caching Strategy for the Internet of Vehicles in Cloud-Edge Environment
Abstract
A wide range of emerging in-vehicle applications can improve the travel experience for users. As the number of vehicles on the road increases, so does the number of computational tasks that need to be processed; moreover, different vehicle users may request the same content, resulting in wasted resources. Therefore, the Internet of Vehicles (IoV) requires better computation offloading and content caching strategies to improve performance with respect to time latency and energy consumption. This paper proposes a joint task offloading and content caching optimization method based on traffic stream forecasting, called TOCC. First, temporal and spatial correlations are extracted from the preprocessed dataset using FOST and integrated to predict the traffic stream, yielding the number of tasks in the region at the next moment. To obtain a suitable joint optimization strategy for task offloading and content caching, the multi-objective problem of minimizing delay and energy consumption is decomposed into multiple single-objective problems using an improved MOEA/D via the Tchebycheff weight aggregation method, and a set of Pareto-optimal solutions is obtained. Finally, experimental results show the effectiveness of TOCC's task offloading and content caching strategies and that TOCC outperforms other methods with respect to time delay and energy consumption.
Yaping Wang, Junye Qiao, Zekun Hu, Pengwei Wang
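The Tchebycheff decomposition used by MOEA/D is compact enough to show directly. In a sketch with assumed delay/energy numbers (purely illustrative), each weight vector defines one scalar subproblem to minimize:

```python
def tchebycheff(f, lam, z_star):
    """Tchebycheff aggregation: g(x | lam, z*) = max_i lam_i * |f_i(x) - z*_i|.
    MOEA/D minimizes this scalar for each weight vector lam."""
    return max(l * abs(fi - zi) for fi, l, zi in zip(f, lam, z_star))

# Two objectives: delay and energy. One weight vector per subproblem.
delay, energy = 0.42, 1.7      # objective values of a candidate solution (assumed)
z_star = (0.30, 1.2)           # best delay/energy seen so far (ideal point)
print(tchebycheff((delay, energy), lam=(0.5, 0.5), z_star=z_star))  # 0.25
```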
Privacy-Preserving Retrieval Scheme Over Encrypted Medical Records with Relevance Ranking
Abstract
Electronic medical records (EMRs) contain a large amount of highly private and sensitive information about patients and medical institutions. For privacy, EMRs are usually encrypted before being outsourced to a cloud storage platform. However, it is difficult to retrieve encrypted EMRs accurately and efficiently, and existing encrypted-data retrieval schemes can hardly achieve fuzzy multi-keyword search, relevance ranking and high retrieval accuracy simultaneously. Thus, this paper proposes a Privacy-preserving Retrieval scheme over Encrypted Medical Records (PREMR) that satisfies these goals. We utilize the Possibility-Levenshtein based Spelling Corrector (PLSC) to support fuzzy multi-keyword input. A homomorphic encryption algorithm is proposed for relevance-score encryption and calculation, so that encrypted medical records can be ranked without leaking private information. We theoretically prove that our scheme achieves data confidentiality and privacy preservation. Through experimental evaluation, we analyze the costs and efficiency of our scheme. Finally, a comparison of PREMR with other related schemes shows that our scheme is more efficient and secure.
Wanting Lei, Xiehua Li, Yingzhu Wang, Xiaoyu Mei
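The paper proposes its own homomorphic construction; as a stand-in, the additively homomorphic Paillier scheme (via the python-paillier package) illustrates the core idea of ranking without decryption: the cloud can aggregate per-keyword relevance scores on ciphertexts, while only the key owner can read the result.

```python
# pip install phe  (python-paillier; a stand-in for the paper's own scheme)
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Per-keyword relevance scores of one record, encrypted on the client side.
enc_scores = [public_key.encrypt(s) for s in (3, 0, 7)]

# The cloud aggregates relevance additively without seeing any plaintext score.
enc_total = enc_scores[0] + enc_scores[1] + enc_scores[2]

assert private_key.decrypt(enc_total) == 10   # only the key owner can decrypt
```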
A Data-Centric Approach for Efficient and Scalable CFD Implementation on Multi-GPUs Clusters
Abstract
Scalability is a crucial factor determining the performance of massive heterogeneous parallel CFD applications on multi-GPU platforms, particularly after single-GPU implementations have achieved optimal performance through numerous optimizations. A novel Data-Centric hybrid MPI-CUDA CFD model is proposed in this paper to enable efficient scaling of CFD applications on large-scale heterogeneous platforms. Based on the Data-Centric approach, a minimum-cost MPI transfer strategy and a code refactoring technique are realized for a better balance between data transfer and floating-point computation, significantly improving scalability and reducing time-to-solution. These approaches are integrated into the industrial unstructured CFD software FlowStar to evaluate their effectiveness. Numerical results demonstrate that the minimum-cost MPI strategy achieves more than a 2.0× performance improvement over the traditional Model-Centric implementation, and the code refactoring technique boosts performance by 40% to 50% over the minimum-cost MPI version. Moreover, the Data-Centric implementation on a 64-GPU A100 platform shows a speedup of over 120 compared to the original MPI implementation with 64 ranks.
Ruitian Li, Liang Deng, Zhe Dai, Jian Zhang, Jie Liu, Gang Liu
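One standard ingredient of balancing transfer against computation is overlapping halo exchange with interior work. A generic mpi4py sketch of that pattern follows (a 1-D stand-in, not FlowStar's Data-Centric implementation): non-blocking sends/receives are posted, the interior stencil is computed while messages are in flight, and only then are the halos consumed.

```python
# Run e.g.: mpiexec -n 4 python halo.py   (assumes at least 3 ranks)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

field = np.random.rand(1024)               # this rank's 1-D partition
halo_l, halo_r = np.empty(1), np.empty(1)

reqs = [comm.Isend(field[:1], dest=left),  comm.Isend(field[-1:], dest=right),
        comm.Irecv(halo_l, source=left),   comm.Irecv(halo_r, source=right)]

interior = 0.5 * (field[2:] + field[:-2])  # interior stencil needs no halo data
MPI.Request.Waitall(reqs)                  # halos arrived while we computed
# ...now the two boundary points can be updated using halo_l and halo_r...
```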
Research on Psychological Testing Methods of Criminal Suspects Based on Multi-features of EEG
Abstract
P300 is a commonly used indicator for psychological testing of criminal suspects, but it suffers from weak signals and large amounts of data to process. To address these problems, based on experiments simulating real case data, we propose a psychological test method for criminal suspects based on multi-feature extraction of EEG signals in the time, frequency, and time-frequency domains. To achieve psychological testing of criminal suspects in public security investigations, we used existing data to test and adjust the model. In the time domain, the signal-to-noise ratio was improved by superimposed averaging, the P300 component was extracted, and its amplitude and latency were taken as the time-domain features. In the frequency domain, the relationship between EEG power and frequency reflected by power spectrum estimation was used as the frequency-domain features. In the time-frequency domain, the wavelet approximation coefficients of the corresponding frequency bands, extracted by the Mallat algorithm, were used as the time-frequency features; these were selected through F-score. Finally, SVM was used as the classifier, with the optimal penalty factor and kernel function selected through cross-validation and dynamic grid search. The results show that multi-feature extraction can reflect the essential characteristics of a suspect's EEG signal, reduce the amount of data processing, and achieve higher classification accuracy.
Yijie Peng, Xiaofan Zhao
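The classifier-selection step maps directly onto scikit-learn. A sketch with random stand-in features in place of the real EEG feature vectors; the paper's dynamic grid is approximated here by a fixed grid over the penalty factor and kernel:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X = np.random.rand(120, 24)          # stand-in multi-domain EEG feature vectors
y = np.random.randint(0, 2, 120)     # 1 = guilty-knowledge response, 0 = control

grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10, 100], "kernel": ["linear", "rbf"]},
    cv=5,                            # cross-validated model selection
)
grid.fit(X, y)
print(grid.best_params_)             # selected penalty factor and kernel
```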
Insider Trading Detection Algorithm in Industrial Chain Based on Logistics Time Interval Characteristics
Abstract
Insider trading is becoming increasingly prevalent with the rapid development of industrial chains. Insider trading refers to the illegal practice of trading on the basis of inside information. Existing insider trading detection methods for industrial chains do not account for inefficient industrial-chain data characteristics and long trading time spans, resulting in poor algorithm performance. To solve these problems, this paper proposes an algorithm for detecting insider trading in industrial chains based on logistics time-interval characteristics. Firstly, to address the inefficiency of industrial-chain data characteristics, the algorithm introduces a logistics index construction method for describing the whole process of insider trading behavior. Secondly, to address the long time span of transactions, a dynamic sliding window method is proposed. Finally, the isolation forest algorithm is improved to identify abnormal data. Validated on a real data set, the results show that, compared with the plain isolation forest method, using the logistics time-interval features improves the F1 value on the industrial-chain insider trading detection problem by 20.68%.
Fulin Chen, Kai Di, Hansi Tao, Yuanshuang Jiang, Pan Li
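A minimal sketch of isolation-forest detection over windowed logistics-interval features; the data and the fixed window are stand-ins (the paper's window is dynamic and its forest is modified):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Stand-in logistics time-interval features: one row per transaction,
# columns = gaps (days) between consecutive logistics checkpoints.
intervals = np.abs(np.random.normal(loc=3.0, scale=1.0, size=(500, 4)))
intervals[::50] *= 5                      # inject a few anomalous long gaps

window = 200                              # fixed window here; the paper's is dynamic
for start in range(0, len(intervals) - window + 1, window):
    chunk = intervals[start:start + window]
    labels = IsolationForest(random_state=0).fit_predict(chunk)   # -1 = outlier
    suspects = np.where(labels == -1)[0] + start
    print(f"window {start}: {len(suspects)} suspicious transactions")
```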
Link Attributes Based Multi-service Routing for Software-Defined Satellite Networks
Abstract
Satellite networks are a potential complement to terrestrial networks and are expected to provide full-coverage broadband access anywhere, anytime. As satellite networks scale up, the Software-Defined Satellite Network (SDSN) is a promising paradigm due to its higher flexibility in network management. However, in SDSNs with highly time-varying characteristics, traditional terrestrial routing strategies can hardly meet the QoS requirements of diverse services. In this paper, we propose a Link-Attributes-based multi-service On-Demand Routing (LAODR) algorithm under the SDSN architecture. It quantifies the reliability of Inter-Satellite Links (ISLs) and provides a fine-grained state description of the dynamic topology. Furthermore, we select the K-shortest paths as the solution space and allocate link resources based on LAODR to meet the diverse service demands of users. We implement LAODR and conduct experiments using real network topologies. The results validate that LAODR not only satisfies the QoS requirements of different types of services but also outperforms other routing algorithms in terms of mean end-to-end latency, packet loss ratio, throughput and node congestion degree.
Xueyu Lu, Wenting Wei, Liying Fu, Dong Zhang
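Building a K-shortest-path solution space can be sketched with networkx's Yen-style generator; the toy topology and edge weights below are assumptions, meant only to show the mechanics:

```python
from itertools import islice
import networkx as nx

# Stand-in satellite topology: nodes are satellites, edge weights model
# ISL cost (e.g., latency scaled by a reliability penalty).
G = nx.Graph()
G.add_weighted_edges_from([
    ("S1", "S2", 5), ("S2", "S4", 4), ("S1", "S3", 7),
    ("S3", "S4", 2), ("S2", "S3", 1),
])

def k_shortest_paths(graph, src, dst, k):
    """K shortest loopless paths (Yen-style) via networkx."""
    return list(islice(nx.shortest_simple_paths(graph, src, dst, weight="weight"), k))

for path in k_shortest_paths(G, "S1", "S4", k=3):
    print(path)   # candidate routes to match against per-service QoS demands
```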
A Fuzzy Logical RAT Selection Scheme in SDN-Enabled 5G HetNets
Abstract
Mobile communication systems are witnessing an ongoing increase in connected devices and new types of services. This considerable increase has led to exponential growth in mobile data traffic volume. The dense deployment of small base stations and mobile nodes in traffic hotspots is considered one of the potential solutions for satisfying the emerging requirements of 5G/Beyond-5G wireless networks. However, ultra-densification poses challenges for mobility management, including frequent, unnecessary and ping-pong handovers, with additional problems related to increased delay and total failure of the handover process. In this paper, we propose a new handover management approach using the Software-Defined Networking (SDN) paradigm to overcome performance limitations linked to handovers in dense femtocell environments. By exploiting SDN, the data plane and control plane are separated, so the handover decision can be made at the SDN controller. In addition, to reduce the complexity and delay of the handover process, a fuzzy logic system is used to decide whether a target candidate is suitable for handover. Simulation results validate the efficiency of our proposal.
Khitem Ben Ali, Faouzi Zarai
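A hand-rolled toy of the fuzzy decision step; the paper's actual membership functions and rule base are not given here, so the input variables, breakpoints and single rule below are all assumptions made for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def handover_score(rssi_dbm, load, speed_kmh):
    """Toy Mamdani-style rule: how suitable is a target femtocell?"""
    good_signal = tri(rssi_dbm, -90, -70, -50)
    low_load    = tri(load, -0.2, 0.0, 0.7)
    slow_user   = tri(speed_kmh, -10, 0, 60)
    # Rule: IF signal good AND load low AND user slow THEN handover suitable.
    return min(good_signal, low_load, slow_user)

print(handover_score(rssi_dbm=-65, load=0.3, speed_kmh=20))  # ~0.57
```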
SSR-MGTI: Self-attention Sequential Recommendation Algorithm Based on Movie Genre Time Interval
Abstract
As an important part of recommendation systems, a movie recommendation system can recommend movies to users accurately according to their preferences. Traditional movie recommendation systems simply treat user-movie interactions as a time-ordered sequence, without considering the time intervals between movies of the same genre. The genre time interval reflects the user's preference for a particular genre and determines whether the algorithm can fully capture the user's interests and the temporal characteristics of the movies, which plays an important role in recommendation accuracy. Therefore, in this paper, we propose a Self-Attention Sequential Recommendation algorithm based on Movie Genre Time Interval (SSR-MGTI). Specifically, a multi-head self-attention mechanism is used to model same-genre time-interval information. An absolute position encoding is added to the multi-head self-attention model to address its lack of sequence awareness. In addition, a convolutional neural network is used to make the model non-linear and to extract local information from user-movie interaction sequences. Notably, the proposed SSR-MGTI can accurately predict the movie that a user will watch next. Experimental results on MovieLens and Amazon datasets demonstrate the superiority of SSR-MGTI over state-of-the-art movie recommendation methods.
Wen Yang, Ruibo Yue, Yawen Chen, Jun Zhao
Fine Time Granularity Allocation Optimization of Multiple Networks Industrial Chains in Task Processing Systems
Abstract
As the industrial division of labor becomes increasingly specialized, various collaborative relationships between industrial chains develop, forming complex multi-networks. In the task processing system of multi-network industrial chains, there are dynamic online tasks whose arrival times and deadlines cannot be accurately predicted. It is therefore necessary to divide scheduling into finer time granularity to improve response speed, efficiency, and timeliness, while ensuring the task completion rate and minimizing task cost. In this paper, we study the characteristics of online tasks in multi-network industrial chains and design a corresponding online scheduling framework. We analyze the arrival of online tasks in real-world scenarios and propose a passive scheduling algorithm based on the characteristics of different scenarios. The algorithm is tested on several sets of simulated data. Compared with previous heuristic algorithms, our algorithm achieves better results in terms of task completion time, energy cost, and task completion rate in scenarios with fine time granularity.
Pan Li, Kai Di, Xinlei Bai, Yuanshuang Jiang, Fulin Chen
ε-Maximum Critic Deep Deterministic Policy Gradient for Multi-agent Reinforcement Learning
Abstract
In Multi-Agent Reinforcement Learning, agents are vulnerable to the other agents and the training environment, which can easily trap an agent's policy in a local optimum and cause poor convergence efficiency. To tackle these challenges, we propose a novel algorithm, the ε-Maximum Critic Multi-Agent Deep Deterministic Policy Gradient algorithm (ε-M2DDPG), which leverages a new critic technique called ε-Maximum Critic to balance exploitation and exploration in updating the Q-value function. We empirically evaluate our algorithm in three kinds of mixed cooperative and communication environments. The experimental results demonstrate that our algorithm significantly accelerates the learning process and outperforms the existing baseline algorithm MADDPG.
Yuanshuang Jiang, Kai Di, Zhongjian Hu, Fulin Chen, Pan Li, Yichuan Jiang
Effective Density-Based Concept Drift Detection for Evolving Data Streams
Abstract
Concept drift is a common phenomenon in evolving data streams across a wide range of applications, including credit card fraud protection, weather forecasting and network monitoring. For online data streams it is difficult to determine a proper sliding-window size for concept drift detection, making existing dataset-distance based algorithms ineffective in practice. In this paper, we propose a novel framework of Density-based Concept Drift Detection (DCDD) for detecting concept drifts in data streams using density-based clustering on a variable-size sliding window whose size is adjusted dynamically. DCDD uses XGBoost (eXtreme Gradient Boosting) to predict the amount of data in the same concept and adjusts the sliding-window size accordingly, based on the collected information about concept drifting. To detect concept drift between two datasets, DCDD calculates the distance between them using a new detection formula that treats the age of data as a weight for old data, and measures the distance between the data in the current sliding window and all data in the current concept, rather than between two adjacent windows as in the existing work DCDA [2]. This yields an observable improvement in detection accuracy and a significant improvement in detection efficiency. Experimental results show that our framework detects concept drift more accurately and efficiently than the existing work.
Zelin Cui, Hui Tian, Hong Shen
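One component, predicting how much data the current concept will span in order to size the sliding window, can be sketched with XGBoost's scikit-learn wrapper. The features, target and window-derivation factor below are invented placeholders, not the paper's formulation:

```python
import numpy as np
from xgboost import XGBRegressor

# Stand-in training data: summary statistics of past concepts (placeholder
# features) and how many records each concept lasted (the target).
X = np.random.rand(300, 6)                 # e.g. cluster counts, densities, ...
y = np.random.randint(500, 5000, 300)      # records until the concept drifted

model = XGBRegressor(n_estimators=200, max_depth=4)
model.fit(X, y)

expected_len = model.predict(X[:1])[0]     # predicted span of current concept
window_size = int(expected_len * 0.1)      # the 0.1 factor is an assumption
print(window_size)
```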
An End-to-End Multiple Hyper-parameters Prediction Method for Distributed Constraint Optimization Problem
Abstract
The Distributed Constraint Optimization Problem (DCOP) is an important model for multi-agent systems and has been widely used in various fields. When a large-scale DCOP is implemented on a supercomputer, various parameters need to be chosen, and the completion time varies widely across different parameter combinations. Automatically providing accurate operating parameters for a DCOP can improve execution speed and enable the rational use of computational resources. However, the number of DCOP hyper-parameters is huge, and correlations exist between them, which makes predicting multiple hyper-parameters difficult. In this paper we propose a new framework combining a graph neural network and a recurrent neural network. The results show that our framework outperforms the SODA method.
Chun Chen, Yong Zhang, Li Ning, Shengzhong Feng
Dynamic Priority Coflow Scheduling in Optical Circuit Switched Networks
Abstract
OCS (Optical Circuit Switch) is increasingly popular for accelerating data transmission of coflows due to its higher bandwidth and lower power consumption compared with EPS (Electronic Packet Switch), where a coflow is a collection of related parallel flows between two computation stages in data-intensive applications. However, the extra port constraints and reconfiguration delay of OCS obstruct the efficiency of OCS operations. This paper studies the problem of coflow scheduling in the OCS of datacenter networks to minimize the total Coflow Completion Time (CCT). We propose a Dynamic Priority Coflow Scheduling Algorithm that schedules coflows preemptively by considering coflow transmission time and OCS reconfiguration delay jointly to dynamically update each coflow's priority, which can significantly reduce the waiting time of small coflows and reduce head-of-line blocking. Extensive simulations based on Facebook data traces show that our approach outperforms the state-of-the-art scheme OMCO [19] significantly, and transmits multiple coflows 1.30× faster than OMCO.
Hongkun Ren, Hong Shen, Xin Wang
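A toy illustration of dynamic priorities that jointly weigh remaining transmission time and reconfiguration delay, so small coflows jump the queue; the priority formula and constants are assumptions for illustration, not the paper's algorithm:

```python
import heapq

RECONF_DELAY = 0.02   # assumed seconds per OCS circuit reconfiguration

def priority(coflow):
    """Smaller value = served sooner: remaining time plus reconfiguration cost."""
    return coflow["remaining_bytes"] / coflow["rate"] + RECONF_DELAY * coflow["ports"]

coflows = [
    {"id": "shuffle-A", "remaining_bytes": 4e9, "rate": 1e10, "ports": 8},
    {"id": "shuffle-B", "remaining_bytes": 2e8, "rate": 1e10, "ports": 2},
]

heap = [(priority(c), c["id"]) for c in coflows]
heapq.heapify(heap)
while heap:
    _, cid = heapq.heappop(heap)   # small coflows go first, which mitigates
    print("schedule", cid)         # head-of-line blocking behind large ones
```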
Deep Reinforcement Learning Based Multi-WiFi Offloading of UAV Traffic
Abstract
With the growing network deployment of Unmanned Aerial Vehicles (UAVs), traffic offloading has been widely used to mitigate UAVs' limited communication bandwidth caused by limited battery capacity. Existing work on traffic offloading has focused on reducing the average delay of the system without considering fairness, and has assumed that data transmission follows line-of-sight propagation, which contradicts realistic situations in both urban and suburban areas. Achieving both fairness and system efficiency with non-line-of-sight user-UAV communication requires solving a complex non-convex optimization problem. This paper proposes an effective algorithm (NAPPO) for joint UAV navigation and user traffic allocation by applying deep reinforcement learning (DRL). NAPPO applies DRL to collect user information (position, data rate and traffic demand) and dynamically adjusts the UAV position and traffic allocation ratio to minimize the maximum delay and hence improve fairness (i.e., reduce the variation in delay between users). We show that minimizing the maximum delay is more effective than minimizing the average delay for achieving fairness while keeping the total delay at a reasonable level. Simulation results show that NAPPO achieves impressive performance on the maximum delay: 49.82% better than the heuristic algorithm and only 0.1536 s worse than the optimal solution.
Zhiyong Liu, Hong Shen
Triple-Path RNN Network: A Time-and-Frequency Joint Domain Speech Separation Model
Abstract
Studies in speech separation have achieved significant success in recent years. To correctly separate mixture signals, it is critical to encode them into an appropriate latent space. Existing speech separation methods transform mixed signals into either the frequency domain or the time domain. Frequency-domain features (spectrograms) are generated by the STFT, which is closely related to speech articulation and directly reflects the energy of speech. Time-domain features are learned in a latent embedding space, and separation is facilitated by the end-to-end structure. However, these methods rely on representations from only one domain, which is insufficient to provide a fully separable encoding space for speech separation. Therefore, a Triple-Path Recurrent Neural Network (TPRNN) that fuses features from both domains is proposed. It employs the spectrogram as auxiliary information to improve separation performance. Experimental results on the Wall Street Journal (WSJ0) dataset show that this approach improves speech separation performance.
Yu-Huan Zhai, Qiang Hua, Xiao-Wen Wang, Chun-Ru Dong, Feng Zhang, Da-Chuan Xu
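The two encoding spaces the paper fuses can be sketched in a few lines of PyTorch; the shapes and hyper-parameters below are illustrative assumptions, not the TPRNN configuration:

```python
import torch

# Frequency-domain branch: STFT spectrogram of a mixture, used as auxiliary
# information alongside learned time-domain features.
mixture = torch.randn(16000)                      # 1 s of audio at 16 kHz
spec = torch.stft(mixture, n_fft=512, hop_length=128,
                  window=torch.hann_window(512), return_complex=True)
magnitude = spec.abs()                            # (freq_bins, frames)

# Time-domain branch: a learned 1-D conv encoder over the raw samples.
encoder = torch.nn.Conv1d(1, 256, kernel_size=16, stride=8)
latent = encoder(mixture.view(1, 1, -1))          # (1, 256, frames')
print(magnitude.shape, latent.shape)              # two complementary views
```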
Design of Query Based Gallery Selector and Mask-Aware Loss for Person Search
Abstract
Person search is a challenging computer vision task that aims to simultaneously locate and identify a query person in panoramic images. To address the issue of scene similarity and its impact on search accuracy and efficiency, we propose a query-based gallery selector module that employs cosine similarity between candidate gallery images and the query person's feature embedding, then selects and reorders gallery images accordingly, improving both the accuracy and the efficiency of the search. Furthermore, we introduce a mask-aware mechanism that improves the localization loss function for predicted bounding boxes; during training, it guides the network to increase its robustness in occluded scenarios. Experimental results on the public person search datasets PRW and CUHK-SYSU demonstrate the effectiveness of our proposed method.
Qiang Hua, Ao Sun, Yu-Chen Liu, Feng Zhang, Chun-Ru Dong, Da-Chuan Xu
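The selector's ranking step is standard cosine similarity; a minimal numpy sketch with stand-in embedding sizes (the real embeddings would come from the re-identification network):

```python
import numpy as np

def rank_gallery(query_emb, gallery_embs, top_k=100):
    """Rank gallery images by cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                              # cosine similarity per image
    order = np.argsort(-sims)[:top_k]         # most similar scenes first
    return order, sims[order]

query = np.random.rand(256)                   # stand-in query-person feature
gallery = np.random.rand(10_000, 256)         # stand-in panoramic embeddings
idx, scores = rank_gallery(query, gallery)    # search only the top-ranked scenes
```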
A Privacy-Preserving Blockchain Scheme for the Reliable Exchange of IoT Data
Abstract
The Internet of Things (IoT) has been claimed to deliver comfort and a more satisfactory lifestyle. The data flows that connect IoT sub-systems are critical to the success of the whole system. However, there are concerns about how privacy-preserving these flows are. Most existing single-server-architecture solutions to the privacy problem have limitations regarding user privacy and anonymity: they may reveal users' regular activities and affect their integrity and confidentiality. Although blockchain technology is suitable for improving the security and privacy of IoT systems, blockchains lack privacy protection when personal user data is accessed, due to privacy threats from internal parties. Thus, we aim to build a protected environment for users to exchange data and maintain privacy, with an efficient authentication solution. We apply blockchain technology with the Elliptic Curve Integrated Encryption Scheme (ECIES) and message authentication regulation to enhance the security and privacy of the transmitted data. This provides reliable auditing of users' access history and efficient authentication within the system. We evaluate the outcomes to show the usefulness of this approach for IoT data and compare it with current work in the field.
Mnar Alnaghes, Nickolas Falkner, Hong Shen
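ECIES is commonly assembled from ECDH key agreement, a KDF, and an authenticated cipher. A sketch of that standard construction with the cryptography package follows; the curve, info string and message are assumptions, not the authors' parameters:

```python
# pip install cryptography — an ECIES-style construction (ECDH + HKDF + AES-GCM)
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

receiver_key = ec.generate_private_key(ec.SECP256R1())

def ecies_encrypt(receiver_pub, plaintext: bytes):
    eph = ec.generate_private_key(ec.SECP256R1())          # ephemeral key pair
    shared = eph.exchange(ec.ECDH(), receiver_pub)
    key = HKDF(hashes.SHA256(), 32, salt=None, info=b"iot-ecies").derive(shared)
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, plaintext, None)       # confidential + authenticated
    return eph.public_key(), nonce, ct

def ecies_decrypt(receiver_priv, eph_pub, nonce, ct):
    shared = receiver_priv.exchange(ec.ECDH(), eph_pub)
    key = HKDF(hashes.SHA256(), 32, salt=None, info=b"iot-ecies").derive(shared)
    return AESGCM(key).decrypt(nonce, ct, None)

eph_pub, nonce, ct = ecies_encrypt(receiver_key.public_key(), b"sensor reading")
assert ecies_decrypt(receiver_key, eph_pub, nonce, ct) == b"sensor reading"
```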
R-RPT: A Reliable Routing Protocol for Industrial Wireless Sensor Networks
Abstract
Wireless Sensor Networks (WSNs) are extensively used to monitor and control physical environments. Effective energy management and reliability are key considerations in WSNs, and routing plays a crucial role in achieving these objectives. The Routing Protocol for Low Power and Lossy Networks (RPL) has been adopted in Low-power and Lossy Networks (LLNs) to connect Wireless Sensor Networks to the Internet of Things (IoT). Although RPL is widely used in IoT routing, it still faces substantial challenges, one of the most basic being routing reliability. In particular, RPL lacks a load balancing mechanism, which is essential for maximizing the lifetime of sensor nodes by preventing overloaded nodes and the congestion they can cause. To address this issue, we propose a new reliable routing protocol (R-RPT) to maximize the reliability of data collection in large-scale wireless sensor networks. R-RPT establishes multiple bidirectional routes between a sensor node and a root node and selects the parent node by evaluating various reliability-related criteria. In addition, R-RPT achieves load balancing efficiently by sending data packets via the route with the lighter workload. Simulation results obtained with the Cooja simulator demonstrate that the proposed R-RPT routing protocol outperforms existing routing protocols in terms of packet delivery ratio, routing packet overhead and end-to-end packet delay.
Kripanita Roy, Myung-Kyun Kim
Action Segmentation Based on Encoder-Decoder and Global Timing Information
Abstract
Action segmentation has made significant progress, but segmenting and recognizing actions in untrimmed long videos remains a challenging problem. Most state-of-the-art (SOTA) methods focus on designing models based on temporal convolution. However, the limitations of modeling long-term temporal dependencies and the inflexibility of temporal convolutions restrict the potential of these models. To address the over-segmentation issue in existing action segmentation algorithms, which leads to prediction errors and reduced segmentation quality, this paper proposes an action segmentation algorithm based on an Encoder-Decoder structure and global timing information. The algorithm uses global timing information captured by an LSTM to help the Encoder-Decoder structure judge action segmentation points more accurately and, at the same time, suppress the over-segmentation caused by the Encoder-Decoder structure. The proposed algorithm achieves 93% frame accuracy on a purpose-built real Taiji action data set. The experimental results prove that this model can accurately and efficiently complete long-video action segmentation tasks.
Yichao Liu, Yiyang Sun, Zhide Chen, Chen Feng, Kexin Zhu
Security Challenges and Lightweight Cryptography in IoT: Comparative Study and Testing Method for PRESENT-32bit Cipher
Abstract
The Internet of Things (IoT) stands out as one of the most remarkable innovations in recent times, offering a promising future for global connectivity. However, the rapid expansion of IoT ecosystems has led to a significant increase in the attack surface, posing risks to platforms, computing systems, multifunction protocols, and network access ubiquity. To mitigate these risks, it is crucial to adopt secure system design and development practices. Popular security solutions such as data encryption and authentication have been widely employed in IoT systems. Nonetheless, the unique constraints of IoT platforms present challenges in selecting suitable algorithms. In this paper, we provide an overview and analysis of the security challenges in IoT along with potential solutions. Additionally, we propose a testing methodology for the PRESENT-32bit cipher, based on an analysis of prevalent lightweight cryptography techniques. Our implementation results demonstrate the advantages of this approach.
Van Nam Ngo, Anh Ngoc Le, Do-Hyeun Kim
The Prediction Model of Water Level in Front of the Check Gate of the LSTM Neural Network Based on AIW-CLPSO
Abstract
The water level in front of the check gates of water transfer projects is affected by physical factors such as rainfall, terrain and hydraulic structures. Its fluctuation has strongly non-linear and stochastic characteristics, making it difficult to predict accurately and efficiently with hydrodynamic models. To predict the water level in front of a check gate, a long short-term memory (LSTM) neural network based on the adaptive-inertia-weight comprehensive learning particle swarm optimization algorithm (AIW-CLPSO) is proposed. AIW and CLPSO are adopted to improve the global optimization ability and convergence speed of PSO in the proposed model. Applied to water level prediction in front of the Chaohu Lake check gate, the proposed model obtains the optimal parameters of the LSTM neural network, overcoming the limitations of difficult parameter selection and inaccurate prediction.
Linqing Gao, Dengzhe Ha, Litao Ma, Jiqiang Chen
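The PSO side can be illustrated with a toy swarm tuning two assumed LSTM hyper-parameters. The linearly decreasing inertia weight below is a simple stand-in for the paper's adaptive inertia weight (AIW), and the fitness function is a placeholder for validation error:

```python
import numpy as np

def fitness(p):
    """Placeholder objective; in practice: LSTM validation error for
    a candidate (learning_rate, hidden_units) pair."""
    return (p[0] - 0.01) ** 2 + (p[1] - 64) ** 2 / 1e4

rng = np.random.default_rng(0)
n, dims, iters = 20, 2, 50
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0          # assumed PSO constants

pos = rng.uniform([1e-4, 8], [0.1, 256], size=(n, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.apply_along_axis(fitness, 1, pos)
gbest = pbest[pbest_val.argmin()].copy()

for t in range(iters):
    w = w_max - (w_max - w_min) * t / iters        # inertia decays over time
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.apply_along_axis(fitness, 1, pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)   # best (learning_rate, hidden_units) found by the swarm
```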
Backmatter
Metadata
Title
Parallel and Distributed Computing, Applications and Technologies
Edited by
Ji Su Park
Hiroyuki Takizawa
Hong Shen
James J. Park
Copyright year
2024
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-9982-11-0
Print ISBN
978-981-9982-10-3
DOI
https://doi.org/10.1007/978-981-99-8211-0
