
2024 | Book

Green, Pervasive, and Cloud Computing

18th International Conference, GPC 2023, Harbin, China, September 22–24, 2023, Proceedings, Part I

Edited by: Hai Jin, Zhiwen Yu, Chen Yu, Xiaokang Zhou, Zeguang Lu, Xianhua Song

Publisher: Springer Nature Singapore

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 18th International Conference on Green, Pervasive, and Cloud Computing, GPC 2023, held in Harbin, China, during September 23–24, 2023.
The 38 full papers and 1 short paper included in this book were carefully reviewed and selected from 111 submissions. They are organized in the following topical sections: Industrial Digitization and Applications, Edge Intelligence, Mobile Sensing and Computing, Cyber-Physical-Social Systems, Pervasive and Green Computing, and Wireless and Ubiquitous Networking.

Table of Contents

Frontmatter

Industrial Digitization and Applications

Frontmatter
UEBCS: Software Development Technology Based on Component Selection
Abstract
Software development has moved away from the traditional manual-workshop model and now emphasizes improving software product quality. To reduce repetitive work, software reuse techniques can be adopted to continually improve the quality and efficiency of development. Software reuse primarily means reusing existing software knowledge during the development process, which lowers maintenance costs and helps control overall development expenses. Software components are an effective vehicle for software product reuse and the core technology for enabling it. Component-based software engineering designs and constructs programs from reusable software “components”, assembling them within a software architecture to achieve reuse and improve the quality and productivity of software products. However, selecting the most suitable components from retrieval results requires understanding how each retrieved component is used, and existing methods rely too heavily on manual work and suffer from errors caused by inter-component relationships. This paper therefore proposes UEBCS (Usage Example-Based Component Selection), a component selection technique that uses clustering analysis and hierarchical classification to select optimal components. UEBCS shows excellent results in both efficiency and accuracy. The method provides technical support for software developers during development and has significant practical value for improving software development quality and efficiency and for advancing the software industry.
Yingnan Zhao, Xuezhao Qi, Jian Li, Dan Lu
A Data Security Protection Method for Deep Neural Network Model Based on Mobility and Sharing
Abstract
With the rapid development of the digital economy, numerous business scenarios, such as smart grids, energy networks, and intelligent transportation, require the design and distribution of deep neural network (DNN) models, which typically consume large amounts of data and computing resources during training. DNN models are therefore now considered important assets, but they are at great risk of being stolen and distributed illegally. In response to the new risks and challenges brought by frequent data flow and sharing, watermarks have been introduced to protect the ownership of DNN models: a watermark can be extracted in a relatively simple manner to declare ownership of a model. However, watermarks are vulnerable to attacks. In this work, we propose a novel label-based black-box watermarking algorithm for model protection. Inspired by new labels, we embed watermarks by adding new labels to the model, and to prevent watermarks from being forged, we use encryption algorithms to generate key samples. We conduct experiments on the VGG19 model using the CIFAR-10 dataset. Experimental results show that the method is robust to fine-tuning and pruning attacks. Furthermore, our method does not affect the performance of the deep neural network model. This method helps address scenarios such as “internal data circulation, external data sharing and multi-party data collaboration”.
Xinjian Zhao, Qianmu Li, Qi Wang, Shi Chen, Tengfei Li, Nianzhe Li
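The abstract does not specify the encryption scheme used to generate key samples. Below is a minimal, hypothetical Python sketch of one way such samples could be derived: an HMAC of an owner secret seeds a deterministic trigger pattern that is stamped onto a few images, which are then relabeled with the new watermark class. The function name, the HMAC choice, and the pattern design are all assumptions, not the authors' method.

```python
import hashlib
import hmac
import numpy as np

def make_key_samples(images, owner_key: bytes, new_label: int, n: int = 100):
    """Hypothetical sketch: derive a trigger pattern from the owner's secret
    key and stamp it onto a few training images, which are then re-labeled
    with the extra watermark class."""
    # Seed an RNG from an HMAC so only the key holder can regenerate the pattern.
    seed = int.from_bytes(
        hmac.new(owner_key, b"dnn-watermark", hashlib.sha256).digest()[:4], "big"
    )
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n, replace=False)
    pattern = rng.integers(0, 32, size=images.shape[1:], dtype=np.uint8)  # faint noise
    keyed = np.clip(images[idx].astype(np.int16) + pattern, 0, 255).astype(np.uint8)
    labels = np.full(n, new_label)  # every key sample maps to the new label
    return keyed, labels
```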
A Method for Small Object Contamination Detection of Lentinula Edodes Logs Integrating SPD-Conv and Structural Reparameterization
Abstract
A small object contamination detection method (SRW-YOLO) integrating SPD-Conv and structural reparameterization is proposed to address the difficulty of detecting small contaminated areas on Lentinula edodes logs. First, SPD (space-to-depth)-Conv is used to improve the MP module, enhancing the learning of effective features in Lentinula edodes log images and preventing the loss of small-object contamination information. Meanwhile, RepVGG is introduced into the ELAN structure to improve the efficiency and accuracy of inference on the contaminated regions through structural reparameterization. Finally, the boundary regression loss is replaced with the WIoU (Wise-IoU) loss function, which focuses more on ordinary-quality anchor boxes and makes the model output more accurate. In this study, Precision, Recall, and mAP reached 97.63%, 96.43%, and 98.62%, respectively, which are 4.62%, 3.63%, and 2.31% higher than those of YOLOv7. The SRW-YOLO model also detects better than current advanced one-stage object detection models.
Qiulan Wu, Xuefei Chen, Suya Shang, Feng Zhang, Wenhui Tan
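SPD-Conv itself is a published building block: a space-to-depth rearrangement followed by a convolution, so downsampling discards no pixel information. A minimal PyTorch sketch follows; the layer sizes are illustrative, not the SRW-YOLO configuration.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth convolution: move each 2x2 spatial block into the
    channel dimension (nothing is discarded, unlike strided convolution
    or pooling), then fuse with a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(4 * in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split H and W into even/odd grids and stack the four views on channels.
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)

# Halves spatial resolution while keeping every pixel: (1, 3, 64, 64) -> (1, 32, 32, 32)
y = SPDConv(3, 32)(torch.randn(1, 3, 64, 64))
```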
A Study of Sketch Drawing Process Comparation with Different Painting Experience via Eye Movements Analysis
Abstract
In situations with a language barrier, sketching is an efficient way to keep a conversation going. However, not everyone has a foundation in drawing: communicating through sketches requires both basic sketching skill and the ability to sketch within a short period of time. We therefore designed and applied a set of experiments focusing on subjects' eye movement data as they sketched imagined object shapes, and compared the differences in sketching between experienced painters and novices. Specifically, we invited 16 subjects to sketch objects (e.g., a watch) on a canvas while their eye movement data was collected by a glasses-type eye tracker. Analysis with the Mann-Whitney U test showed that novices' gaze skewed to the left and down overall, their gaze was scattered, and their fixation on the central position of the picture was neither focused nor sustained. In contrast, experienced painters placed the sketch content in the center of the picture to attract the viewer's attention and effectively convey the picture's important information. The results combine technology with art; they can help novices quickly improve the efficiency and quality of their sketches and support a wider range of art research and applications.
Jun Wang, Kiminori Sato, Bo Wu
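For readers unfamiliar with the Mann-Whitney U test used here, a minimal SciPy sketch on hypothetical normalized fixation coordinates; the numbers are invented for illustration, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical horizontal fixation positions (0 = left edge, 1 = right edge).
novice_x = [0.31, 0.28, 0.44, 0.35, 0.30, 0.39, 0.26, 0.33]
expert_x = [0.52, 0.49, 0.55, 0.47, 0.51, 0.58, 0.46, 0.50]

# Nonparametric two-sided test: do the two fixation distributions differ?
stat, p = mannwhitneyu(novice_x, expert_x, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # p < 0.05 suggests the groups fixate differently
```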
Review of Deep Learning-Based Entity Alignment Methods
Abstract
Entity alignment aims to discover different references to the same entity across different graphs and is a key technique for solving graph-related problems. It has developed into one of the important tasks on knowledge graphs and has received extensive attention from scholars in recent years. Through entity alignment, data from multiple isolated knowledge graphs with different sources and modes can be summarized and classified, forming a more information-rich knowledge base. In early research on entity alignment, researchers proposed a class of alignment methods based on knowledge representation learning and verified that these methods significantly improve on traditional ones. However, entity alignment still faces many defects and challenges: a lack of scalability; differences in language and in relationship-type definitions across knowledge graphs from different sources; and the need to improve the quality of large-scale knowledge graph data and to optimize computational efficiency. It is therefore difficult to align entities across multiple knowledge graphs through simple translation and transformation. Taking the definition of entity alignment and common data sets as its frame of reference, this paper discusses the deep learning-based methods that have emerged in the field, summarizes their shortcomings and limitations, and introduces the data sets commonly used in the entity alignment task.
Dan Lu, Guoyu Han, Yingnan Zhao, Qilong Han
VMD-AC-LSTM: An Accurate Prediction Method for Solar Irradiance
Abstract
Solar power has become one of the most promising new power generation methods. However, electricity cannot be stored directly and solar power is highly volatile, so accurate short-term prediction of solar irradiance is of great significance for the stable operation of the power grid. This work presents a novel decomposition-integrated deep learning model, VMD-AC-BiLSTM, for ultra-short-term prediction of solar irradiance. The model organically combines Variational Mode Decomposition (VMD), a multi-head self-attention mechanism, a one-dimensional convolutional neural network (1D-CNN), and a bidirectional long short-term memory network (BiLSTM). First, the historical data are decomposed into several modal components by VMD, and these components are divided into stochastic and trend sets according to their frequency ranges. The stochastic and periodic parts of solar irradiance are then predicted by two different modules, and the results of the two modules are integrated at the end of the model. The model also considers the complex effects of cloud type and solar zenith angle on the stochasticity and periodicity of the irradiance data, respectively. Experimental results show that the model produces relatively accurate solar irradiance predictions under different evaluation criteria and has higher prediction accuracy and robustness than other deep learning models.
Jianwei Wang, Ke Yan, Xiang Ma
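A minimal sketch of the VMD decomposition step, assuming the third-party vmdpy package; the mode count, bandwidth parameter, and the trend/stochastic split rule are illustrative guesses, not the paper's settings.

```python
import numpy as np
from vmdpy import VMD  # third-party package: pip install vmdpy

# Synthetic "irradiance" series: slow trend + fast fluctuation + noise.
t = np.linspace(0, 1, 1000)
series = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 40 * t) \
         + 0.1 * np.random.randn(1000)

K, alpha = 5, 2000  # number of modes, bandwidth constraint (illustrative)
u, u_hat, omega = VMD(series, alpha, 0.0, K, 0, 1, 1e-7)  # tau=0, DC=0, init=1, tol

# Split modes into trend vs. stochastic sets by final center frequency.
center_freq = omega[-1]  # one center frequency per mode
trend = u[center_freq < np.median(center_freq)].sum(axis=0)
stochastic = u[center_freq >= np.median(center_freq)].sum(axis=0)
```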
Anomaly Detection of Industrial Data Based on Multivariate Multi Scale Analysis
Abstract
Anomaly detection is a crucial facet of data quality assurance. Existing anomaly detection algorithms have made significant strides, including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and models based on Generative Adversarial Networks (GANs). However, they inadequately consider the interdependencies and correlations inherent in multidimensional time-series data, a gap that becomes particularly pronounced as industrial data grows in complexity. To address it, a novel hybrid model is introduced that combines GRU with graph structure learning and graph neural networks. The model uses graph structure learning to uncover dependencies linking data points across spatial dimensions, while GRU extracts the temporal correlations within each dimension. Through graph attention networks, the model predicts data from this dual correlation perspective, and discrepancies between predicted values and ground truth are used to gauge errors. The combination of prediction and scoring mechanisms enhances the model's versatility. Empirical validation on two real sensor datasets demonstrates the superior efficacy of this approach over alternative methods, with a notable improvement in recall, underscoring its potency in identifying anomalies.
Dan Lu, Siao Li, Yingnan Zhao, Qilong Han
Research on Script-Based Software Component Development
Abstract
As software demand proliferates and software size and complexity increase, traditional software development models face enormous challenges, and new development techniques are being explored to meet current requirements. Based on many years of experience with object-oriented and component-oriented design methods, we propose an innovative four-factor scripted analysis and design method for state transfer, applying compiler principles to finite state machine (FSM) circuit-module automation. It provides a methodological and empirical reference for programmers designing and developing application systems with dynamic languages.
Yingnan Zhao, Yang Sun, Bin Fan, Dan Lu
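The abstract does not show the script format. Below is a minimal, hypothetical Python sketch of a scripted FSM, assuming the "four factors" are (state, event, action, next state); that reading is an interpretation for illustration, not the authors' definition.

```python
# Hypothetical four-factor transition script: (state, event, action, next_state).
TRANSITIONS = [
    ("idle",    "start",  "open_session",  "running"),
    ("running", "pause",  "save_context",  "paused"),
    ("paused",  "resume", "load_context",  "running"),
    ("running", "stop",   "close_session", "idle"),
]

def step(state: str, event: str) -> str:
    """Advance the FSM: find the matching transition, run its action, move on."""
    for s, e, action, nxt in TRANSITIONS:
        if s == state and e == event:
            print(f"executing {action}")  # dispatch to the real handler here
            return nxt
    raise ValueError(f"no transition for ({state}, {event})")

state = "idle"
for ev in ["start", "pause", "resume", "stop"]:
    state = step(state, ev)
```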
Integration Model of Deep Forgery Video Detection Based on rPPG and Spatiotemporal Signal
Abstract
With the development of deep learning, video forgery technology is becoming more and more mature, which brings security risks; further development of forgery detection is urgently needed. Most existing forgery detection techniques are based on artifacts and detail features, which are strongly affected by resolution, and their generalization ability needs improvement. In this paper, a multi-modal fusion forgery detection architecture based on the biological and spatio-temporal signals inherent in videos is proposed. In the detection process, the model first recognizes the face in the video. It then performs video frame extraction and green-channel-based rPPG signal extraction, and these two inputs are fed into 3D and 2D convolutional neural networks, respectively, to train the base learners. Finally, the integration model is constructed with a stacking strategy. Extensive experiments show that the fusion model copes well with low-resolution cases and generalizes well, achieving 93.38% and 91.57% accuracy on the FF++ c23 and Celeb-DF-v2 datasets, respectively.
Lujia Yang, Wenye Shu, Yongjia Wang, Zhichao Lian
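A minimal OpenCV sketch of the green-channel rPPG extraction step described above. For brevity it assumes a fixed face region; a real pipeline would use the face box found in the recognition step.

```python
import cv2
import numpy as np

def green_channel_signal(video_path: str, face_box=(100, 100, 200, 200)):
    """Average the green channel inside a (fixed, hypothetical) face region
    frame by frame; the blood-volume pulse modulates this trace."""
    x, y, w, h = face_box
    cap = cv2.VideoCapture(video_path)
    trace = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        trace.append(roi[:, :, 1].mean())  # OpenCV is BGR; index 1 = green
    cap.release()
    sig = np.asarray(trace)
    return (sig - sig.mean()) / (sig.std() + 1e-8)  # normalized rPPG trace
```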
A Design of Hybrid Transactional and Analytical Processing Database for Energy Efficient Big Data Queries
Abstract
With the prominent development of cloud computing and pervasive computing, huge volumes of big data accumulate in an ever-increasing manner, and processing them in an energy-efficient way is a popular topic in both industry and academia. In this work, we discuss how to implement a hybrid transactional and analytical processing database that provides energy-efficient big data processing. More specifically, the PostgreSQL (PG) database is an excellent solution for handling Online Transactional Processing (OLTP) workloads. For an OLTP database to process Online Analytical Processing (OLAP) queries, the traditional solution is to dump the data from PG to an OLAP database such as Greenplum for further analysis. Such a solution faces the challenges of extra energy consumption, data islands, and data inconsistency, to name a few. Hybrid Transactional and Analytical Processing (HTAP) systems, on the other hand, run both transactional and analytical workloads on the same database and have attracted great attention recently. We propose an HTAP database design that enhances highly available OLTP PG clusters to support OLAP workloads via a massively parallel processing (MPP) architecture. In our MPP PG cluster, the data is not split: each PG server maintains an identical replica of the whole data set. To speed up execution, we divide the data into multiple virtual parts, and each PG server scans only its pre-assigned partition. Experiments on the public TPC-H dataset evaluate the feasibility of our proposal.
Wenmin Lin
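A minimal sketch of the virtual-partition idea: every server holds a full replica, but server i scans only the rows whose hash falls in its slice, and a coordinator unions the partial results. hashtext is a PostgreSQL built-in hash function, but the table, column, and slicing scheme here are illustrative, not the paper's design.

```python
# Generate the per-server slice of the same scan over a fully replicated table.
SERVERS = 4

def partition_scan_sql(server_id: int, table: str = "lineitem") -> str:
    # Each server filters to its own virtual partition of the replica.
    return (
        f"SELECT * FROM {table} "
        f"WHERE abs(hashtext(l_orderkey::text)) % {SERVERS} = {server_id};"
    )

# The coordinator would run one slice per server and merge the results.
for sid in range(SERVERS):
    print(partition_scan_sql(sid))
```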
Chinese Medical Named Entity Recognition Based on Pre-training Model
Abstract
The Named Entity Recognition (NER) task aims to identify named entities in unstructured text and classify them into entity types. Existing pre-training approaches typically use BERT to learn character-level word embeddings, disregarding the semantic relationships between phrases and paying little attention to long-distance dependencies within sentences. Additionally, the datasets suffer from small scale, lack of standardization, and annotation errors, all of which hurt model robustness. This paper therefore proposes a Chinese medical named entity recognition model based on RoBERTa (A Robustly Optimized BERT Pre-training Approach), adversarial training, and hybrid encoding layers to enhance semantic understanding and model generalization. The proposed model is evaluated on three real clinical datasets, and experimental results demonstrate significant performance improvements over the baseline models; the advantages of the approach are further analyzed through experiments.
Fang Dong, Shaowu Yang, Cheng Zeng, Yong Zhang, Dianxi Shi
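A minimal Hugging Face sketch of RoBERTa-based token classification for Chinese text. The checkpoint shown is a public Chinese RoBERTa, not the paper's model, and the classification head here is freshly initialized (untrained); the label count is an assumed BIO scheme.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

name = "hfl/chinese-roberta-wwm-ext"  # illustrative public checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=7)

text = "患者自述头痛三天，服用布洛芬后缓解。"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # (1, seq_len, num_labels)
pred = logits.argmax(-1)[0]                 # one BIO tag id per token
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred.tolist())))     # would be meaningful after fine-tuning
```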
A Function Fitting System Based on Genetic Algorithm
Abstract
With the development of science and technology, function fitting has penetrated various fields of scientific research and technological innovation. For fitting and analyzing a given function image, no public software on the market applies the idea of genetic algorithms to the function fitting problem. To make up for this gap and seize the opportunity for function fitting across industries, a function fitting system based on a genetic algorithm is proposed: a low-threshold piece of software with a wide range of applications. The genetic algorithm drives the fitting and analysis of a given function image, and ray detection in Unity is applied to obtain the closest-fitting expression. The system also plots a line graph of the fitness decline across genetic algorithm iterations.
Qiuhong Sun, Jiaqi Wang, Xiaokang Zhou
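A minimal sketch of genetic-algorithm function fitting: evolve coefficient vectors against a sampled target curve. The quadratic encoding and GA settings are illustrative, not the system's actual design.

```python
import numpy as np

# Target curve sampled from a "given function image" (synthesized here).
xs = np.linspace(-1, 1, 100)
target = 3 * xs**2 + 2 * xs - 1

def fitness(coeffs):
    """Mean squared error of a candidate quadratic a*x^2 + b*x + c (lower = better)."""
    a, b, c = coeffs
    return np.mean((a * xs**2 + b * xs + c - target) ** 2)

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(50, 3))             # 50 candidate coefficient triples
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:10]]         # selection: keep the 10 fittest
    children = parents[rng.integers(0, 10, 40)] + rng.normal(0, 0.1, (40, 3))  # mutation
    pop = np.vstack([parents, children])
print("best coefficients:", pop[np.argmin([fitness(ind) for ind in pop])])
```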
A Rumor Detection Model Fused with User Feature Information
Abstract
With the rapid development of artificial intelligence technology, people communicate more frequently; alongside this convenience, the creation and spread of rumors has also been aggravated. Rumor detection on social platforms has therefore become an important direction of current research. From the perspective of user characteristics, this paper uses deep learning to mine the trends of user features related to rumor events and designs a rumor detection model, UFIM (User Feature Information Model). First, a feature enhancement function recalculates the user feature vector to obtain a new vector representing the user's comprehensive features under the current event. Then, GRU and CNN models learn the global and local changes of user features as the event develops, and user and time information are used to learn the hidden rumor features that emerge as a rumor spreads. Experimental results show that UFIM improves performance over the baseline models and can effectively perform the rumor detection task.
Wenqian Shang, Kang Song, Yong Zhang, Tong Yi, Xuan Wang
Design and Implementation of a Green Credit Risk Control Model Based on SecureBoost and Improved-TCA Algorithm
Abstract
Green credit plays a crucial role in promoting the green transformation of enterprises and advancing sustainable social development. However, the current green credit rating disclosure system lacks data sharing between institutions, leading to inconsistent evaluation results. To address this issue, this study proposes a green credit risk control model based on SecureBoost and an Improved-TCA algorithm. The model combines vertical federated learning with feature transfer to protect the privacy of participants across datasets, and the vertical federated learning results are analyzed with SHAP values. We propose an improved TCA that combines the BDA algorithm with the TCA algorithm, setting different weight ratios to integrate the advantages of both and to handle applications in which sample distributions differ significantly in size across data sets. We show that the improved TCA algorithm combined with SecureBoost yields better predictions in the multi-class credit evaluation scenario.
Maoguang Wang, Jiaqi Yan, Yuxiao Chen
Unsupervised Concept Drift Detection Based on Stacked Autoencoder and Page-Hinckley Test
Abstract
Data streams are often subject to concept drift, which can gradually reduce the reliability of learning models over time in data stream mining. To maintain model accuracy and enhance its robustness, it is crucial to detect concept drift and update the learning model accordingly. The majority of drift detection methods rely on the assumption that true labels are immediately available, which is challenging to implement in real-world scenarios. Therefore, it is more practicable to detect concept drift in an unsupervised manner. This paper proposes an unsupervised Drift Detection method based on Stacked Autoencoder and Page-Hinckley test (DD-SAPH). DD-SAPH employs the stacked autoencoder as a medium to represent the distribution of historical data, which extracts hidden features from the reference window. To measure the difference between distributions of historical data and new data, the reconstruction error of the stacked autoencoder on the current window is employed. The Page-Hinckley test dynamically calculates thresholds to warn and alarm concept drift. Experimental results indicate that DD-SAPH outperforms the compared unsupervised algorithms when addressing concept drift on both synthetic and real datasets.
Shu Zhan, Yang Li, Chunyan Liu, Yunlong Zhao
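The Page-Hinckley test itself is standard; below is a minimal sketch applied to a stream of reconstruction errors. The delta and lambda values are illustrative fixed constants, whereas DD-SAPH computes its thresholds dynamically.

```python
import random

class PageHinckley:
    """Page-Hinckley test for an upward drift in a stream of reconstruction
    errors; delta is the tolerated deviation, lam the alarm threshold."""
    def __init__(self, delta: float = 0.005, lam: float = 50.0):
        self.delta, self.lam = delta, lam
        self.n, self.mean, self.cum, self.cum_min = 0, 0.0, 0.0, 0.0

    def update(self, x: float) -> bool:
        self.n += 1
        self.mean += (x - self.mean) / self.n       # running mean of errors
        self.cum += x - self.mean - self.delta      # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.lam  # True -> drift alarm

# Simulated errors: stable around 0.1, then a concept drift pushes them to 0.8.
stream = [random.gauss(0.1, 0.02) for _ in range(500)] + \
         [random.gauss(0.8, 0.02) for _ in range(100)]
ph = PageHinckley()
alarms = [t for t, e in enumerate(stream) if ph.update(e)]
print("first alarm at t =", alarms[0] if alarms else None)  # shortly after t = 500
```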
An Industrial Robot Path Planning Method Based on Improved Whale Optimization Algorithm
Abstract
With the development of technology, robots are being used ever more widely in various fields. Industrial robots must plan paths in the course of their tasks, yet there is still no simple and effective method for path planning in complex industrial scenarios. In this paper, an improved whale optimization algorithm is proposed to solve the robot path planning problem. The algorithm uses logistic chaotic mapping to initialize the population and enhance its initial diversity, and introduces a jumping mechanism that helps the population escape local optima and strengthens its global search capability. The proposed algorithm is tested on 12 complex test functions, and the experimental results show that it achieves the best results on several of them. Applied to a path planning problem, the algorithm helps the robot plan correct and efficient paths.
Peixin Huang, Chen Dong, Zhenyi Chen, Zihang Zhen, Lei Jiang
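The logistic chaotic map used for initialization is the classic recurrence x_{k+1} = 4·x_k·(1 − x_k). A minimal sketch follows; the population size, dimension, and bounds are illustrative, not the paper's settings.

```python
import numpy as np

def logistic_init(pop_size: int, dim: int, lower: float, upper: float,
                  x0: float = 0.7):
    """Population initialization via the logistic chaotic map
    x_{k+1} = 4*x_k*(1 - x_k), which spreads individuals over the search
    space more evenly than plain uniform sampling."""
    x = x0  # any start in (0, 1) that avoids the map's fixed points
    chaos = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)   # fully chaotic regime (r = 4)
            chaos[i, j] = x
    return lower + chaos * (upper - lower)  # map [0,1] chaos onto the bounds

population = logistic_init(pop_size=30, dim=12, lower=-100.0, upper=100.0)
```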
Intrusion Detection System Based on Adversarial Domain Adaptation Algorithm
Abstract
With the explosive growth of the Internet, massive high-dimensional data and multiple attack types pose greater challenges for intrusion detection systems. In practical scenarios, the amount of abnormal data is small, intrusion detection systems cannot be quickly migrated between scenarios, and a specific system must be trained for each scenario, which greatly wastes manpower and material resources. Therefore, exploiting the hierarchical characteristics of network data streams, this paper uses CNN and RNN networks to extract their spatiotemporal features and then inputs them into a GAN for unsupervised learning. Since long short-term memory recurrent neural networks (LSTM-RNNs) have been shown to capture information and learn complex time series by remembering backward (and even forward) time steps of cells, this paper replaces the generator and discriminator of the GAN with LSTM-RNNs. Anomaly detection is then performed based on residual loss and discrimination loss. Finally, a deep domain adaptation algorithm maps the target domain to the source domain, optimizes the domain confusion loss through adversarial training, and extracts features invariant across the target and source domains.
Jiahui Fei, Yunpeng Sun, Yuejin Wang, Zhichao Lian
Small-Sample Coal-Rock Recognition Model Based on MFSC and Siamese Neural Network
Abstract
Given its advantages in feature extraction and learning ability, deep learning has been used for coal-rock recognition. Deep learning techniques rely on large numbers of independent, identically distributed samples; however, the complexity and variability of coal-rock deposit states give the dataset small-sample characteristics, resulting in poor model performance. To address this problem, this paper proposes MFSC-Siamese, a framework that combines the advantages of log Mel-Frequency Spectral Coefficients (MFSC) and a Siamese neural network. First, MFSC extracts vibration signal features while preserving as much of the original signal's information as possible, making the extraction of vibration features more accurate. Second, a recognition model based on a Siamese neural network reduces the number of parameters by sharing network branches; it performs coal-rock recognition by learning the distance between sample features, pulling similar samples closer and pushing dissimilar samples apart. To evaluate the proposed method, comparative experiments were run on a real vibration signal dataset. The results show better generalization performance and efficiency, with accuracy up to 98.41%, which is of great significance for the construction of intelligent mines.
Guangshuo Li, Lingling Cui, Yue Song, Xiaoxia Chen, Lingxiao Zheng
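A minimal sketch of the distance-based objective described above, using the standard contrastive loss for Siamese embeddings; the paper's exact loss formulation may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                     same: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pull embeddings of same-class pairs (coal/coal, rock/rock) together;
    push different-class pairs at least `margin` apart."""
    d = F.pairwise_distance(z1, z2)                  # Euclidean distance per pair
    pos = same * d.pow(2)                            # similar pairs: shrink distance
    neg = (1 - same) * F.relu(margin - d).pow(2)     # dissimilar pairs: enforce margin
    return (pos + neg).mean()

# Toy usage: embeddings produced by the shared branch of the Siamese network.
z1, z2 = torch.randn(8, 64), torch.randn(8, 64)
same = torch.randint(0, 2, (8,)).float()             # 1 = same class, 0 = different
print(contrastive_loss(z1, z2, same))
```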
Elemental Attention Mechanism-Guided Progressive Rain Removal Algorithm
Abstract
Rain removal (deraining) has become a pre-processing task for most computer vision systems, and combining recursive ideas with deraining models is currently popular. In this paper, the EAPRN model is proposed by introducing an elemental attention mechanism into a progressive residual network. Elemental attention consists mainly of spatial attention and channel attention, which weight the feature maps in the spatial and channel dimensions and combine into elemental attention features. Introducing elemental attention helps the model fit the rain removal task, select important network layers, and process the rainy image. Experiments show that EAPRN achieves better visual results on different datasets and further improves the quality of the derained image.
Xingzhi Chen, Ruiqiang Ma, Shanjun Zhang, Xiaokang Zhou
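A minimal PyTorch sketch of combined channel and spatial attention in the CBAM style; the paper's exact "elemental" fusion may differ, and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class ElementalAttention(nn.Module):
    """Channel attention followed by spatial attention over a feature map."""
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        self.channel = nn.Sequential(                 # squeeze-and-excite style
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(), nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(                 # 7x7 conv over pooled maps
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel(x)                       # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)               # reweight spatial positions

y = ElementalAttention(64)(torch.randn(1, 64, 32, 32))  # shape is preserved
```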
Backmatter
Metadata
Title
Green, Pervasive, and Cloud Computing
Edited by
Hai Jin
Zhiwen Yu
Chen Yu
Xiaokang Zhou
Zeguang Lu
Xianhua Song
Copyright year
2024
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-9998-93-7
Print ISBN
978-981-9998-92-0
DOI
https://doi.org/10.1007/978-981-99-9893-7