
Advances on P2P, Parallel, Grid, Cloud and Internet Computing

The 20th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC-2025). Online Conference

  • 2026
  • Book

About this book

P2P, Grid, Cloud and Internet computing technologies have quickly established themselves as breakthrough paradigms for solving complex problems by enabling the aggregation and sharing of an increasing variety of distributed computational resources at large scale.

Grid Computing originated as a paradigm for high-performance computing, offering an alternative to expensive supercomputers through different forms of large-scale distributed computing. P2P Computing emerged as a new paradigm after client-server and web-based computing and has proven useful for the development of social networking, B2B (Business to Business), B2C (Business to Consumer), B2G (Business to Government), B2E (Business to Employee), and so on. Parallel Computing is an essential computational paradigm for solving complicated problems quickly: it divides a scientific computing problem into several small computing tasks and runs these tasks concurrently, utilizing parallel hardware and overcoming memory constraints. Parallel computing is an important part of the Cloud environment; however, there are significant differences between cloud computing and parallel computing. Cloud Computing has been defined as a “computing paradigm where the boundaries of computing are determined by economic rationale rather than technical limits”. It has quickly become the computing paradigm of choice, with applicability and adoption across all application domains, providing utility computing at large scale. Finally, Internet Computing is the basis of all large-scale distributed computing paradigms; it has rapidly developed into a vast and flourishing field with enormous impact on today’s information societies, serving as a universal platform comprising a large variety of computing forms such as Grid, P2P, Cloud and Mobile computing.

The aim of the volume “Advances on P2P, Parallel, Grid, Cloud and Internet Computing” is to provide the latest research findings, innovative research results, methods and development techniques, from both theoretical and practical perspectives, related to P2P, Grid, Cloud and Internet computing, as well as to reveal synergies among these large-scale computing paradigms.

Table of Contents

Frontmatter
Relation Extraction of Traditional Chinese Medicine Patents Based on Large Language Model and Diversified Semantic Interaction
Abstract
Traditional Chinese medicine (TCM) patents, as significant achievements in TCM technological innovation, contain rich semantic information and complex structures. The effective extraction of entity relations in TCM patents is of great importance for the efficient utilization of TCM knowledge. To effectively address the issues of complex entity relations, diverse relation semantics, and insufficient semantic interaction in TCM patent texts, this paper proposes an improved model based on a large language model and a diversified semantic interaction strategy to accurately extract entity relations such as TCM preparation and pharmacological mechanism from TCM patent texts. The model uses the Qwen3-Embedding-8B large language model as an encoder, providing deep representations of subject-object entities and relations through deep semantic modeling. It also designs a dual-channel cross-attention mechanism to precisely capture and enhance the modeling of associations between entities and relations through diversified semantic interaction. Additionally, an adversarial learning strategy is introduced to solve noise and long-tail data distribution issues in TCM patent texts. The experimental results show that the proposed model surpasses current benchmarks when applied to TCM patent texts. It also aids in the construction of TCM knowledge databases and contributes to research and development decision-making.
Wenjun Dan, Lihui Bai, Na Deng, Xu-an Wang, Zhuoqun Yu
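The dual-channel cross-attention mechanism described in the abstract above builds on standard scaled dot-product cross-attention, in which one channel’s representations (e.g. entities) attend over the other’s (e.g. relations). The following is a rough, illustrative NumPy sketch of the generic mechanism, not the paper’s exact dual-channel design:

```python
import numpy as np

def cross_attention(Q, K, V):
    """Scaled dot-product cross-attention: queries Q from one channel
    attend over keys K / values V from the other channel."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # weighted mix of values
```

In a dual-channel setup, this would be applied twice: once with entity representations as queries over relation keys/values, and once in the opposite direction.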
Semantic Multi-agent Framework for Automated Cloud-Edge Pattern Discovery and Composition
Abstract
The evolution of the Cloud-Edge paradigm is transforming distributed applications, enabling advanced features through heterogeneous resources along the continuum. However, orchestration across platforms remains challenging due to heterogeneity and lack of standardization. This work presents a Semantic Multi-Agent framework for the automated discovery and composition of Cloud-Edge patterns, addressing gaps in existing solutions.
Alba Amato, Beniamino Di Martino
Reinforcement Learning-Based Autoscaling for Cost and Performance Optimization in Kubernetes Clusters
Abstract
Kubernetes’ built-in auto-scalers (HPA, VPA, KEDA) rely on fixed thresholds or simple metrics, which often fail to satisfy complex service-level objectives (SLOs) under dynamic cloud workloads. This paper proposes a Reinforcement Learning (RL)-based multidimensional auto-scaler that simultaneously performs horizontal and vertical scaling while optimizing both latency and cost objectives. Unlike prior work [1, 2] that focused primarily on microservices or single-dimension scaling, the proposed approach introduces a tunable reward formulation balancing SLO adherence and resource efficiency, enabling deployment across diverse workload types. A Kubernetes-native architecture was designed that integrates Prometheus metrics, a Multidimensional Pod Auto-scaler (MPA), and an RL agent trained using PPO and DDPG. The system was evaluated on industry-scale workloads, including the Spark TPC-DS benchmark (1 TB, 104 queries) and latency-sensitive microservices, deployed on a 9-node Amazon EKS cluster. Results show that this RL auto-scaler achieves up to 30% higher CPU utilization, 15–20% lower 90th percentile latency, and ~20% cost savings compared to HPA, VPA, and KEDA. Statistical analysis across 20 runs confirms these improvements are significant. Safe exploration strategies were also discussed, including bootstrapping with HPA to ensure SLA protection during early learning, and challenges of applying RL in production-scale Kubernetes clusters. This study demonstrates that RL provides a practical and extensible path toward intelligent autoscaling for cloud-native applications, bridging the gap between academic proposals and enterprise FinOps practices.
Vaibhav Pandey
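The abstract above describes a tunable reward formulation balancing SLO adherence and resource efficiency. A minimal sketch of such a reward is shown below; the weights, budget, and function names are hypothetical illustrations, not the paper’s exact formulation:

```python
def scaling_reward(p90_latency_ms, cost_per_hour, slo_ms=250.0,
                   w_perf=0.7, w_cost=0.3, budget_per_hour=10.0):
    """Hypothetical tunable reward for an RL auto-scaler.

    The latency term is 1.0 while the 90th-percentile latency is under
    the SLO and decays as it is exceeded; the cost term rewards staying
    under a nominal hourly budget. w_perf/w_cost tune the trade-off.
    """
    perf = min(1.0, slo_ms / max(p90_latency_ms, 1e-9))
    cost = max(0.0, 1.0 - cost_per_hour / budget_per_hour)
    return w_perf * perf + w_cost * cost
```

Raising `w_perf` biases the agent toward latency-sensitive workloads; raising `w_cost` biases it toward batch workloads where cost dominates.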
A Methodology and Tool for Automatic Workload Distribution. A Case Study on Federated Learning
Abstract
This paper presents a novel methodology and tool for automatic workload distribution in federated learning environments, addressing the challenges of security and privacy in distributed machine learning systems. The proposed approach is designed to optimize the performance of federated learning by adapting workload distribution to the heterogeneous nature of edge devices. The methodology is implemented through a Jupyter notebook extension that facilitates the execution of federated learning tasks in a distributed computing context, leveraging Docker for containerization and an integrated skeleton-based compiler that handles parallelization tasks and environment configuration through decorators placed directly in the cells. The Jupyter Workflow kernel leverages the streamflow library to execute workflows in heterogeneous environments. The paper discusses the implementation details, privacy preservation mechanisms, and performance evaluation of the proposed solution, demonstrating its effectiveness in enhancing federated learning workflows.
Salvatore D’Angelo, Beniamino Di Martino, Pasquale Vassallo, Vito Alessandro Liccardo, Andrea Carollo, Giacomo Corridori, Gianmarco Spinatelli, Francesco Polzella
Performance Evaluation of an Intelligent System Based on a Cuckoo Search Algorithm for Mesh Router Optimization Considering Load Difference Metric and a Middle-Scale WMN
Abstract
Wireless Mesh Networks (WMNs) are recognized as a good solution for diverse networking applications due to their robustness and rapid deployment potential. Nevertheless, these networks face challenges such as congestion, interference, reduced throughput, packet loss, and increased latency. The optimal placement of mesh routers plays a pivotal role in mitigating these issues. However, identifying the optimal router locations is computationally intractable and is considered an NP-hard problem. To solve this problem, we propose and implement an intelligent simulation framework based on the Cuckoo Search (CS) algorithm, named WMN-CS. This study evaluates the performance of WMNs using the WMN-CS system, considering load balancing among mesh routers measured by the Load Difference (LD) metric and a middle-scale WMN. Simulation results demonstrate that the proposed system minimizes the LD while maintaining good network connectivity and client coverage.
Shinji Sakamoto, Admir Barolli, Yi Liu, Keita Matsuo, Leonard Barolli
A Comparative Study of SPX and psBLX Crossover Methods with RDVM and FC-RDVM Router Replacement Methods for Middle-Scale WMN Considering Two-Islands Mesh Client Distribution
Abstract
Router placement in Wireless Mesh Networks (WMNs) is an NP-hard problem that strongly impacts coverage and load distribution. This study extends the WMN-PSOHCDGA hybrid system by integrating two crossover methods, Simplex Crossover (SPX) and Parallelotope-Shaped Blend Crossover (psBLX), with two router replacement methods: Rational Decreasing Vmax Method (RDVM) and Fast Convergence Rational Decreasing Vmax Method (FC-RDVM). RDVM is established for stable coverage, while FC-RDVM accelerates convergence; thus, their comparison clarifies trade-offs in reliability and efficiency. Likewise, SPX supports broad exploration, whereas psBLX preserves spatial correlations, both relevant for WMN optimization. Simulations on a medium-scale WMN with 96 clients under a two-island distribution showed that RDVM achieved full coverage, while FC-RDVM provided superior load balancing performance.
Paboth Kraikritayakul, Admir Barolli, Shinji Sakamoto, Shunya Higashi, Phudit Ampririt, Leonard Barolli
Resilient Artificial Intelligence for Environmental Protection and Renewable Energy
Abstract
In the current environmental context, characterized by an ever-increasing focus on ecosystem protection and climate change, there is a need to develop new technologies capable of monitoring the environment and potential adverse scenarios. Resilient Artificial Intelligence (Resilient AI) is a new technology useful for ensuring the operational continuity, accuracy, and safety of critical systems. It enables detecting anomalies, predicting extreme events, and making timely decisions to support environmental protection. Resilient AI promotes the adoption of renewable energy sources, such as photovoltaics, by integrating them into smart grids, improving their efficiency and stability, reducing the risk of outages, minimizing false alarms, and supporting the energy transition towards more sustainable, reliable, and safe systems. The goal of this paper is to provide a comprehensive overview of current and future developments in Resilient AI, highlighting its potential, key challenges, and application prospects in various sectors. Through targeted simulations, the vulnerabilities of AI systems deployed in critical infrastructures essential to daily life (e.g., electricity grids, water, transportation, and telecommunications) will be analyzed, and strategies to improve their security and operational robustness will be evaluated.
Egidia Cirillo, Alessandro Del Prete, Zahida Mashaallah, Alberto Moccardi
Earthquake Simulation System Using Extended Reality Technology
Abstract
This paper proposes a realistic earthquake simulation system that employs mixed reality and virtual reality technologies to simulate an earthquake occurring in the user’s personal living environment. To do so, the proposed system scans the real-world environment to recognize the size of the room and the furniture arrangement, and then overlays virtual objects onto the real space. This eliminates the need to prepare a virtual space in advance, thereby enabling an effective earthquake simulation in the user’s personal living environment. The proposed system is expected to improve the disaster prevention awareness of residents in Japan, where natural disasters occur frequently every year.
Tomoyuki Ishida, Haruki Yamamoto
Fairness-Aware QoE Assessment for Adaptive Video Streaming on Edge Layers
Abstract
Adaptive video streaming over edge computing infrastructures offers a promising approach to improving user-perceived Quality of Experience (QoE) by reducing latency and enabling more efficient content delivery. However, traditional ABR (Adaptive Bitrate) algorithms often prioritize individual QoE maximization, potentially leading to unfair resource distribution across users with heterogeneous network conditions. In this work, we present a Fairness-Aware QoE Assessment framework for adaptive video streaming in edge-based architectures. Our testbed combines realistic network traces, group-based user separation, and multiple ABR strategies, including heuristic and machine learning-based approaches, to analyze the interplay between QoE performance and fairness. To enable a deeper evaluation, we adopt four complementary fairness indicators: QoE Fairness Score (QFS), Bitrate Fairness Index (BFI), Time-Fair QoE (TF-QoE), and Stability-Aware Fairness (SAF). This multi-metric approach provides greater granularity in identifying how different ABR strategies affect the overall user experience and resource allocation fairness in edge-assisted adaptive streaming environments. Experimental results show that the machine learning-based approach achieves the most balanced trade-off between QoE and fairness, outperforming heuristics particularly under heterogeneous network conditions. Moreover, fairness-specific metrics such as SAF and TF-QoE revealed disparities in playback stability and group-level equity that would remain hidden with QoE-only evaluations.
André Luiz S. de Moraes, Douglas D. J. de Macedo
A Cooperative Video Streaming Scheme Using Scalable Video Coding
Abstract
Video streaming is affected by the variable behavior of communication networks, which can influence the video quality. Users may have different network connection capabilities with respect to the source, and video quality can be dynamically adjusted to meet their requirements. To deal with this problem, this paper proposes a cooperative video streaming scheme with scalable video coding. Scalable video coding is used to deliver video to nodes with weak links to the source. In this way, these nodes receive video with lower quality compared to nodes with strong links. However, a collaboration scheme between strong nodes and weak nodes can improve the video quality in the latter. To reach this goal, the strong nodes work as peers to redistribute the video streams received from the sources. Our results show that our cooperative scheme combined with scalable video coding can improve the received video quality in weak nodes.
Francisco de Asis Lopez-Fuentes, Luis D. Martinez-Hernandez
Prototype for Real-Time Monitoring of Variables in Agricultural Soils
Abstract
The integration of emerging technologies for crop monitoring has enabled significant advances in agronomy. Researchers and companies have developed systems that combine advanced sensors and artificial intelligence for the early detection of crop diseases or the management of crop production. Wireless sensor networks have been implemented to monitor environmental variables and optimize the use of agricultural resources. Internet of Things (IoT)-based platforms for real-time data collection and analysis, and the use of machine learning algorithms to predict growth patterns and detect anomalies in crop conditions, are examples of advances in agriculture. In this work, a prototype was developed to monitor the health of agricultural soils, using a development board and sensors to measure soil variables. The prototype allows the measurement of variables such as soil moisture, temperature, pH, and levels of essential nutrients such as nitrogen, phosphorus, and potassium through a web-based system that enables the monitoring of soil conditions. The system includes a graphical interface for viewing real-time data, alerts on adverse conditions, practical recommendations for crop care, and the capture of plant images and soil data while monitoring soil health. The objective is to allow for better control of crop health. The developed application integrates advanced data processing capabilities, notifying users of potential crop problems and providing personalized recommendations.
A. Mexicano, J. C. Carmona, L. Mexicano, T. Medina, J. A. Reyna
Methodology Based on Optimization and Artificial Vision Techniques for the Detection of HLB in Citrus Trees
Abstract
Huanglongbing (HLB) is a disease that significantly affects citrus crops and production; even when various care measures are taken to cope with it, it remains terminal for the affected trees. This article presents a methodology that facilitates the detection of HLB-infected trees in Valencia orange orchards using computer vision techniques. To evaluate the methodology, multispectral photographs were taken of 10 fields located in the state of Tamaulipas and around Ciudad Victoria. A total of 141 images were adjusted and regions of interest were extracted: 58 images of healthy trees and 83 of diseased trees. Twenty vegetation indices were applied for the identification of the classes Background, Healthy Trees, and Diseased Trees. Subsequently, the techniques of Correlation Analysis (CA), Principal Component Analysis (PCA), AutoEncoder (AE), CA with PCA (CA-PCA), CA with AE (CA-AE), Minimum Redundancy Maximum Relevance (mRMR), and Transformers (T) were used to develop the 7 datasets for training the classifiers Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Naive Bayes (NB), and Ensemble Learning (EL) composed of AdaBoost with DecisionTree, XGB, and RandomForest. The classifier that achieved the best separation between the classes Background, Diseased Trees, and Healthy Trees was EL using the CA-AE set, which was composed of the reduction of vegetation indices through CA and AE, with an Accuracy of 70%, Precision of 69%, Recall of 69%, F1-score of 69%, and Cohen’s Kappa of 55%.
Jesús C. Carmona-Frausto, Adriana Mexicano-Santoyo, Salvador Cervantes-Alvarez, Kevin E. Bee-Cruz, Pascual N. Montes-Dorantes
A Graphical Editor for Composing English Sentences Using Grammatical Assistance Function
Abstract
In this paper, we add a new function to the block-based graphical structure editor for the English language to assist users by suggesting all acceptable pairs of parts-of-speech or sentence elements for two blocks, which are dragged close to each other. These blocks are combined to form a grammatically correct composite block. This feature helps users to compose sentences with good grammatical awareness.
Kohei Takahashi, Michitoshi Niibori, Masaru Kamada
Towards a Systematic Approach to Memory Safety: A Case Study Integrating Techniques and Practices Over the Software Development Life Cycle (SDLC)
Abstract
Safe memory management is a crucial pillar in modern programming and cybersecurity, essential to prevent vulnerabilities and errors that can compromise the reliability and security of computer systems. Memory safety problems, as evidenced by many cases (e.g., WannaCry and CrowdStrike), can have a devastating impact on the entire Trusted Computing Base (TCB) of organisations. Despite such important issues, there is still a lack of standardised frameworks, methods, and tools able to guide software engineers in a systematic consideration and mitigation of software memory safety during the entire Software Development Life Cycle (SDLC). In this work, we propose a first attempt towards an approach that contextualises and considers, within the SDLC, the main issues related to memory safety, and proposes guidelines to apply specific techniques for reducing potential memory safety risks. Specifically, our approach is pragmatic and oriented towards the industry, with the aim of helping organisations to identify the parts where memory safety issues most often occur, and to mitigate such problems in the context of a secure SDLC. The concept of memory safety is introduced, followed by an overview of the main classes of vulnerabilities, and then by an in-depth analysis of the applicable mitigation techniques. We present our approach and a case study that serves as an initial application of it and an exemplification of the related concepts. The main contribution of this work consists in the systematization of mitigation techniques for memory management problems, according to the SDLC, and in the practical demonstration of their effectiveness in a case study.
Isaia Tonini, Giacomo Nalli, Luca Piras, Pietro De Matteis, Stelios Kapetanakis, Silvio Ranise
Efficient Cross-Chain Smart Contract Execution via State Channels
Abstract
Public blockchains offer transparency and security guarantees, but suffer from limited throughput, high latency, and unpredictable transaction costs, making them unsuitable for the high-frequency data flows of the Internet of Things. In this work, we address these scalability challenges by introducing a hybrid architecture that leverages state channels to enable efficient cross-chain smart contract execution between a permissioned ledger, responsible for capturing pollutant measurements from IoT sensors, and a public blockchain, used for final settlement. Our approach enables the processing of sensor readings to be performed off-chain in a lightweight, low-cost environment, while preserving the integrity and non-repudiation guarantees of on-chain settlement through a state channel. We describe the protocol design in detail, and the use of our solution in a case study involving water quality sensors.
Alessandro Bigiotti, Leonardo Mostarda, Alfredo Navarra, Davide Sestili
A Data Cache Algorithm in Distributed Management of Metaverse Objects
Abstract
In this paper, we propose a data cache algorithm to reduce loads in the distributed management of metaverse objects. In recent years, virtual spaces, or metaverses, have attracted a great deal of attention. The proposed algorithm focuses on tall objects, which are less likely to be obscured in the metaverse space, and allows taller objects to remain in the user’s working memory and cache memory longer. In addition, we propose a system that predicts the user’s next move based on their past positions to reduce the delay in rendering objects. The proposed system holds the data needed for the next move in cache memory. The evaluation results demonstrated the effectiveness of the prediction system and the characteristics of the proposed cache algorithm.
Nobuki Aoki, Tomoya Kawakami, Satoru Matsumoto, Tomoki Yoshihisa, Yuuichi Teranishi
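The height-aware eviction idea described above can be illustrated with a toy cache that, when full, evicts the shortest object first so that taller (less occludable) objects stay cached longer. This is a hypothetical simplification; the paper’s actual scheme also involves working memory and movement prediction:

```python
class HeightAwareCache:
    """Toy fixed-capacity object cache: eviction removes the shortest
    cached object, so taller objects persist longer (hypothetical
    simplification of the height-based policy described above)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.objects = {}  # object_id -> height

    def put(self, object_id, height):
        # Evict the shortest object if the cache is full and this is new.
        if object_id not in self.objects and len(self.objects) >= self.capacity:
            shortest = min(self.objects, key=self.objects.get)
            del self.objects[shortest]
        self.objects[object_id] = height

    def __contains__(self, object_id):
        return object_id in self.objects
```

A production variant would combine the height priority with recency and the predicted next position of the user.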
A Predictive Framework for Scheduling Stochastic Processes on Heterogeneous Resources
Abstract
Traditional scheduling theory assumes deterministic or independently random process behavior, failing to capture the temporal patterns in real systems. This work presents a predictive framework that models processes as discrete-time Markov chains with learnable behavioral states and treats resource heterogeneity as an optimization opportunity. The central contribution is the EWMA-Markov predictor, combining exponentially weighted averaging with Markovian modeling through \(P_{ij}(t+1) = \alpha P_{ij}(t) + (1-\alpha )\mathbb {I}[\text {transition}]\), achieving O(1) complexity and numerical stability. For convex cost functions, we prove the cost under predictive scheduling satisfies \(\text {Cost}_{\text {predictive}} \le \text {Cost}_{\text {reactive}}(1 - D(\mathcal {P})H(\mathcal {R})/(1+\kappa ))\), where \(D(\mathcal {P})\) measures process diversity, \(H(\mathcal {R})\) resource heterogeneity, and \(\kappa \) switching overhead. This quantifies when heterogeneous systems outperform homogeneous ones. Robustness analysis shows prediction errors cause only linear degradation: \(\text {Cost}_{\text {achieved}} \le \text {Cost}_{\text {optimal}} + 2\varepsilon \cdot \text {diameter}(\rho )\) for error \(\varepsilon \). The framework maintains fairness with bounded distortion and integrates into production schedulers. While validated on CPU scheduling, the mathematical structure generalizes to any domain with stochastic demand and heterogeneous supply.
Walter Balzano, Pasquale Miranda
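One reading of the EWMA-Markov update quoted in the abstract above can be sketched as follows, assuming a row-stochastic transition matrix P over k behavioral states (each entry’s update is O(1); refreshing the whole row of the observed source state touches its k entries and keeps the row a valid probability distribution):

```python
import numpy as np

def ewma_markov_update(P, i, j, alpha=0.9):
    """After observing a transition i -> j, apply
        P[i, m] <- alpha * P[i, m] + (1 - alpha) * I[m == j]
    to every entry m of row i. Decaying the row by alpha and adding
    (1 - alpha) to column j preserves the row sum of 1."""
    P[i, :] *= alpha
    P[i, j] += 1.0 - alpha
    return P
```

This is an illustrative sketch of the stated formula, not the paper’s implementation; the smoothing factor `alpha` here plays the role of the paper’s \(\alpha\).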
Locating Method Using Collective Intelligence and Specific Object Recognition
Abstract
Location identification using images is an emerging technology with a broad range of potential applications. It can be employed in scenarios such as locating individuals in distress during mountain climbing, aiding in disaster response for events like earthquakes and floods, finding lost children in tourist areas or crowded spaces, and tracking individuals with dementia or similar conditions. However, accurately identifying a location based solely on an image remains a significant challenge, as visual cues related to location are often absent or ambiguous. Despite growing recognition of the importance and potential of this technology, no practical solution has yet been established. In this paper, we propose a novel method for location identification inspired by the concept of collective knowledge. In this approach, a large number of individuals contribute by registering images of objects they subjectively associate with specific locations. The resulting set of object images tends to capture features that others, who did not participate in the registration, also perceive as characteristic of those locations. Therefore, it becomes feasible to estimate a location based on images containing such objects. We present the design and implementation of this method and evaluate its effectiveness through experiments in real-world environments.
Shogo Ishimaru, Hiroyoshi Miwa
Distributed Prediction Ledger: A Theoretical Framework for Privacy-Preserving Consensus on Heterogeneous Continuous Values
Abstract
Competitive markets face a fundamental tension related to the Grossman-Stiglitz paradox: agents could benefit from aggregating predictions, yet sharing them destroys competitive advantage. We introduce the Distributed Prediction Ledger (DPL), a protocol that shifts from traditional value consensus to process consensus. This paradigm shift enables competitive agents to harness collective intelligence without revealing individual strategies. DPL orchestrates a unique combination of Byzantine fault tolerance, differential privacy, and verifiable delay functions within a hierarchical architecture, achieving optimal communication complexity while preserving prediction confidentiality. Our Master Theorem proves that DPL approximates optimal aggregation with formal privacy and robustness guarantees. The framework directly addresses real-world challenges in quantitative finance, where hedge funds using proprietary models could benefit from aggregated market insights but cannot share their strategies. This work opens a new research direction in distributed systems, transforming zero-sum competition into collaborative intelligence.
Walter Balzano, Pasquale Miranda
Prognostics of Turbofan Engines Using Self-Attention Transformers and Explainable AI Techniques
Abstract
Predicting the remaining useful life (RUL) of turbofan engines, which allows for proactive interventions and improves system reliability, is a crucial task in predictive maintenance. To address the challenge of accurately estimating the RUL of turbofan engines, this paper presents a novel deep learning method that makes use of Self-Attention Transformers. Using the C-MAPSS FD001 dataset, we trained a multi-block Transformer encoder and showed that it could successfully capture long-range dependencies in sensor data. Through rigorous training, both training and validation losses effectively converge, demonstrating the model’s stable learning curves and its robustness in capturing data patterns. The model demonstrates high accuracy for lower RUL values, which is crucial for timely maintenance planning, although it shows some variance in predictions for higher RUL values. The thorough examination of the model’s interpretability is one of this work’s main contributions. We demonstrate how the model distributes its attention across the input sequence by extracting and visualizing the attention weights, with distinct attention heads focusing on patterns in recent or historical data. In addition to validating the model’s decision-making process, this analysis facilitates the development of predictive maintenance systems with enhanced transparency and reliability.
Alessandro Del Prete, Egidia Cirillo, Zahida Mashaallah, Alberto Moccardi
Prediction of Stomatal Dynamics: Leveraging Generative AI for Automated Detection and Analysis of Stomatal Closure
Abstract
Plants have always played a vital role in the lives of all living creatures. Plant leaves contain stomata, which are responsible for regulating the exchange of gases and water vapor between the plant and its surrounding ecosystem. The irregular opening and closing behavior of stomata exerts a significant influence on plant growth and the functioning of ecosystems. Each stoma is surrounded by two specialized cells, known as guard cells, which control the opening and closing of the stomatal pore. The development of AI-based applications requires the use of advanced computer vision tools such as YOLO, which can detect multiple stomatal pores on a leaf in real time. This tool can extract useful information from digital microscopic images with a confidence score of 80 to 90%, and its effectiveness is further enhanced with a CNN. However, stomatal dynamics alone are not sufficient for reliable AI-based crop monitoring, and other factors must be considered. By combining stomatal behavior with continuous detection of plant conditions such as leaf color, thermal imaging and soil moisture, generative AI systems can provide more accurate stress assessment and actionable insights for precision agriculture.
Zahida Mashaallah, Egidia Cirillo, Alessandro Del Prete
AI Design Principles for Compliance-Driven Industrial IoT Systems
Abstract
This survey maps the technological and regulatory landscape of AI-driven industrial Internet of Things (IIoT), addressing the misalignment between applications and their governance, in pursuit of a compliance-by-design approach. The analysis begins with a review of established and state-of-the-art time-series analysis techniques, in addition to their deployment in real-time decision support systems (DSS). Next, these technological trajectories are cross-referenced with AI design principles, evaluated under the umbrella of the European AI Act, ISO/IEC, and other normative standards. By aligning regulations, applications, and systems design, the survey proposes a practical KPI-based approach towards compliance-driven IIoT, contributing to the groundwork for advanced AI system standardization.
Flora Amato, Rajib Chandra Ghosh, Alberto Moccardi, Marcello Pelosi
Towards Reliable Prognostics: RUL Uncertainty Estimation with Transformers and Monte Carlo Dropout
Abstract
It is commonly accepted that accurately predicting the Remaining Useful Life (RUL) is a key part of good Prognostics and Health Management (PHM). Deep learning models have made significant progress on this task in recent years. The Transformer architecture, in particular, has shown that it can make accurate predictions on complicated time-series data. However, its “black-box” nature still makes it hard to use in safety-critical fields like aerospace engineering, where reliability and interpretability are just as important as accuracy. In this study, we propose a method to quantify prediction uncertainty in Transformer-based RUL models using Monte Carlo Dropout (MCD). The method is tested on the challenging C-MAPSS FD002 dataset, which features many different operating conditions and failure modes. Our experimental findings indicate that the proposed method not only attains competitive accuracy but also offers a statistically robust metric of predictive confidence. We observe a positive correlation (\(r = 0.5296\)) between the estimated uncertainty and the absolute prediction error. This means that the uncertainty estimates are meaningful and escalate as engines near their end-of-life, which is consistent with engineering intuition. These results are a step toward making AI-based predictions more transparent and reliable, helping engineers make data-driven decisions with a better understanding of the risks involved.
Alessandro Del Prete, Zahida Mashaallah
Postprocessing Solar Radiation with a Self-attentive Transformer for Renewable Energy Predictions
Abstract
Physics-based ensemble weather prediction models form the backbone of probabilistic operational weather forecasting systems. The ensemble weather forecasts, however, suffer from systematic biases and inappropriate dispersion, which can be corrected by applying statistical postprocessing techniques. In this work, we apply a Transformer based on self-attention to postprocess forecasts of solar radiation from the EUPPBenchmark dataset. Our method results in a \(5.3\%\) improvement in CRPS over the raw forecasts, realizes a significant increase of \(25 \%\) in ensemble spread, and outperforms a classical member-by-member method employed as a competitive baseline. Finally, we convert these postprocessed weather forecasts into solar power predictions, highlighting the potential of the Transformer for practical renewable energy applications.
Aaron Van Poecke, Ayoub Aouraghe, Joris Van den Bergh, Peter Hellinckx, Hossein Tabari
Diff-Ensemble: An Ensemble of LSTMs and Diffusion Models for Day-Ahead Load Forecasting Using Limited Data
Abstract
Load forecasting plays a pivotal role in industrial demand response, enabling businesses to plan their electricity needs ahead of time through day-ahead scheduling. However, data is often limited or outdated due to frequent infrastructure modifications. To this end, load forecasting using limited data has recently attracted research interest. This paper explores the application of learning-based algorithms to day-ahead load forecasting in data-constrained environments. Moreover, we introduce Diff-Ensemble, an ensemble incorporating the long short-term memory (LSTM) and diffusion model, to enhance load forecasting capabilities when data is limited, and evaluate it using real-world data from an industrial site as part of the InStaFlex project. Results show that Diff-Ensemble reduces the normalized MAE (NMAE) by 8.1% and 18.5% compared to the LSTM and diffusion model, respectively. Furthermore, with just seven days of training, Diff-Ensemble achieves an NMAE of 2.77%, outperforming all other methods in this study. This work demonstrates the viability of ensemble learning for load forecasting with limited data.
Stijn Van Raemdonck, Joris Van den Bergh, Brecht Zwaenepoel, Tomas Van Oyen, Dieter Van den Bleeken, Hossein Tabari, Peter Hellinckx
Disaggregating Household Electricity Production and Consumption from Smart Meter Data
Abstract
We present a model-based method to disaggregate residential smart meter data into behind-the-meter (BTM) production and consumption profiles without requiring PV measurements. The approach fits a PV system model to feed-in power under clear-sky conditions, minimizing ramp-period loss while enforcing peak-time validity constraints, and leverages high-resolution solar radiation data to reconstruct accurate production and consumption profiles. Evaluation on 33 households from the Pecan Street dataset yields normalized mean absolute errors of 0.038 for production and 0.031 for consumption relative to ground truth. The proposed method is non-intrusive and explainable, enabling smart meter data applications for grid management, balancing and demand-side flexibility.
Benoit De Vrieze, Hossein Tabari, Peter Hellinckx
Improving Household Electricity Consumption Forecasting with Smartphone Data
Abstract
The increasing penetration of renewable energy sources and the electrification of residential load have amplified volatility in distribution networks. This paper investigates the influence of human behavior on household electricity usage and investigates how smartphone-collected data can enhance single-household consumption forecasting. By incorporating features related to human activity into a state-of-the-art sequence-to-sequence model based on bidirectional long short-term memory (LSTM) with attention, prediction accuracy improves by 15–22%. Our findings highlight that behavioral information, such as app usage patterns, can significantly improve forecasting performance for five-hour-ahead or longer horizons, but also raise important privacy considerations that must be addressed.
Pieter Jan Houben, Vincent Verbergt, Benoit De Vrieze, Peter Hellinckx
Graph Retrieval Augmented Generation for Privacy-Sensitive Information
Abstract
In this paper the use of graph retrieval augmented generation (GRAG) in domain-specific knowledge bases (DSKBs) which contain both public and private data is explored. Our proposed methodology utilizes a modified embedding-based GRAG implementation capable of preserving private information in DSKBs. Our privacy-preserving GRAG methodology is evaluated against a baseline GRAG implementation using a modified version of the MultiHopRAG dataset and based on three metrics: privacy preservation, data quality and computational performance. This research proves that our methodology outperforms the baseline and is capable of preserving private information in DSKBs without compromising on output quality and computational performance.
Rien Van Campenhout, Benoit De Vrieze, Pieter Jan Houben, Jens de Hoog, Peter Hellinckx
An Evaluation Index System for Mental Health in Colleges and Universities Based on Random Forest Algorithm
Abstract
The mental health of college students is a key focus of societal concern, and constructing a scientific evaluation system is fundamental to achieving effective intervention. This study proposes a method for developing a mental health evaluation index system for universities based on Random Forest (RF) algorithm. First, guided by the biopsychosocial model, 20 secondary indicators were preliminarily established across five dimensions: emotion, self-cognition, interpersonal adaptation, academic career, and behavioral lifestyle. Subsequently, the Gini impurity decrease, and permutation importance criteria of Random Forest were employed to screen the indicators, identifying a core set of metrics and constructing a binary classification prediction model. Empirical results demonstrate that the RF-screened indicator system is concise and effective, with the RF model achieving an accuracy of 0.892 and an AUC value of 0.945, significantly outperforming models such as logistic regression and support vector machines. This study confirms the superiority of Random Forest in handling high-dimensional nonlinear problems in psychological data, providing theoretical and methodological support for universities to establish intelligent mental health warning systems.
Jing Zhang, Linjun Fan, Yan Yang, Zheng Liu
Backmatter
Title
Advances on P2P, Parallel, Grid, Cloud and Internet Computing
Editors
Leonard Barolli
Tomoyuki Ishida
Mario Dantas
Copyright Year
2026
Electronic ISBN
978-3-032-10344-4
Print ISBN
978-3-032-10343-7
DOI
https://doi.org/10.1007/978-3-032-10344-4

PDF files of this book have been created in accordance with the PDF/UA-1 standard to enhance accessibility, including screen reader support, described non-text content (images, graphs), bookmarks for easy navigation, keyboard-friendly links and forms and searchable, selectable text. We recognize the importance of accessibility, and we welcome queries about accessibility for any of our products. If you have a question or an access need, please get in touch with us at accessibilitysupport@springernature.com.