Cybersecurity and Artificial Intelligence Strategies
Second International Conference, CAIS 2025, Baghdad, Iraq, September 17–18, 2025, Proceedings
- 2026
- Book
- Edited by
- Safaa O. Al-Mamory
- Ali Makki Sagheer
- Abeer Salim Jamil
- Mahmoud Shuker Mahmoud
- Haider Hadi Abbas
- Kewei Sha
- George S. Oreku
- Publisher
- Springer Nature Switzerland
About this book
This book constitutes the post-conference proceedings of the Second International Conference on Cybersecurity and Artificial Intelligence Strategies, CAIS 2025, held in Baghdad, Iraq, during September 17–18, 2025.
The 23 full papers included in this book were carefully reviewed and selected from 101 submissions. They are organized in the following topical sections: Security and Privacy; Applied Computing; Computing Methodologies.
Table of contents
Frontmatter
Security and Privacy
Frontmatter
An Improved DBSCAN Clustering Algorithm for Bot Detection on Twitter
Raad G. Al-Azawi, Safaa O. Al-Mamory
Abstract: Twitter faces attacks from bots that harm society, and obtaining labelled data for supervised bot detection is difficult and time-consuming. This study proposes a novel unsupervised clustering method that identifies bots on Twitter from unlabelled data. The primary goal is a dependable system that detects bots accurately, reducing misinformation and improving online discourse. The proposed technique accounts for several important factors, including accuracy, computational complexity, and latency. The system comprises feature-extraction and bot-prediction steps built on an adaptive DBSCAN algorithm. To validate the approach, the study employs several evaluation measures: homogeneity (0.988), completeness (0.989), V-measure (0.989), adjusted Rand index (0.996), adjusted mutual information (0.989), silhouette coefficient (0.786), and Fowlkes-Mallows score (0.998). The results show that Twitter data can be labelled effectively and offer a practical, cost-effective approach to bot identification. This research emphasizes the value of integrating several unsupervised algorithms for feature extraction, dimensionality reduction, and improved bot detection, making a significant contribution to bot detection on social media and opening broad opportunities for analysing Twitter data.
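The abstract above builds on DBSCAN; as a point of reference, a minimal sketch of the classical (non-adaptive) DBSCAN on toy 2-D points might look like this. The `eps` and `min_pts` values are illustrative, not the paper's:

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id (0, 1, ...) or -1 for noise."""
    labels = [None] * len(points)            # None = not yet visited
    cluster = -1

    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:             # not a core point (may become border later)
            labels[i] = -1
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:              # previously noise: absorb as border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = neighbors(j)
            if len(nbrs) >= min_pts:         # j is itself core: keep expanding
                queue.extend(nbrs)
    return labels
```

Points with at least `min_pts` neighbours within `eps` seed clusters; everything unreachable from a core point is labelled `-1` (noise), which is how dense groups of similar accounts can be separated from the rest without any labels.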
Anomaly Detection in Blockchain Transactions Using Supervised Machine Learning
Heba M. Fadhil, Mohammed I. Younis, Sajad Esfandyari
Abstract: Blockchain has changed the landscape of digital transactions by introducing a decentralized and secure architecture. Nevertheless, it is susceptible to the 51% attack, in which a malicious user controls a majority of the network's computing power to manipulate transactions through fraud or duplication. This paper proposes a supervised machine-learning approach to anomaly detection designed to detect such attacks. On an annotated Bitcoin transaction dataset with artificially added anomalies, we compare two de facto standard classifiers, Support Vector Machine (SVM) and Random Forest (RF). Important blockchain attributes such as confirmations, block height, transaction volume, and difficulty are used to differentiate between normal and anomalous behavior. Performance is measured by accuracy, precision, recall, and F1-score. RF achieved perfect scores on all metrics (100% accuracy, precision, recall, and F1-score), showing that it consistently spots malicious activity. Conversely, SVM showed high accuracy and precision (98.96% and 97.56%, respectively) but lower recall and F1-score (90.91% and 94.12%, respectively), indicating that it missed some anomalies. The findings confirm the usefulness of ensemble models such as RF for real-time anomaly detection and for improving blockchain security systems.
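The four reported metrics derive from the confusion matrix in the standard way; a small helper shows the arithmetic. The counts in the usage example are hypothetical, chosen only because they reproduce the SVM figures quoted above:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# hypothetical counts that happen to match the SVM results in the abstract
acc, prec, rec, f1 = classification_metrics(tp=40, fp=1, fn=4, tn=436)
# acc ~ 0.9896, prec ~ 0.9756, rec ~ 0.9091, f1 ~ 0.9412
```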
Artificial Intelligence Strategies for Advancing Cybersecurity and Intrusion Detection Systems (IDs)
Ali Azeez Ahmed Al-Rubaye, Omar Ayad Ismael, Lujain Qasim Naser Lami
Abstract: With the growing prevalence of highly sophisticated and frequent cyber-attacks aimed at infrastructure, and in particular at Internet of Things (IoT) devices, more intelligent, high-level intrusion detection methodologies are required. Standard security measures such as encryption and authentication are not enough to detect new or advanced threats such as zero-day and DDoS attacks. In this paper, we propose a hybrid IDS architecture that combines a rule-based IDS with ML and DL mechanisms to improve cybersecurity in IoT and WSN networks. The system uses Support Vector Machines (SVM), Convolutional Neural Networks (CNN), and anomaly detection to characterize known and new threats reliably and in a timely manner. Experimental results on three benchmark datasets, NSL-KDD, DS2OS, and IoT Botnet, show that the hybrid IDS achieves an overall detection accuracy of 96.4% and outperforms traditional IDS models on various performance measures, including precision, recall, and F1-score. The results demonstrate the system's capability and flexibility in defending against various types of threats, offering a practical and scalable solution for real-world IoT security challenges.
BERT-Enhanced Dual-Attention RNN for Short Text Spam Detection
Ali Kadhem Jasim, Mohammed Riyadh Al-Rikabi, Fuqdan A. Al-Ibraheem, Hussein Alaa Al-Kaabi, Ali Kamber
Abstract: Detecting spam is a crucial challenge for users' safety and privacy. Unlike long texts such as emails, short text messages are limited in the number of words, making them more challenging to analyze. Artificial intelligence and machine learning approaches to spam detection often struggle with modern spam's dynamic and context-specific nature. To address this, we propose a novel method consisting of these key steps: pre-processing, word embedding using BERT, self-attention for feature weighting, an RNN for temporal feature extraction, temporal attention for feature selection, and a fully connected layer for classification. The model generates rich word representations using BERT, while its dual-attention mechanism enables it to concentrate on significant words and patterns within the message. The RNN captures the relations between the words in a sentence. We evaluated the proposed model on the widely used UCI SMS Spam dataset (v.1). The proposed method achieved an accuracy of 98.9%, surpassing state-of-the-art techniques.
Effective Knowledge Graph Representation for Cybersecurity Using AI-Based X Data and Named Entity Relation Technique
Sara Faez Abdulghani, Bushra Abdullah Shtayt, Mustafa Sabah Taha, Mohammed Mahdi Hashim
Abstract: Globally, Twitter, now known as X, is the third most popular Online Social Network (OSN), behind Facebook and Instagram. Its data model and data-access API are simpler than those of other OSNs, making it well suited for social network studies that examine the structure of the social graph, sentiment towards different entities, patterns of online behavior, and types of malicious attacks in a vibrant network with hundreds of millions of members. Over the past ten years, Twitter has been used in over 10,000 research articles, establishing it as a significant research platform. Although most research that uses Twitter includes thorough evaluations and comparative studies, few attempts have been made to map this research landscape as a whole. Using data from Twitter, this study creates a knowledge graph (KG) of ransomware attacks (RA) and investigates the difficulties of capturing ransomware knowledge. Ransomware is a worldwide threat that is constantly changing. Creating a KG from unstructured text involves three essential processes: gathering and cleaning data, extracting entities, and extracting relationships. This work extracts ransomware entities from unstructured data using a previously proposed ransomware ontology, customizing it to match attacks reported on Twitter. The KG is created by identifying relationships between the extracted entities. A tracing technique is used to assess the accuracy of the generated knowledge graph and demonstrate its efficiency. The proposed method achieved high accuracy compared to relevant studies, with an accuracy of 93.42% and an F1-score of 94.03%.
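A knowledge graph of the kind described reduces, at its simplest, to a set of (head, relation, tail) triples with pattern queries over them. The sketch below uses hypothetical entity and relation names, not the paper's ransomware ontology:

```python
# A minimal KG as (head, relation, tail) triples; names are illustrative only.
triples = {
    ("LockBit", "is_a", "ransomware"),
    ("LockBit", "targets", "Windows"),
    ("WannaCry", "is_a", "ransomware"),
    ("WannaCry", "exploits", "EternalBlue"),
}

def query(kg, head=None, relation=None, tail=None):
    """Return triples matching a partial pattern (None acts as a wildcard)."""
    return sorted(t for t in kg
                  if (head is None or t[0] == head)
                  and (relation is None or t[1] == relation)
                  and (tail is None or t[2] == tail))

# all entities typed as ransomware in the graph
families = [h for h, _, _ in query(triples, relation="is_a", tail="ransomware")]
```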
Enhancement of Cybersecurity in AI Services Using Hybrid Homomorphic Encryption
Mina Sameer Haji Al-Okbi, Shahad A. Mnati, Noor Adel Jawed
Abstract: The fast progression of Artificial Intelligence (AI) into nearly all fields has been matched by a high number of cyber-attacks aimed at compromising AI models. Ensuring that data privacy holds during computation remains a major hurdle to the widespread adoption of AI services. Privacy-Preserving AI (PPAI) techniques such as Homomorphic Encryption (HE) make it possible to carry out computations securely on encrypted data. However, conventional HE often suffers from scalability and computational-efficiency bottlenecks, making it unsuitable for resource-constrained environments. To tackle these challenges, this paper presents a Hybrid Homomorphic Encryption (HHE) technique that combines symmetric cryptography with HE to strengthen both security and performance for AI services. We present the GuardAI framework, designed for deploying AI applications on resource-limited devices. GuardAI enables classification of encrypted data while preserving the confidentiality of both the input data and the AI models. We evaluate the proposed HHE method on a heart disease classification task using electrocardiogram (ECG) signals as an example of contamination-susceptible data. We demonstrate that the proposed approach provides strong data privacy with little computational and communication overhead, matching unencrypted inference in classification accuracy. This work lays the foundation for integrating HHE into AI-based cybersecurity solutions, especially in computationally constrained environments, strengthening the security of AI services by providing both increased privacy and efficiency and thereby increasing their resilience to emerging cyber threats.
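The abstract relies on the defining property of HE: operations on ciphertexts map to operations on plaintexts. Unpadded textbook RSA (deliberately insecure, toy primes) is the smallest self-contained illustration of such a property, multiplicative here, unlike the lattice-based schemes practical HE and HHE systems actually use:

```python
# Toy textbook RSA (tiny primes, no padding): insecure, illustration only.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)        # n = 3233
e = 17
d = pow(e, -1, phi)                      # modular inverse (Python 3.8+)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

# Homomorphic property: multiplying ciphertexts multiplies the plaintexts.
m1, m2 = 7, 9
c_prod = (enc(m1) * enc(m2)) % n         # computed without ever decrypting
assert dec(c_prod) == m1 * m2            # 63, valid as long as m1*m2 < n
```

The point is only the last two lines: a server can combine encrypted values and return a ciphertext of the combined result, which is the capability GuardAI builds on for encrypted classification.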
Innovative Neural Network Architecture for Progressive Windows Malware Detection via Adaptive Feature Fusion and Multi-stage Learning
Muthana S. Mahdi
Abstract: Windows-based malware continues to evolve in complexity, leveraging obfuscation, polymorphism, and anti-analysis techniques that bypass traditional security systems. The rapid evolution of malware targeting Windows poses significant challenges to conventional detection techniques, which struggle to adapt to dynamic and obfuscated threats. This study presents a novel neural network framework that integrates static and dynamic analysis through an Adaptive Feature Integration Module, hybrid convolutional-recurrent layers, and an ensemble decision mechanism. The proposed model employs a progressive, multi-stage training strategy on diverse datasets, including EMBER, EMBERSim, and SoReL-20M, to enhance generalization and resilience against emerging malware variants. Experimental results demonstrate that our approach achieves superior accuracy, precision, recall, and F1 scores compared with related methods, with an average detection accuracy of over 96% across the evaluated datasets. The framework effectively captures local and temporal patterns inherent in malware behavior, mitigates overfitting, and adapts to new data without catastrophic forgetting. This integration of advanced deep learning techniques represents a substantial advance in Windows malware detection, offering improved performance and robustness for real-world cybersecurity applications.
LiteStegNet: A Lightweight Deep Learning Framework for Video Steganography in IoT-Based Systems
Hussein Ali Hussein Al-Janabi, Ziyad Tariq Mustafa Al-Ta’i
Abstract: Embedding secret messages within videos makes video steganography one of the most powerful techniques for protecting communications in IoT systems. Unfortunately, traditional deep learning approaches to steganography are computationally intensive, which makes them impractical for the extremely limited resources available on IoT devices. In this paper, we present LiteStegNet, a lightweight deep learning architecture for video steganography on IoT devices. Our model uses a convolutional autoencoder (CAE), which allows the insertion and retrieval of confidential information with minimal distortion and high fidelity. Through extensive experiments on the UCF101 dataset, we found that LiteStegNet reaches a peak embedding accuracy of 98.92% with a processing overhead low enough for real-time IoT applications. LiteStegNet also achieves a low reconstruction loss of 0.0140 after the final epoch, while SSIM remains above 0.95, indicating high similarity between the original and stego-video frames. The proposed framework considerably improves the security, efficiency, and computational scalability of steganography in IoT-based multimedia communications.
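For contrast with the learned CAE approach above, the classical baseline it improves on is least-significant-bit (LSB) embedding; a sketch on raw 8-bit pixel values (not the paper's method):

```python
def embed_lsb(pixels, bits):
    """Hide one payload bit per pixel in the least-significant bit (classical LSB)."""
    assert len(bits) <= len(pixels), "payload larger than cover"
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b       # clear the LSB, then set the payload bit
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits hidden bits from a stego signal."""
    return [p & 1 for p in pixels[:n_bits]]
```

Each cover value changes by at most 1, which is why LSB embedding is visually imperceptible; learned models like the CAE above instead discover where distortion is least detectable.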
Secure and Workload-Aware Virtual Machine Migration: Enhancing Performance, Energy Efficiency, and Cyber Resilience in Cloud Data Centers
E. I. Elsedimy, Riyam Amer Wahed, Sura Mustfa Abbas, Fadhil Abd Rasin
Abstract: Workload prediction is necessary for efficient Virtual Machine Migration (VMM) in cloud computing systems, as it improves security, prevents energy waste, and satisfies SLAs. In this paper, we propose a multi-stage method for fast VM migration using the Smart Adaptive Virtual Machine Migration (SAVMM) technique. The approach comprises monitoring, VM management, smart migration, and proactive optimization by a hybrid LR-PSO algorithm. SAVMM combines advanced monitoring, predictive models such as ARIMA, and Particle Swarm Optimization (PSO) to provide accurate predictions of host utilization. The LR-PSO algorithm reduces unnecessary migrations, powers off idle servers, and minimizes energy consumption. Simulations with real workload traces from CoMon confirm that SAVMM outperforms benchmark algorithms (IQR, MAD, THR, and LR) by reducing SLA violations and maximizing resource utilization, making SAVMM suitable for dynamic, high-load cloud environments. Through accurate prediction of CPU usage, SAVMM significantly reduces idle server time, reduces power consumption, and maintains SLA satisfaction. It consistently shows lower SLA violation rates (6.23% to 7.96%) and fewer host shutdowns than baseline methods. The results confirm that SAVMM is a scalable, reliable, and power-saving solution for optimizing VM migration and enhancing the performance of cloud infrastructure.
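The LR half of the hybrid LR-PSO can be pictured as a least-squares trend fit over recent CPU samples, extrapolated one step ahead to decide whether a host is about to be overloaded. In the sketch below, the utilisation values and the 0.80 overload threshold are hypothetical:

```python
def linear_fit(ys):
    """Least-squares line through (0, ys[0]), (1, ys[1]), ... -> (slope, intercept)."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    slope = num / den
    return slope, my - slope * mx

def predict_next(history):
    """Extrapolate the fitted line one step past the last sample."""
    slope, intercept = linear_fit(history)
    return slope * len(history) + intercept

history = [0.52, 0.55, 0.61, 0.66, 0.71]      # hypothetical CPU utilisation samples
will_overload = predict_next(history) > 0.80  # hypothetical migration trigger
```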
AI-Powered Malware Analysis in Military Cybersecurity: A Deep Learning Approach
Zaid Ali Hussein, Omer Abdulhaleem Naser
Abstract: Military cybersecurity faces increasing threats from Advanced Persistent Threats (APTs), zero-day exploits, and adversarial AI-driven malware, necessitating real-time, adaptive defense mechanisms to protect critical networks, UAV systems, and cyber-physical infrastructures. Traditional detection methods, such as signature-based approaches, struggle with high false positives and poor zero-day attack detection, making them inadequate against evolving cyber threats. AI-driven approaches, particularly deep learning, provide significant improvements in malware classification accuracy, real-time detection, and robustness against adversarial attacks. This research introduces a deep learning-based malware detection framework that integrates Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, Transformer models, and Reinforcement Learning (RL) to enhance detection performance. The framework employs a hybrid AI-powered threat intelligence system that combines static, dynamic, and adversarial AI-based defenses to counter evolving malware tactics such as polymorphism, obfuscation, and zero-day exploits. Experimental results demonstrate over 99% detection accuracy, 99.5% adversarial robustness, and inference speeds under 15 ms, ensuring low-latency threat response for 5G/6G tactical networks and military cyber defense systems. By incorporating GAN-based adversarial training and integrating real-time cyber threat intelligence (CTI) platforms, this research advances next-generation AI-driven military cybersecurity solutions, enhancing resilience, adaptability, and autonomous defense against modern cyber warfare threats. Unlike traditional methods, this approach delivers robust, scalable, and adaptive defense mechanisms, critical for securing military assets against advanced cyber warfare tactics.
Applied Computing
Frontmatter
A Modern Software Engineering Approach to UML Class Diagram Evaluation
Omar Raad Alsammak, Ashraf Abdulmunim Abdulmajeed
Abstract: Class diagrams are a fundamental element of the software development process, providing an organized visual representation of software components and their relationships. Ensuring the quality of these diagrams is therefore important for maintaining consistency, design integrity, and project success. Design quality within class diagrams encompasses both data organization methods and systematic assessment of individual system elements. The quality of class diagrams affects software execution speed and future maintenance needs, so strict attention is required to minimize later development errors. Higher quality enables designers to discover design issues at early development stages, allowing corrections before advancing to more detailed phases. An innovative tool, running on the Enterprise Architect platform, automates the evaluation process and quality classification. The tool merges XML data analysis with intelligent machine learning algorithms to supply immediate assessments of developers' high-level designs. Evaluations performed in the early design phase enable software engineers to conduct an extensive analysis that reveals both strengths and weaknesses in the first system model. The tool delivers profound design insights that help designers make vital improvements throughout the early development stages. This helps mitigate risks, enhances the overall quality of the final product, and refines designs before advancing to more complex detailed design stages, ensuring higher performance and greater efficiency in the long term. The tool leverages insights from software quality datasets and applies machine learning techniques to ensure accurate and efficient evaluations. Experimental results demonstrate the tool's efficacy, with the Ensemble (Soft Voting) method achieving the highest accuracy of 96%. Other models performed closely, with the nearest result 1% lower and others ranging from 2% to 7% below the top accuracy. These outcomes highlight the advantage of combining models to enhance assessment performance.
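Soft voting, the best-performing method above, simply averages the class-probability vectors of the member models and picks the argmax; a minimal sketch with hypothetical model outputs:

```python
def soft_vote(prob_vectors):
    """Average per-class probabilities from several models; return (class, averages)."""
    n_models, n_classes = len(prob_vectors), len(prob_vectors[0])
    avg = [sum(p[c] for p in prob_vectors) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# three hypothetical models scoring a diagram as [P(good design), P(poor design)]
winner, avg = soft_vote([[0.60, 0.40], [0.45, 0.55], [0.70, 0.30]])
```

Unlike hard (majority) voting, soft voting lets a very confident model outvote two lukewarm ones, which is often why the ensemble edges out its individual members.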
Advancement in E-Marketing Strategy Using Support Vector Machine and Apriori Algorithm
Evan Madhi Hamzh Al Rubaie, Mehdi Ebady Manaa, Fryal Jassim Abd Al-Razaq
Abstract: E-marketing has emerged as a fundamental component of contemporary commercial strategy, using internet channels to advertise products and services. Nonetheless, conventional e-marketing tactics frequently struggle with accuracy, particularly for users with minimal transaction history. This study addresses the issue by offering an improved e-marketing approach that combines a Support Vector Machine (SVM) with the Apriori algorithm to boost predictive accuracy. The approach entails gathering user data from social media (via the Twitter API) and e-commerce platforms, preparing the data using natural language processing (NLP), and employing the SVM for sentiment analysis and classification. The Apriori algorithm is used to derive association rules for items that are frequently purchased together. Experimental findings indicate that the proposed approach attains an accuracy of 92.1%, a precision of 0.89, and a recall of 0.90, surpassing conventional techniques, particularly for users with limited transaction histories. The results indicate that combining social media data with machine learning methodologies can substantially improve e-marketing efforts, providing enterprises with a more effective and precise instrument for tailored product recommendations.
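The Apriori step above mines frequent itemsets, exploiting the fact that every subset of a frequent itemset is itself frequent; a compact brute-force sketch with a hypothetical transaction log:

```python
def apriori(transactions, min_support):
    """Return {itemset: support} for every itemset with support >= min_support."""
    n = len(transactions)
    support = lambda s: sum(s <= t for t in transactions) / n
    # frequent 1-itemsets
    current = {frozenset([i]) for t in transactions for i in t}
    current = {s for s in current if support(s) >= min_support}
    frequent, k = {}, 1
    while current:
        frequent.update({s: support(s) for s in current})
        k += 1
        # candidates: unions of frequent (k-1)-itemsets of size exactly k
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        current = {c for c in candidates if support(c) >= min_support}
    return frequent

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk"}]   # hypothetical purchases
freq = apriori(baskets, min_support=0.5)
```

Association rules then follow from the frequent itemsets, e.g. confidence of "bread implies milk" is support({bread, milk}) / support({bread}).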
Advancing Real-Time Facial Expression Recognition: A Deep Learning-Based Video Stream Analysis
Sura Salah, Sama Hayder
Abstract: Facial Expression Recognition (FER) is a critical component of artificial intelligence (AI) with applications in human-computer interaction, affective computing, security, and healthcare. Despite significant advances, real-time FER remains challenging due to computational inefficiency, dataset biases, and the difficulty of detecting subtle micro-expressions. This study proposes an optimized deep learning framework that integrates Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Transformer-based architectures to enhance video-based emotion analysis. The model leverages self-supervised learning (SSL), contrastive learning, and motion tracking to improve micro-expression recognition, achieving a 9.2% increase in classification accuracy. At the same time, the framework adopts low-bit quantization (INT4/INT2), model pruning, and parallelism to enable real-time performance on edge AI devices, with latency reductions of up to 159% when deployed on Jetson Nano, Google Coral, and Edge TPU. To address fairness and bias concerns, an RCNN, adversarial debiasing, and reinforcement-learning-based dataset balancing led to a 14.3% reduction in misclassification for minority demographic groups. The study also discusses ethical concerns in FER applications, especially privacy risks in surveillance and mental health diagnostics, and promotes multi-modal emotion recognition that integrates facial expressions with voice and physiological signals to enhance context-aware emotion analysis. Experimental evaluations show that the proposed model achieves state-of-the-art accuracy of 94.1%, outperforming the best baseline models in both accuracy and computational efficiency.
An Enhanced Deep Learning Approach for Writer Identification and Verification Using Corner Detection
Mays Zeedan Khalaif, Muhanad Tahrir Younis
Abstract: Writer identification/verification using handwriting biometrics is used in forensic analysis, document verification, and security systems and has attracted much attention as a means of identifying and verifying writers. Both Arabic and English texts still present difficulties, especially Arabic, because of the script's cursive style and the minute variations in letter placement, which are frequently indicated by the positioning of dots. This research presents a novel offline system for identification/verification of Arabic and English handwriting that uses convolutional neural networks (CNNs) in conjunction with several techniques, such as the Harris Corner Detector, Shi-Tomasi, and SIFT. The technique does not require word or character segmentation; it employs data augmentation to enhance the quality of the training data and upsampling to enhance image clarity. Experimental results show that the model is more effective than previous methods in the literature.
Deep Learning Techniques for Predicting Transcription Factor Binding Sites (TFBSs) from DNA Sequencing Data
Shahad Raed Al-Alwash, Sura Zaki AL-Rashid
Abstract: Detecting transcription factor binding sites (TFBSs) within DNA sequences is crucial for understanding the mechanisms of gene regulation, yet it remains a challenging task. Various methods have been employed to identify potential TFBSs, with machine learning among the most effective. Nevertheless, many of these approaches do not offer a reliable and effective method for encoding the genetic data under analysis. Deep learning has recently made significant advances in bioinformatics. This study proposes a deep learning method for predicting TFBSs that combines an attention layer with a convolutional neural network (CNN); the attention layer helps the model concentrate on the most essential portions of the AGRIS DNA-seq data. The proposed method consists of two main parts: the first predicts binding sites, which are then used in the second part to predict transcription factors. The proposed model achieves high predictive performance, with an overall accuracy of approximately 99% in predicting TFBSs.
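CNN-based TFBS predictors conventionally consume DNA as one-hot vectors over the four bases; the paper does not spell out its encoding, but the standard scheme is:

```python
def one_hot(seq):
    """Encode a DNA string as a list of 4-element one-hot vectors (A, C, G, T)."""
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return [table[base] for base in seq.upper()]
```

A length-L sequence becomes an L x 4 matrix, which a 1-D convolution can scan for motif-like patterns, exactly the kind of local signal TFBSs present.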
Dual-Phase Sensor Deployment Using Pelican Optimization Algorithm: A Case Study on Air Pollution Monitoring Baghdad City
Raed Waheed Kadhim, Ahmed T. Sadiq, Asmaa Sadiq Abdul
Abstract: Wireless Sensor Networks (WSNs) are one of the most important research fields. Their performance can be affected by the issue of area coverage, as sensor position is crucial in WSNs, particularly in high-risk environments. This paper presents a robust approach for sensor deployment in WSNs. The proposed approach combines Voronoi diagrams with the Pelican Optimization Algorithm (POA), employing effective objective functions to partition the area as evenly as possible, thereby promoting sensor coverage and balanced load distribution within each partition. The methodology comprises two phases: a distributing phase and a grouping phase. POA is used in both phases to improve sensor positions iteratively, reducing gaps in coverage and achieving a balanced load distribution between regions and among the sensors within each region. Simulation results demonstrate that the heuristic-driven deployment model significantly enhances coverage consistency and minimizes variation in sensor distributions, proving its high performance in diverse configurations. A case study of Baghdad City illustrates the practical application of the approach in environmental monitoring scenarios, such as tracking air pollution. This research advances WSN deployment methodologies, introducing an adaptable and scalable solution for achieving optimal resource allocation through optimal sensor coverage in complex real-world environments.
Hand Gesture Classification on Custom Abductees-Rescue Dataset Using an Optimized LSTM
Aws Saood Mohamed, Nidaa Flaih Hassan, Abeer Salim Jamil
Abstract: This paper presents a solution for classifying abduction-related hand gestures in surveillance systems through the development of a specialized dataset and an optimized LSTM model. Previous research on hand gesture classification for security applications, particularly work focused on distress signals and abduction detection, has used datasets with significant practical limitations: data captured via mobile phone or laptop front-facing cameras rather than surveillance equipment, restricted detection ranges, and predominantly controlled lighting conditions that inadequately represent real-world surveillance environments. To address these limitations, the Abductees-Rescue dataset has been developed using surveillance cameras mounted at 3-m heights. Following preprocessing and landmark extraction, the dataset yielded 9,111 samples (4,545 normal hand gestures and 4,566 abduction signals). Each sample contains sequential data from 45 frames of hand video, where each frame provides 21 three-dimensional landmarks extracted using the MediaPipe Hands framework. An effective preprocessing approach normalizes these landmarks relative to the wrist joint, eliminating variations in hand position and size. The classification model implements a hierarchical Long Short-Term Memory (LSTM) architecture with L2 regularization, batch normalization, and optimized dropout layers. Experimental results demonstrate 95.06% accuracy, 95.24% precision, and 94.89% recall on the test dataset, with balanced performance across both gesture categories. Confusion matrix analysis shows a proportional distribution of misclassifications, indicating robust generalization capability. This research contributes to the field of security surveillance by providing both a standardized dataset and an effective classification methodology for detecting potential abduction situations through hand gesture analysis.
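The wrist-relative normalisation described above can be sketched as a translation to the wrist landmark followed by a scale step; the max-magnitude scaling here is an assumption, since the paper only states that landmarks are normalised relative to the wrist:

```python
def normalize_landmarks(landmarks):
    """Translate (x, y, z) landmarks so the wrist (MediaPipe index 0) is the
    origin, then divide by the largest coordinate magnitude, removing the
    effect of hand position and hand size in the frame."""
    wx, wy, wz = landmarks[0]
    shifted = [(x - wx, y - wy, z - wz) for x, y, z in landmarks]
    scale = max(abs(c) for pt in shifted[1:] for c in pt) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in shifted]
```

After this step, the same gesture produces (near-)identical coordinates wherever the hand appears in the frame, which is what lets the LSTM focus on the motion pattern alone.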
WLDRD: Dynamic Weighted Load Balancing with Adaptive Request Distribution in Distributed Systems
Nuha H. Alameedi, Mahdi S. Almhanna
Abstract: Load balancing is an important issue in any distributed system and in cloud computing, where optimal performance and stability depend chiefly on effective allocation of resources and distribution of workload. Nevertheless, classical load balancing algorithms such as Round Robin and Least Connections fail to account for dynamically changing workloads, heterogeneous server capacities, and varying request sizes, leading to suboptimal resource utilization and high response times. Hence, we propose an advanced Weighted Load Balancing with Dynamic Request Distribution (WLDRD) algorithm. WLDRD maintains dynamic coordination among server weights, incoming request sizes, and live load conditions to fine-tune request distribution in real time according to the loads and capacities of the available servers. The algorithm's strength lies in balancing resource usage while keeping server response times minimal. Experimental results establish that WLDRD outperforms traditional load balancing approaches, with load balancing efficiency increased by 20%, response time decreased by 15%, and resource utilization increased by 20%. In addition, the algorithm can achieve 25% energy savings through effective resource allocation that complies with the principles of green computing. These improvements are especially pronounced when variable workloads and heterogeneous server capacities are present. The results show the algorithm's potential for improving the performance and scalability of distributed systems and cloud computing environments.
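The core idea, weighting live load by server capacity, can be reduced to a greedy dispatcher that sends each request to the server with the smallest load-to-weight ratio. This is a simplistic sketch, not the WLDRD algorithm itself, and the server table is hypothetical:

```python
def pick_server(servers):
    """Server whose outstanding load is smallest relative to its weight."""
    return min(servers, key=lambda s: servers[s]["load"] / servers[s]["weight"])

def dispatch(servers, request_size):
    """Route one request of the given size and record the added load."""
    target = pick_server(servers)
    servers[target]["load"] += request_size
    return target

servers = {"a": {"weight": 4, "load": 0.0},   # "a" is 4x as capable as "b"
           "b": {"weight": 1, "load": 0.0}}
routed = [dispatch(servers, 2.0) for _ in range(5)]
```

With equal-sized requests the dispatcher converges to the 4:1 split the weights imply; WLDRD extends this idea with live load feedback and variable request sizes.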
Computing Methodologies
Frontmatter
Design and Implementation of a Handshaking Algorithm for Enhanced Protection of High-Tension Towers in Iraq
Nadhir Ibrahim Abdulkhaleq, Mohannad Jabbar Mnati, Ahmed Saad Hussein, Ihsan Jabar Hasan
Abstract: Information and Communication Technology (ICT) plays a critical role in enhancing infrastructure security and preventing terror attacks. In Iraq, the electrical energy sector has been severely affected by frequent sabotage of high-tension (HT) power towers. This paper presents a stand-alone, low-cost prototype for early threat detection based on a decentralized handshaking protocol between sensor nodes installed on HT towers. Each node is equipped with three Passive Infrared (PIR) motion sensors managed by an ESP32 microcontroller. Upon detecting suspicious motion for a defined duration, the node triggers an alert using the ESP-NOW protocol, relaying the warning through neighboring nodes until it reaches a base station located in a remote, internet-deprived area. The proposed system includes a fallback mechanism: if Wi-Fi connectivity is available at a specific node (Node N), the alert is uploaded directly to the cloud. The overall design aims to reduce operational costs, minimize response time, and limit human exposure by enabling a smart, autonomous monitoring network that enhances the protection of critical power infrastructure against targeted attacks.
DiploNet: A Deep Learning Semantic Segmentation Model for Glaucoma Diagnosis
Abdullah Ahmed Al-Dulaimi, Raghad Alshabandar, A. H. Mohammed, Hayder Hussein KareemAbstractThis study proposes two deep learning models, DiploNet-1-Res and DiploNet-2-Res, inspired by the Diplodocus, for optic disc and optic cup segmentation in glaucoma, and compares their performance with state-of-the-art models. The evaluation is performed on four publicly available retinal image datasets (ACRIMA, Drishti_GS, ORIGA, and REFUGE), with mosaic augmentation applied to the training sets. Each dataset is split into subsets of 70%, 15%, and 15% for training, validation, and testing, respectively. The experimental results show that DiploNet-1-Res achieved an optic-disc mDice of 0.950–0.972 and mIoU of 0.906–0.946, with an optic-cup mDice of 0.855–0.937 and mIoU of 0.750–0.884; DiploNet-2-Res achieved an optic-disc mDice of 0.955–0.975 and mIoU of 0.915–0.952, with an optic-cup mDice of 0.877–0.943 and mIoU of 0.785–0.894. DiploNet-2-Res thus achieves superior segmentation accuracy, consistently outperforming the other models in mean Dice and mean intersection over union across all datasets. These results demonstrate that multi-scale feature extraction, deep semantic encoding, and enhanced spatial retention markedly improve segmentation performance, and they underscore the promise of deep learning for scalable glaucoma screening and automated diagnosis in clinical decision-making. -
Summarizing Research Papers with Transformer Models
Shahad Arkan Harb, Dhafar Hamed AbdAbstractThe exponential growth of academic publications has created an urgent need for reliable and automated summarization tools that can help researchers quickly grasp the essence of scholarly documents. This study investigates the effectiveness of two transformer-based models—T5-small and BART-large—for summarizing full-length research papers. These models were selected for their complementary design: T5’s unified text-to-text framework enables general-purpose summarization, while BART’s bidirectional encoder-decoder architecture offers rich contextual modeling. Using a preprocessed dataset of over 50,000 research abstracts from Kaggle’s arXiv collection, both models were fine-tuned and evaluated on 15 representative samples using standard NLP metrics, including BLEU, ROUGE-1/2/L, and BERTScore. Experimental results show that BART outperforms T5 across most metrics, achieving ROUGE-1 (45.67%), ROUGE-2 (40.35%), ROUGE-L (41.54%), BLEU (8.51), and BERTScore F1 (89.68%), while T5 achieved a slightly lower BERTScore F1 (88.62%) but showed better performance in certain semantic aspects. The average word length was also assessed to ensure lexical consistency. A focused case study on the complex NLP paper “Attention Is All You Need” revealed performance limitations, with ROUGE-1 dropping to 25.62% and BERTScore F1 to 80.36%, indicating challenges in handling dense, technical content with mathematical expressions and domain-specific language. This work provides a practical, comparative benchmark for transformer-based academic summarization and highlights the need for future research in domain-specific fine-tuning, hybrid summarization architectures, and human-in-the-loop evaluation frameworks to further enhance summary quality and robustness. -
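Among the metrics listed above, ROUGE-1 scores unigram overlap between a candidate and a reference summary. A minimal F1 sketch follows; it is deliberately simplified (whitespace tokenization, no stemming or stop-word handling), unlike the official ROUGE toolkit used in evaluations such as this one.

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: clipped unigram overlap between candidate and
    reference, combined as the harmonic mean of precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # Counter & = multiset min
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```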
LESS is More: Guiding LLMs for Formal Requirement and Test Case Generation
Abhishek Shrestha, Bernd-Holger Schlingloff, Jürgen GroßmannAbstractLarge Language Models (LLMs) demonstrate impressive in-context reasoning capabilities; however, generating structured outputs remains challenging. In this paper, we investigate prompt-based techniques to guide LLMs in producing outputs compliant with a pre-existing domain-specific controlled natural language called Language for Embedded Safety and Security (LESS). Additionally, we evaluate the effectiveness of LLMs in automating test case generation. Our results show that structured prompt engineering significantly enhances the accuracy and consistency of generated requirements, and that using controlled language formats such as LESS as an intermediate representation substantially improves test case generation accuracy. -
Traffic Equilibria Prediction Using Inverse Reinforcement Learning
Xinyuan Wu, Costas Courcoubetis, Antonis DimakisAbstractWe analyze the equilibrium behavior of a large population of self-interested drivers, such as taxi drivers or those operating on a ride-hailing platform. Each driver is modeled as a Markov Decision Process (MDP), aiming to maximize their long-run average reward by strategically selecting repositioning actions. The interaction among drivers arises through shared resource constraints, which we model as fluid queues. The resulting equilibria can be characterized as solutions to a convex program, an extension of the classical Eisenberg-Gale program in which waiting times play a role analogous to prices. To accurately predict actual mobility patterns, the reward function in the MDP model is inferred from the data using an inverse reinforcement learning (IRL) technique. We adopt a linear reward structure, and the features in the reward function are constructed using sparse principal component analysis (SPCA) and symbolic regression. These methods provide explicit linear and non-linear combinations of the raw features.
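The linear reward structure mentioned above means each state's reward is a weighted sum of constructed features, r(s) = w · φ(s), where IRL fits the weights and SPCA or symbolic regression builds the features. A toy sketch, with feature names and weights invented purely for illustration:

```python
def linear_reward(features, weights):
    """r(s) = w . phi(s): a linear reward over named features, as in
    the paper's assumed reward structure (values here are made up)."""
    assert features.keys() == weights.keys(), "feature/weight mismatch"
    return sum(weights[k] * features[k] for k in features)
```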
-
-
Backmatter
- Title
- Cybersecurity and Artificial Intelligence Strategies
- Edited by
-
Safaa O. Al-Mamory
Ali Makki Sagheer
Abeer Salim Jamil
Mahmoud Shuker Mahmoud
Haider Hadi Abbas
Kewei Sha
George S. Oreku
- Copyright Year
- 2026
- Publisher
- Springer Nature Switzerland
- Electronic ISBN
- 978-3-032-07244-3
- Print ISBN
- 978-3-032-07243-6
- DOI
- https://doi.org/10.1007/978-3-032-07244-3
The PDF files of this book were created in accordance with the PDF/UA-1 standard to improve accessibility. This includes support for screen readers, described non-textual content (images, graphics), bookmarks for easy navigation, keyboard-friendly links and forms, and searchable, selectable text. We recognize the importance of accessibility and welcome inquiries about the accessibility of our products. For questions or accessibility needs, please contact us at accessibilitysupport@springernature.com.