Power Engineering and Intelligent Systems
Proceedings of PEIS 2025, Volume 3
- Year: 2026
- Type: Book
- Editors: Vivek Shrivastava, Jagdish Chand Bansal, Bijaya Ketan Panigrahi
- Book Series: Lecture Notes in Electrical Engineering
- Publisher: Springer Nature Singapore
About this book
This book presents a collection of high-quality research articles in the fields of power engineering, grid integration, energy management, soft computing, artificial intelligence, signal and image processing, data science techniques, and their real-world applications. The papers were presented at the International Conference on Power Engineering and Intelligent Systems (PEIS 2025), held during March 8–9, 2025, at the National Institute of Technology Srinagar, Uttarakhand, India. The proceedings are published in two volumes.
Table of Contents
Frontmatter
Smrithiraksha: A Review on Safety and Support for Dementia Patients’ Navigation
M. Pintaram, M. S. Spoorthi, C. R. Narendra Babu, P. Vignesh, A. Vidhya Ganesh
This chapter delves into the innovative application of Smrithiraksha, a technology-driven solution designed to address the multifaceted challenges of dementia care. The text highlights the integration of GPS tracking, AI, and wearable technology to provide real-time monitoring, emergency alerts, and comprehensive support for patients, caregivers, and healthcare providers. Key topics include the role of geofencing in enhancing patient safety, the use of AI for predictive analytics, and the impact of Smrithiraksha on reducing caregiver burden. The chapter also explores the system's architecture, including its client-server model and the various modes tailored for patients, caregivers, and doctors. Furthermore, it discusses the results and expected outcomes of implementing Smrithiraksha, such as improved patient safety, increased caregiver confidence, and enhanced healthcare efficiency. The conclusion emphasizes the transformative potential of Smrithiraksha in setting a new standard for dementia care, leveraging technology to improve patient outcomes and streamline healthcare delivery.
This summary of the content was generated with the help of AI.
Abstract
More than 50 million individuals are affected by dementia, a neurodegenerative disorder, and this number is expected to increase fourfold over the next three decades. The disease causes severe behavioural and cognitive problems such as forgetfulness, confusion, and an inability to make sound decisions. These symptoms severely impair patients' mobility and are often extremely stressful for patients as well as caretakers. Routine surveillance is essential to address complications affecting patient safety, health, and well-being, yet it can burn caregivers out or lower overall service delivery. Traditional caregiving practices that rely largely on surveillance are often found wanting, especially in cases of emergency or wandering. Technology offers a way to solve these problems. However, currently available systems focus on one particular function, for example GPS tracking or medication reminders, and lack a comprehensive platform capable of meeting the demands of dementia care. Smrithiraksha is an Android application that addresses this gap by providing a whole range of features. Real-time location tracking, emergency control, simple navigational controls, and health information are all aspects of its user-oriented design. Because it serves patients, caregivers, and healthcare professionals alike, it gives patients more control, is cost-effective for caregivers, and increases healthcare effectiveness.
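As an aside, the geofencing check at the heart of such a system reduces to a distance test against a safe-zone radius. A minimal sketch (the coordinates, the 100 m radius, and the function names are illustrative, not from the paper):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def outside_geofence(patient, home, radius_m=100.0):
    """True if the patient's position has left the circular safe zone."""
    return haversine_m(*patient, *home) > radius_m

home = (12.9716, 77.5946)
at_home = outside_geofence((12.9716, 77.5946), home)   # False: inside the zone
wandered = outside_geofence((12.9816, 77.5946), home)  # True: ~1.1 km away
```

A caregiver alert would fire whenever `outside_geofence` first flips to true for a tracked position.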
Digital Restoration of Archaeological Artifacts Using a Hybrid U-Net and CycleGAN Framework
Rohit Beeravalli, Sujal Naduvinamani, Anup Kolabal, Manikantha Shivalli, Uday Kulkarni
This chapter explores the digital restoration of archaeological artifacts using a hybrid framework that combines U-Net and CycleGAN models. The text delves into the methodology of using U-Net for initial restoration tasks such as noise removal and crack filling, followed by CycleGAN for stylistic refinement and consistency. The evaluation metrics, including Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), are discussed in detail, highlighting the framework's effectiveness. The results demonstrate a PSNR of 30.7 dB and an SSIM of 0.92, indicating high perceptual similarity and structural accuracy. The chapter also addresses the challenges and limitations of the approach, such as the dependence on simulated damage patterns and computational complexity. Future directions for improving the framework are proposed, including enhancing damage simulation and extending the approach to 3D artifact reconstruction. This comprehensive overview provides insights into the potential of deep learning models for preserving cultural heritage and restoring invaluable artifacts.
This summary of the content was generated with the help of AI.
Abstract
Restoring damaged archaeological artifacts is important for the preservation of cultural heritage and for enabling historical research. This paper introduces a new approach that combines two deep learning models, U-Net and CycleGAN, to restore images of degraded artifacts. U-Net is used for the initial restoration, which reduces noise and local damage such as cracks and occlusion, while CycleGAN refines these outputs by maintaining stylistic consistency and enhancing intricate details. The framework is trained on the dataset provided by Zhao (Mendeley Data V1, 2023 [1]), which includes photographic images of painted pottery artifacts. To improve robustness, artificial degradation techniques such as scratches, occlusions, and cracks were applied. The methodology involves simulating realistic damage patterns over undamaged artifact images and sequentially training the models before their evaluation using objective metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), in addition to expert visual inspection. The combined U-Net + CycleGAN approach yielded superior performance over standalone models, with a PSNR of 30.7 dB and an SSIM of 0.92, showing high fidelity and perceptual similarity. However, challenges remain in handling highly intricate artifact details and variations in real-world degradation patterns. Addressing these issues will be crucial for practical deployment in cultural heritage preservation, museum digitization, and automated artifact restoration.
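For reference, the PSNR metric used to evaluate the restorations is straightforward to compute; a minimal sketch on flat pixel lists (SSIM is more involved and omitted here; the sample pixel values are illustrative):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(reference, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the reference."""
    err = mse(reference, restored)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / err)

# Toy 8-pixel example; real use would flatten full image arrays.
ref = [52, 55, 61, 59, 79, 61, 76, 61]
out = [50, 56, 60, 60, 78, 62, 75, 63]
quality_db = psnr(ref, out)
```

On whole images the same formula applies per pixel; the chapter's 30.7 dB figure is this quantity averaged over its test set.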
Real-Time Text-to-Braille Conversion and Precision Punching Using IoT for Accessible Reading
G. Anuradha, Rakesh Sarma Ponukupati, Bharadwaj Rachakonda
This chapter explores the development of a real-time IoT-integrated text-to-Braille conversion framework designed to enhance accessibility for visually impaired individuals. The system utilizes versatile Braille formatting and precision punching technology to accurately translate digital text into tangible Braille output. Key topics include the integration of AI and IoT for real-time conversion, the use of cost-effective hardware like Arduino and CNC components, and the system's offline functionality. The chapter also discusses the methodology behind the system's architecture, the working mechanism of the 2D plotter, and the experimental results that demonstrate the system's high accuracy and efficiency. The conclusion highlights the system's potential to revolutionize Braille conversion, making it faster, more cost-effective, and accessible for a wider audience.
This summary of the content was generated with the help of AI.
Abstract
IoT is transforming connected systems and the way we live our lives, from industrial automation to smart homes. Conventional Braille conversion techniques are manual, costly, and often require specialized equipment, constraining accessibility. To address these challenges, this study presents a real-time text-to-Braille conversion system integrating IoT, aimed at improving portability and independence for visually challenged people. The proposed system translates digital content into Braille and punches it onto paper with high precision using a computerized punching device. A specially designed separating line ensures a correct right-sided Braille arrangement for ideal tactile reading. Our key findings show that IoT integration substantially improves real-time processing, system efficiency, and user experience. The system combines versatile Braille formatting and precision punching, ensuring an accurate Braille representation. Furthermore, biomechanical interfacing and kinematic modeling improve the punching mechanism's precision without affecting legibility. The experimental results confirm the feasibility of this approach, demonstrating accurate Braille output with negligible errors. This research highlights the practicality of IoT in assistive technologies and offers a viable, cost-effective solution for Braille conversion.
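For intuition, Grade-1 Braille maps each letter to a 6-dot cell, and a punching plotter needs those dots as coordinates. A minimal sketch covering only a few letters (the dot pitch and cell spacing are typical values, not taken from the paper):

```python
# Partial Grade-1 Braille map: letter -> set of raised dots. Dots 1-3 form the
# left column top to bottom, dots 4-6 the right column. Only a few letters shown.
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "h": {1, 2, 5}, "i": {2, 4}, "l": {1, 2, 3}, "o": {1, 3, 5},
}

def to_cells(text):
    """Translate text into a list of dot sets, one 6-dot cell per letter."""
    return [BRAILLE[ch] for ch in text.lower() if ch in BRAILLE]

def punch_plan(cells, pitch_mm=2.5, cell_gap_mm=6.0):
    """(x, y) punch coordinates in mm for one row of cells, origin at top-left."""
    pts = []
    for i, cell in enumerate(cells):
        x0 = i * cell_gap_mm
        for dot in sorted(cell):
            col, row = divmod(dot - 1, 3)   # dots 1-3 -> column 0, 4-6 -> column 1
            pts.append((x0 + col * pitch_mm, row * pitch_mm))
    return pts
```

The plotter firmware would then visit each coordinate and fire the punch solenoid, mirroring the layout when embossing from the reverse side.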
Optimized Facial Expression Recognition Using Deep Learning and Optuna Hyperparameter Tuning
Prajwal Patil, Samarth Benni, Arun Sunkad, Darshan Shet, Uday Hiremath
This chapter explores the optimization of Facial Emotion Recognition (FER) models using deep learning and hyperparameter tuning with Optuna. The study addresses key challenges in FER, including class imbalance, environmental variations, and the need for robust, adaptive models. The proposed approach integrates advanced techniques such as customized loss functions, advanced data augmentation, and hybrid DNN-transformer models. The model is optimized using Optuna, a framework for hyperparameter optimization, which systematically explores the hyperparameter space to improve model performance. The chapter also discusses the impact of hyperparameter optimization on model accuracy and generalization, highlighting the importance of fine-tuning hyperparameters for optimal performance. The results demonstrate the effectiveness of the proposed model, achieving a test accuracy of 89.5%, which surpasses baseline methods. The chapter concludes with a discussion on future work, exploring advanced optimization techniques and potential applications in dynamic emotion recognition and cross-domain facial expression prediction.
This summary of the content was generated with the help of AI.
Abstract
Facial Expression Recognition (FER) is a challenging task with the goal of classifying an individual's emotional state from facial expressions. In this research work, the introduced method offers a robust solution to FER on the FER2013 dataset, which spans a broad spectrum of facial expressions in grayscale face images. The model classifies seven main emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality. State-of-the-art techniques are applied, such as Optuna for hyperparameter optimization, customized loss functions to deal with class imbalance, and sophisticated data augmentation strategies. These techniques collectively advance the scalability and generalizability of the model under diverse conditions. Leveraging a deep neural network architecture together with transformers, the introduced approach achieves a test accuracy of 89.5%, outperforming conventional baseline methods. Important hyperparameters such as filter sizes, dense units, dropout rates, and learning rates are optimized to achieve maximum feature extraction while preventing overfitting. The incorporation of class weights also mitigates the impact of imbalanced datasets, improving recognition performance. Applications of this research work lie in mental health screening, personalized learning devices, and human-computer interaction. Future work will extend the framework to dynamic emotion recognition and cross-domain applications, a significant step towards adaptive and efficient FER systems.
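Hyperparameter tuning of the kind Optuna automates amounts to sampling a search space and keeping the best trial. A dependency-free random-search sketch with a toy objective (the real objective would train and validate the FER model; all names and ranges below are illustrative, and Optuna's TPE sampler is smarter than pure random search):

```python
import random

def objective(params):
    """Toy stand-in for validation accuracy; a real objective would train the
    network with these hyperparameters and return its validation score."""
    return 1.0 - (params["dropout"] - 0.3) ** 2 - (params["lr"] - 1e-3) ** 2

def random_search(n_trials, seed=0):
    """Sample the space n_trials times; return the best params and score."""
    rng = random.Random(seed)
    best_params, best_value = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "dropout": rng.uniform(0.1, 0.6),
            "lr": 10 ** rng.uniform(-4, -2),          # log-uniform learning rate
            "dense_units": rng.choice([128, 256, 512]),
        }
        value = objective(params)
        if value > best_value:
            best_params, best_value = params, value
    return best_params, best_value

best, score = random_search(200)
```

With Optuna the loop body becomes `trial.suggest_float`/`trial.suggest_categorical` calls inside an objective passed to `study.optimize`, but the search-and-keep-best structure is the same.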
A Novel Mega Voltage Gain Boost DC/DC Converter for DC Microgrids
Dhivya Panneerselvam, Kodumur Meesala Ravi Eswar
This chapter delves into the design and analysis of a novel high-voltage gain boost DC/DC converter tailored for DC microgrids, focusing on the integration of renewable energy sources. The article explores the converter's operating modes, including energizing and deenergizing states, and provides a detailed theoretical analysis supported by simulation results. Key topics include the converter's high voltage gain, low switch voltage stress, and simple circuit structure. The simulation results demonstrate a significant voltage gain of 19.05 and an efficiency of 95.3%, highlighting the converter's potential for practical applications. The chapter concludes with a comparative analysis, showcasing the advantages of the proposed design over existing solutions.
This summary of the content was generated with the help of AI.
Abstract
Integrating renewable energy supplies with a DC microgrid requires a DC/DC converter. This article proposes a new non-isolated boost DC/DC converter (NBC) which provides lower voltage stress to the switches along with a high boost voltage gain (VG) and a common ground between supply and load. The circuit is built from two controlled MOSFET switches, four diodes, one inductor, and three capacitors. Thus, while keeping the component count of the proposed NBC to ten, high VG and low switch voltage stress are realized. In boost mode, the proposed NBC operates in each cycle on the principle of charging the inductor with the two in-built circuit capacitors connected in parallel, then discharging the inductor with the added voltage of the two capacitors across the load. This article contains a thorough examination of the proposed NBC in the steady-state condition. A non-ideal simulation analysis of the proposed NBC is performed in PSIM, where the voltage and current responses of the elements are captured at the attained VG of 19.05 with 95.3% conversion efficiency. Overall, this article justifies the proposed NBC through detailed theoretical steady-state and non-ideal simulation analysis, making it a viable solution for DC microgrids.
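For perspective on the reported gain of 19.05: an ideal conventional boost converter has gain 1/(1 − D), so matching that figure would demand an impractically high duty cycle. A quick check (this is the textbook boost formula, not the NBC's own gain expression, which the abstract does not reproduce):

```python
def conventional_boost_gain(duty):
    """Ideal voltage gain of a plain boost converter, Vo/Vin = 1/(1 - D)."""
    assert 0 <= duty < 1
    return 1.0 / (1.0 - duty)

def duty_for_gain(gain):
    """Duty cycle a plain boost would need to reach a given ideal gain."""
    return 1.0 - 1.0 / gain

# A plain boost would need D ~ 0.948 to match the NBC's simulated gain of
# 19.05 even ideally; parasitic losses make such duty cycles impractical,
# which is the motivation for high-gain topologies like the proposed NBC.
d_required = duty_for_gain(19.05)
```

This back-of-envelope comparison is why extreme-gain topologies are judged on switch stress and component count rather than duty cycle alone.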
Car Anti-theft System Using Driver Facial Biometrics Authentication and Telegram Alert
Sourav Kumar, R. Karthika
This chapter explores the development of a sophisticated car anti-theft system that utilizes facial biometrics authentication and Telegram alerts for real-time monitoring. The system employs advanced facial recognition techniques, including MTCNN for facial detection and MobileFaceNet for generating unique facial embeddings. Key components of the system include a Raspberry Pi 4 Model B, a car door lock actuator, a GPS module, and a Telegram bot for alerts. The system is designed to handle various authentication scenarios, ensuring secure access control and immediate notifications to the vehicle owner. The implementation of the system is discussed in detail, highlighting its accuracy in identifying users under different conditions and its ability to integrate with existing vehicle systems. The chapter also compares different face detection and embedding generation techniques, demonstrating the superiority of the chosen methods. The system's practical applications and future enhancements are also explored, making it a comprehensive guide for professionals interested in automotive security and IoT.
This summary of the content was generated with the help of AI.
Abstract
Car anti-theft systems have evolved from simple mechanical locks to high-end digital solutions. The automotive industry has consistently enhanced features to improve security and user control. This paper presents a facial authentication-based car anti-theft system as a part of a driver monitoring system. The system uses an in-cabin camera for user authentication and a Telegram bot for live alerts. The facial recognition module leverages the Multi-task Cascaded Convolutional Network (MTCNN) model for facial detection and the MobileFaceNet model for facial embedding generation. This module demonstrates superior separability (d-prime value of 3.5126) and the lowest inference time (0.153 s) among the techniques analyzed. The anti-theft system includes a fail-safe door lock, an electronic immobilizer, and a sound alarm system. The proposed system was implemented on a Raspberry Pi 4 for real-time operation.
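The d-prime figure quoted above is the standard separability measure between genuine and impostor score distributions. A minimal sketch (the example means and standard deviations are illustrative, not the paper's):

```python
import math

def d_prime(genuine_mean, genuine_std, impostor_mean, impostor_std):
    """Separability of genuine vs impostor similarity-score distributions:
    d' = |mu_g - mu_i| / sqrt((sigma_g^2 + sigma_i^2) / 2)."""
    return abs(genuine_mean - impostor_mean) / math.sqrt(
        (genuine_std ** 2 + impostor_std ** 2) / 2.0
    )

# Hypothetical score statistics for a face-matcher: well-separated
# distributions give a large d', meaning fewer authentication errors.
separability = d_prime(0.9, 0.05, 0.3, 0.2)
```

A larger d' means the two score distributions overlap less, so a single decision threshold produces fewer false accepts and false rejects.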
Effective Prediction of Critical Stress Intensity Factor of Fly Ash-Based Geopolymer Concrete Using Machine Learning Techniques
Thi-Thanh-Huong Nguyen, Ngoc-Thanh Tran, Chi-Trung Nguyen
This chapter delves into the application of machine learning techniques to predict the critical stress intensity factor (CSIF) of geopolymer concrete, a sustainable alternative to conventional concrete. The study collects 190 experimental test results, considering fourteen input factors such as aggregate content, sand content, and fiber type, to develop accurate predictive models. Two machine learning models, Decision Tree Model (DTM) and Support Vector Machine Model (SVMM), are employed and evaluated based on their performance metrics. The results reveal strong correlations between predicted and experimental values, with both models demonstrating high reliability. Notably, fiber volume fraction is identified as the most influential factor affecting the CSIF, while curing time has the least impact. This research highlights the potential of machine learning in enhancing the understanding and prediction of mechanical properties in geopolymer concrete, offering valuable insights for professionals in the field.
This summary of the content was generated with the help of AI.
Abstract
Geopolymer concrete is an innovative and sustainable material for civil infrastructure, utilizing an alkaline solution and fly ash as binders instead of the cement used in conventional concrete. Among its mechanical properties, the critical stress intensity factor is particularly important, as it represents the material's crack resistance under various loading conditions and governs the crack propagation process. However, most studies on the critical stress intensity factor (CSIF) of geopolymer concrete rely heavily on experimental approaches, which require significant time and resources. To address this challenge, this study aims to develop accurate machine learning models to predict the CSIF of geopolymer concrete. A total of 190 experimental test results were collected from literature studies, encompassing fourteen input factors and one output factor. Based on this dataset, two machine learning models were developed: the decision tree model (DTM) and the support vector machine model (SVMM). The prediction results demonstrated strong correlations between the predicted and experimental test data in both the training and test phases, with R values exceeding 0.93. Furthermore, the reliability of the machine learning models was confirmed, with RMSE values less than 15% of the average experimental test data. Sensitivity analysis revealed that fiber volume fraction was the most influential factor affecting the critical stress intensity factor, contributing between 25 and 40%. Conversely, curing time was identified as the least influential factor, with a contribution of approximately 1.3%. The results of this study contribute to the development of accurate models for predicting the CSIF of geopolymer concrete and help identify the most influential factors affecting it.
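The R and RMSE criteria used to validate the models can be sketched in a few lines (toy data below; the study's 190-sample dataset is not reproduced here):

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between measured and predicted values."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def pearson_r(actual, predicted):
    """Pearson correlation coefficient R between two equal-length sequences."""
    n = len(actual)
    ma = sum(actual) / n
    mp = sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    sa = math.sqrt(sum((a - ma) ** 2 for a in actual))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (sa * sp)
```

R above 0.93 means the predictions track the experimental trend closely; RMSE below 15% of the mean bounds the typical prediction error in absolute terms.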
Optimizing Dairy Cattle Health Through Personalized Nutrition and Comprehensive Management Solution
Aryan Goswami, Shruti Kunale, Shaktiprasad Kadam, G. S. Mundada
This chapter explores the development and implementation of a mobile application designed to optimize dairy cattle health through personalized nutrition and comprehensive management solutions. The app features a ration balancing module that allows farmers to create precise feeding schedules based on forage availability and nutrient requirements. It also includes a cattle profiling system, real-time performance tracking, and a community forum for peer-to-peer information exchange. The text highlights the benefits of the app, such as increased milk yield, reduced methane emissions, and improved farm profitability. Additionally, it discusses the app's user-friendly interface, multilingual support, and the potential for future enhancements, such as the integration of advanced analytics and machine learning. The chapter concludes with a comparison of different research studies on the impact of ration balancing on milk production and methane emissions, emphasizing the app's role in enhancing farm productivity and sustainability.
This summary of the content was generated with the help of AI.
Abstract
The project, “Optimizing Dairy Cattle Health through Personalized Nutrition and Comprehensive Management Solutions,” features a mobile application that improves calf health through personalized nutrition regimens. The software provides individualized feeding regimens for each animal based on its age, breed, weight, and nutritional demands, decreasing the risk of underfeeding or overfeeding. It includes ration balancing, health tracking, linguistic support, and a community forum for farmer participation and knowledge sharing. The approach attempts to increase feed efficiency, animal health, and overall dairy farm productivity, particularly in areas with limited access to modern agricultural equipment.
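The app's ration-balancing algorithm is not spelled out here; a classic hand method for the two-feed case is Pearson's square, sketched below (the feed names and crude-protein percentages are illustrative, not from the paper):

```python
def pearson_square(target_cp, feed_a_cp, feed_b_cp):
    """Pearson's square: proportions of two feeds whose blend hits a target
    crude-protein (CP) percentage. The target must lie between the two feeds."""
    lo, hi = sorted((feed_a_cp, feed_b_cp))
    assert lo < target_cp < hi, "target must lie between the two feed CP values"
    parts_a = abs(target_cp - feed_b_cp)   # diagonal differences of the square
    parts_b = abs(target_cp - feed_a_cp)
    total = parts_a + parts_b
    return parts_a / total, parts_b / total

# e.g. blend 44% CP soybean meal with 9% CP maize grain to reach a 16% CP mix
share_soy, share_maize = pearson_square(16, 44, 9)   # 0.2 soy, 0.8 maize
```

Multi-ingredient, multi-nutrient balancing generalizes this into a linear program, which is presumably what a full ration-balancing module would solve.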
Transformers in Dermatology: A Deep Learning Approach to Skin Lesion Classification
Pothuraju Raju, Hari Priya Tanala, Bellamgubba Anoch, Ramesh Babu Mallela, M. Prasad, Thammuluri Rajesh
Transformers in Dermatology delves into the critical role of early detection in reducing melanoma-related mortality, highlighting the use of dermoscopy and macroscopic imaging for skin lesion analysis. The study explores various deep learning models, including convolutional neural networks (CNNs), and their application in skin disease diagnosis. It provides a detailed comparison of different models, such as AlexNet, YOLOv8, and CNN2D+BiGRU, and their performance metrics. The text also discusses the importance of data preprocessing, feature extraction, and classification techniques in improving the accuracy and generalisability of skin disease detection systems. Additionally, it emphasizes the need for better explainability in AI models to build trust with dermatologists. The study concludes with a performance analysis of the proposed models and their potential for real-time, remote diagnosis, making it a comprehensive resource for professionals interested in the intersection of deep learning and dermatology.
This summary of the content was generated with the help of AI.
Abstract
One of the deadliest types of cancer, skin cancer can spread to other body parts if not found and treated promptly. Dermoscopy images have been at the vanguard of a paradigm shift in medical management: the application of deep learning to skin cancer screening. This work assesses how well Transformer-based deep learning models perform for automated skin lesion classification. Our proposed Transformer-based model outperformed conventional CNN architectures such as AlexNet and YOLOv8 with an accuracy of X per cent. Comparative studies reveal enhanced feature extraction and classification performance in dermatological image diagnosis. This paper also addresses issues including dataset biases and the demand for explainability in AI-driven diagnostics, highlighting their importance for actual clinical use.
IoT-Based Automated Smart Granary Monitoring System for Real-Time Monitoring
G. Karthick Kumar, B. Nantha Kumar, R. Nishanthan, R. Ranjith, S. Dhanasekar, D. Sathish Kumar
This chapter explores the design and implementation of an IoT-based Automated Smart Granary Monitoring System that addresses the critical issue of post-harvest grain spoilage. The system utilizes a network of temperature, humidity, and gas sensors to continuously monitor storage conditions, ensuring optimal preservation of grains. Key features include real-time data transmission via MQTT protocol, cloud-based storage for scalability, and a web-integrated dashboard for remote monitoring and automated alerts. The system's architecture and working methodology are detailed, highlighting the use of NodeMCU microcontrollers and various sensors for comprehensive environmental monitoring. The chapter also discusses the system's performance, demonstrating its accuracy in detecting and responding to adverse conditions, thus minimizing grain spoilage and enhancing food security. The integration of AI-driven predictive analytics and blockchain-based secure data sharing is explored as potential future enhancements, making the system even more robust and user-friendly.
This summary of the content was generated with the help of AI.
Abstract
The IoT-based granary monitoring system is designed to track real-time storage conditions with temperature, humidity, and gas sensors (DS18B20, HS1101, MQ-135, MiCS-5526, MH-Z19B). Data is sent over the MQTT protocol to MongoDB, a cloud-based platform, and displayed in an interactive ReactJS web interface for real-time insights. Users are automatically alerted when predefined thresholds are breached so that they can intervene in real time to prevent spoilage. Comparable IoT-based agricultural monitoring systems exist, but they tend to be limited in scalability and real-time monitoring. Compared with conventional storage systems, this is a scalable, automated, real-time, and energy-efficient monitoring system. Performance evaluation covers sensor accuracy, response time, and spoilage rates. The experimental results achieved an accuracy of 85% in tracking the environmental conditions, with a notification response time under 5 s, outpacing traditional systems. Its efficiency in storage management is better than that of existing systems. Future enhancements concern AI-driven predictive analytics and blockchain integration for better security and decision-making in agricultural storage. This solution seeks to enhance food security and minimize waste in farm storage.
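The threshold-alert logic described above can be sketched independently of the transport layer (in the deployed system the alerts would travel over MQTT, e.g. via a client library such as paho-mqtt; the threshold values below are illustrative, not the paper's setpoints):

```python
# Illustrative safe ranges per sensor channel; real setpoints depend on the
# grain type and storage conditions and are not given in the abstract.
THRESHOLDS = {
    "temperature_c": (10.0, 25.0),
    "humidity_pct": (30.0, 60.0),
    "co2_ppm": (0.0, 1000.0),
}

def check_reading(reading):
    """Return an alert string for every sensor value outside its safe range."""
    alerts = []
    for key, value in reading.items():
        lo, hi = THRESHOLDS[key]
        if not lo <= value <= hi:
            alerts.append(f"{key}={value} outside [{lo}, {hi}]")
    return alerts

# A reading inside all ranges yields no alerts; an out-of-range humidity does.
ok = check_reading({"temperature_c": 20.0, "humidity_pct": 50.0})
damp = check_reading({"humidity_pct": 75.0})
```

Each non-empty alert list would be published to the dashboard topic, which is what keeps the notification latency within the reported 5 s.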
Automated Detection of Cardiac Arrhythmia Using Deep Learning
S. Nagaraj, M. B. Ezhil Venthan, K. Guna, R. Kavin, N. G. Shriharshan
This chapter explores the automated detection of cardiac arrhythmias using advanced deep learning techniques, focusing on Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. The study leverages the MIT-BIH Arrhythmia Database, a widely accepted benchmark dataset in cardiac research, to train and validate the models. Key topics include data preprocessing techniques such as noise reduction and feature extraction, the architecture of the hybrid CNN-LSTM model, and the analysis of RR intervals and Power Spectral Density (PSD) for enhanced arrhythmia detection. The experimental results demonstrate exceptional accuracy, sensitivity, and specificity, with the model achieving over 95% accuracy in classifying common arrhythmias. The chapter also compares the performance of the proposed model against existing approaches, highlighting its superior reliability and robustness. The conclusion emphasizes the potential impact of this technology on early diagnosis and treatment of cardiac conditions, particularly in underserved regions, and outlines future enhancements to further refine diagnostic accuracy.
This summary of the content was generated with the help of AI.
Abstract
Heart arrhythmia is considered a global health issue that requires early detection and contributes to the complexity of diagnosis and treatment. This paper introduces a deep learning approach that uses CNNs and LSTM networks to classify arrhythmias with high accuracy. Robust analysis is ensured by automating important procedures like signal preprocessing, feature extraction, and temporal pattern detection using combined CNNs for spatial feature extraction and LSTMs for temporal dependence tracking. Conventional approaches to arrhythmia detection involve physicians manually interpreting ECG readings, which is time-consuming and error-prone. With the advent of deep learning and AI, these limitations can be bypassed by automating the diagnosis process. While LSTMs capture the dynamic temporal patterns required for arrhythmia detection, CNNs are highly effective at detecting anomalies in waveform structures. Our model, trained on the MIT-BIH Arrhythmia dataset, achieved an accuracy of 95%, significantly outperforming traditional methods by 77%, with a precision of 91%, recall of 90%, and an F1-score of 90.5%. The primary reason for this initiative is the need for faster, more precise, and scalable diagnostic tools in heart health. Compared to conventional methods, modern AI techniques diagnose patients much more rapidly and accurately, thereby enhancing patient care and reducing the chances of potentially fatal outcomes such as stroke and sudden cardiac arrest. This method may thus revolutionize the identification and treatment of cardiac arrhythmias through AI-powered tools, providing a robust and scalable solution for real-world clinical applications.
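The reported precision, recall, and F1 figures are internally consistent, since F1 is the harmonic mean of precision and recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With the paper's precision (91%) and recall (90%), F1 works out to ~90.5%,
# matching the reported value.
f1 = f1_score(0.91, 0.90)
```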
Effect of Wavelet Filter Banks on Epileptic Seizure Detection
Aswini Kumar Samantaray, Amol D. Rahulkar, Satyajeet Sahoo
This chapter delves into the crucial role of wavelet filter banks in enhancing the accuracy of epileptic seizure detection through EEG signal analysis. It compares the performance of orthogonal wavelets, such as Daubechies, with bi-orthogonal wavelets, like Bior4.4, highlighting their distinct advantages in capturing seizure-related patterns. The study employs a dataset of EEG recordings from the University of Bonn, utilizing wavelet transform for feature extraction and support vector machines (SVM) for classification. Key findings reveal that bi-orthogonal wavelets, particularly Bior4.4, demonstrate superior accuracy, sensitivity, and specificity in seizure detection. The research also underscores the importance of selecting the appropriate wavelet filter bank based on the specific requirements of the application, whether it be computational efficiency or robust feature representation. Additionally, the study discusses the potential of wavelet filter banks to localize both transient and periodic seizure patterns, making them an attractive choice for clinical environments. The results contribute to the development of more reliable and accurate seizure detection systems, ultimately improving epilepsy management and patient care.
This summary of the content was generated with the help of AI.
Abstract
One of the most important tasks in the field of medical diagnostics is epileptic seizure detection, where we need more accurate and reliable approaches for the analysis of electroencephalogram (EEG) signals. The wavelet transform has become a versatile signal processing technique that allows multi-resolution analysis with both time and frequency information. In order to obtain robust detection accuracy, choosing suitable wavelet filter banks remains a critical step. This research explores the effect of orthogonal and bi-orthogonal wavelet filter banks on epileptic seizure detection. EEG signals from multiple publicly available datasets were decomposed at multiple levels using the discrete wavelet transform (DWT), and features such as energy, entropy, and variance were extracted. Performance was assessed using machine learning classifiers. The orthogonal wavelet Daubechies 4 (Daub4) gives accuracy, sensitivity, and specificity of 92.3%, 91.5%, and 93.8%, respectively, whereas the bi-orthogonal wavelet (Bior4.4) gives 94.8%, 93.2%, and 95.6%, respectively. The results thus show that the bi-orthogonal wavelets outperform the orthogonal wavelets in terms of signal-retention characteristics and classification performance. These advantages stem from their symmetry and edge-preserving properties, both of which are crucial for identifying seizure-related patterns. By emphasizing the need for ideal wavelet filter banks in improved seizure detection systems, the results push forward more accurate and efficient diagnostic instruments for use in clinical settings.
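Wavelet-domain features of the kind used here can be sketched with a single-level Haar DWT (the study uses Daub4 and Bior4.4, which a library such as PyWavelets would provide; Haar is used below only because its filter bank fits in two lines):

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients; len(signal) must be even."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

def band_features(coeffs):
    """Energy, Shannon entropy of normalised squared coefficients, variance:
    the three features extracted from each sub-band in the abstract."""
    energy = sum(c * c for c in coeffs)
    mean = sum(coeffs) / len(coeffs)
    variance = sum((c - mean) ** 2 for c in coeffs) / len(coeffs)
    probs = [c * c / energy for c in coeffs if c != 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return energy, entropy, variance
```

Because the Haar filter bank is orthonormal, the sub-band energies sum to the signal energy, which is exactly the signal-retention property the abstract compares across wavelet families.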
Feature-Driven Energy Efficiency Modeling with Advanced Machine Learning Techniques
J. Dhanalakshmi, D. Praveena Anjelin, A. Prabhu Chakkaravarthy
This chapter delves into the application of machine learning techniques to enhance energy efficiency in buildings. The study focuses on four key areas: the integration of big data analytics and machine learning in energy systems, a comparative analysis of machine learning models for predicting energy efficiency, the importance of feature selection and data preprocessing, and the results of time series analysis on heating and cooling efficiency. The findings reveal that ensemble methods like Random Forest and Gradient Boosting outperform linear models, with Gradient Boosting achieving the highest accuracy. The time series analysis uncovers significant trends and seasonal patterns, providing valuable insights for optimizing energy systems. This detailed examination offers a comprehensive overview of how advanced machine learning techniques can revolutionize energy efficiency modeling.
This summary of the content was generated with the help of AI.
Abstract: Using the energy efficiency dataset, this study explores the application of machine learning (ML) methods in big data analytics for energy efficiency prediction. Along with target variables for heating and cooling energy efficiency, the dataset includes building energy consumption parameters such as wall area, roof area, and glazing area. To forecast energy efficiency outcomes, several ML methods were used: gradient boosting, random forests, support vector machines (SVM), and linear regression. Gradient Boosting obtained the best R2 score of 0.90, while Random Forests came in second with an R2 of 0.87. Despite being straightforward, linear regression had an R2 score of 0.74, while SVM reached 0.79. Gradient Boosting also outperformed the other methods in prediction accuracy, especially when handling intricate non-linear correlations between energy usage and building characteristics. Random Forests demonstrated greater stability and resilience to overfitting, particularly with high-dimensional data, while being marginally less accurate than Gradient Boosting. The investigation also highlighted the importance of feature selection, showing that adding pertinent features such as the buildings' surface areas and orientation angles greatly benefited Random Forests and Gradient Boosting. The results indicate that machine learning models, in particular ensemble techniques such as Random Forests and Gradient Boosting, can greatly enhance energy efficiency forecasts and offer valuable insights for optimizing energy consumption in building management systems. -
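The R2 scores quoted in this abstract are the coefficient of determination, 1 - SS_res / SS_tot. A minimal stdlib-Python sketch, using purely hypothetical heating-load targets and predictions (not the chapter's data), shows how two models would be compared on this metric:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1 - ss_res / ss_tot

# Hypothetical heating-load targets and two models' predictions (illustrative
# numbers only, not the study's results).
y_true = [15.5, 21.3, 28.7, 12.1, 33.4, 18.9]
pred_gb = [15.9, 20.8, 28.1, 12.8, 32.6, 19.5]   # tight fit, "Gradient Boosting"-like
pred_lr = [18.0, 19.5, 25.0, 15.0, 29.0, 21.0]   # rougher fit, "Linear Regression"-like

scores = {"gradient_boosting": r2_score(y_true, pred_gb),
          "linear_regression": r2_score(y_true, pred_lr)}
```

A model that predicts every target exactly scores 1.0, and one that always predicts the mean scores 0.0, which is why the ensemble methods' 0.87-0.90 range indicates a substantially closer fit than linear regression's 0.74.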
A Cross-Lingual and Culturally Adaptive Framework for Bilingual Image Captioning in Low-Resource Languages
V. Jothi Prakash, R. Balamurugan, S. Sudharsan, T. Sanjai, A. Ahamed Sameer
This chapter introduces the CABIC framework, a bilingual image captioning system designed to generate culturally enriched captions in English and Tamil. The framework addresses the challenges of low-resource languages like Tamil, which has unique linguistic and cultural nuances. Key topics include the cultural adaptation module, which enhances the quality and relevance of Tamil captions, and the attention mechanism that aligns visual and textual features. The chapter also discusses the TamilCOCO dataset, used for training and evaluating the model, and presents extensive quantitative and qualitative evaluations. The results show that CABIC outperforms state-of-the-art models, achieving notable improvements in metrics like BLEU-4, METEOR, CIDEr, SPICE, and a novel Cultural Relevance Score (CRS). The chapter concludes with a discussion of the framework's limitations and future directions, highlighting its potential for advancing multilingual and culturally aware AI systems.
This summary of the content was generated with the help of AI.
Abstract: Image captioning is a challenging task that involves generating descriptive textual content for visual inputs, and it becomes even more complex when extended to low-resource languages. Tamil, a Dravidian language spoken by millions, is underrepresented in current image captioning datasets and frameworks, which limits the development of AI systems tailored for this community. To address this gap, we propose the Culturally Adaptive Bilingual Image Captioning (CABIC) framework, which generates captions in English and Tamil with an emphasis on cultural relevance. The proposed framework integrates Inception V3 for visual feature extraction, XLM-R for multilingual textual embeddings, and an attention mechanism for effective multimodal fusion. Additionally, a novel cultural adaptation module refines Tamil captions to incorporate idiomatic expressions and contextual nuances. Evaluations on the TamilCOCO dataset demonstrate that CABIC outperforms state-of-the-art models, achieving a BLEU-4 score of 35.6, METEOR of 29.4, CIDEr of 120.1, SPICE of 20.3, and a Cultural Relevance Score (CRS) of 85.6. Qualitative analysis highlights CABIC’s ability to generate culturally enriched captions while identifying areas for improvement. This research bridges the gap in Tamil image captioning, offering a robust framework for bilingual and culturally aware AI systems and paving the way for future advancements in low-resource language technologies. -
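The attention-based multimodal fusion mentioned in this abstract can be illustrated with a toy scaled dot-product attention step, in which a textual query vector attends over image-region features. All embeddings below are hypothetical 4-dimensional toy vectors, not actual outputs of Inception V3 or XLM-R:

```python
import math

def attend(query, keys, values):
    """Scaled dot-product attention: a caption decoder's query vector attends
    over region-level visual features (keys == values in this simple fusion)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]          # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: attention-weighted sum of the region features.
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return weights, context

# Hypothetical embeddings: one textual query, three image-region features.
query = [0.9, 0.1, 0.0, 0.3]
regions = [[1.0, 0.0, 0.1, 0.2],     # region semantically closest to the query
           [0.0, 1.0, 0.9, 0.0],
           [0.1, 0.2, 0.0, 1.0]]
weights, context = attend(query, regions, regions)
```

The softmax weights sum to one, and the region whose key aligns best with the query dominates the context vector, which is how the decoder "looks at" the relevant part of the image while emitting each word.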
Satellite Image Segmentation System
Aditya Kumbhar, Prem Deshmukh, Pranav Bhiungade, P. S. Agnihotri
This chapter delves into the development of an interactive satellite image segmentation system designed to analyze and visualize land cover changes. The system retrieves historical satellite images using Google Earth Engine's API and applies advanced machine learning-based segmentation techniques, including a customized U-Net architecture for precise segmentation. The literature survey highlights the transformative role of Google Earth Engine in remote sensing, its extensive applications, and its limitations. The system's architecture is thoroughly explained, covering image acquisition, preprocessing, feature extraction, image segmentation, and change detection. The chapter also compares various machine learning algorithms used for satellite image segmentation and discusses their performance metrics. The system's potential applications include monitoring deforestation, assessing riverbank erosion, urban planning, and agricultural monitoring. The conclusion emphasizes the system's accuracy and its potential to exceed existing benchmarks in land cover mapping.
This summary of the content was generated with the help of AI.
Abstract: This project presents an interactive satellite image segmentation system designed to analyze historical land cover changes across specified locations. Utilizing Google Earth Engine’s Image API for efficient satellite image retrieval, the system offers a seamless interface to explore historical geographical data. The project revolves around machine learning techniques, employing a customized U-Net neural network for accurate image segmentation. This system visualizes land cover change trends through dynamic animations and charts, offering a tool for observing environmental transformations over time. -
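The change-detection step this abstract describes reduces, at its simplest, to counting per-pixel class transitions between two segmentation masks from different dates. A minimal sketch, with hypothetical 3x4 class masks standing in for U-Net outputs:

```python
from collections import Counter

def change_matrix(mask_before, mask_after):
    """Count per-pixel class transitions between two segmentation masks
    (e.g. forest -> urban), the core of simple land-cover change detection."""
    assert len(mask_before) == len(mask_after)
    transitions = Counter()
    for b_row, a_row in zip(mask_before, mask_after):
        for b, a in zip(b_row, a_row):
            transitions[(b, a)] += 1
    return transitions

# Hypothetical 3x4 class masks (F=forest, W=water, U=urban) for two dates.
before = [["F", "F", "W", "W"],
          ["F", "F", "W", "W"],
          ["F", "F", "F", "W"]]
after  = [["F", "U", "W", "W"],
          ["U", "U", "W", "W"],
          ["F", "F", "F", "W"]]

changes = change_matrix(before, after)
changed = sum(n for (b, a), n in changes.items() if b != a)  # pixels that changed class
```

The resulting transition counts are exactly what drives the trend charts and animations the abstract mentions: the off-diagonal entries of the matrix quantify each kind of land cover change between the two dates.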
VSecureSphere: Developing Virtual Lab for Simulating Safe Environment for Multiple Cyber-Attack Patterns
Shivani Zagade, Mrunal Shardul, Siddhi Patole, Vishal Badgujar, Sneha Dalvi, Kiran Deshpande
This chapter explores the development and implementation of a comprehensive virtual lab for cybersecurity education. The lab is designed to provide an immersive, hands-on learning experience that bridges the gap between theoretical knowledge and practical application. Key topics include the use of Docker and Kubernetes for creating isolated, reproducible environments, the integration of noVNC for web-based access, and the implementation of structured documentation and analytics for performance evaluation. The lab offers a range of cybersecurity modules, each with clear objectives, theoretical frameworks, and practical exercises that replicate real-world threats and vulnerabilities. Users can interact with state-of-the-art tools and technologies, gaining valuable experience without the risks associated with real-world experimentation. The lab's evaluation tools monitor performance and provide constructive feedback, enabling learners to measure their progress over time. The chapter also discusses the scalability and security considerations of the virtual lab, as well as its potential to enhance cybersecurity education and professional development.
This summary of the content was generated with the help of AI.
Abstract: As digital technology accelerates, ever more advanced cyber threats emerge with each passing second, making hands-on cyber training indispensable for students and professionals alike. The Cybersecurity Virtual Lab is an interactive web-based platform that provides structured practical training within containerized environments. Unlike platforms such as TryHackMe and Hack The Box, this lab gamifies each scenario with a dual-system interface: users experience the attacker role in one space and the victim role in another, which greatly deepens their real-world understanding of cybersecurity threats. It offers guided simulations, real-world attack scenarios, and interactive quizzes, maintaining an engaging and purposeful learning environment. Aimed primarily at students and IT professionals, the lab engages learners of all skill levels. A three-month survey of nearly 100 respondents shows 35% higher engagement, 40% more interaction, and 30% better skill retention than other methods. Built on Kubernetes for scalability, the lab provides a seamless experience even under heavy user loads. The recommended minimum specifications are an 8-core CPU, 16 GB RAM, and SSD storage, while the required network capacity depends on the concurrent user base. By bridging the gap between theoretical knowledge and hands-on practice, the Cybersecurity Virtual Lab delivers an effective, scalable, and realistic cybersecurity training experience. -
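The per-session isolation that the dual-role, Kubernetes-backed design implies might be expressed roughly as the following manifest sketch. This is a hypothetical illustration: the namespace name, image names, and labels are assumptions, not the lab's actual configuration.

```yaml
# Hypothetical sketch: one attacker pod per learner session, isolated in its
# own namespace, with a NetworkPolicy admitting victim traffic only from the
# attacker. Image names and labels are illustrative assumptions.
apiVersion: v1
kind: Namespace
metadata:
  name: lab-session-42          # one namespace per learner session
---
apiVersion: v1
kind: Pod
metadata:
  name: attacker
  namespace: lab-session-42
  labels: {role: attacker}
spec:
  containers:
    - name: attacker
      image: example/lab-attacker:latest   # assumed image name
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: victim-only-from-attacker
  namespace: lab-session-42
spec:
  podSelector: {matchLabels: {role: victim}}
  ingress:
    - from:
        - podSelector: {matchLabels: {role: attacker}}
```

Scoping each session to its own namespace is one plausible way to get the reproducible, mutually isolated attacker/victim environments the abstract describes while still scaling horizontally under load.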
Hybrid GCN-CNN Model for Robust Deepfake Detection
Shivam Singh Srinet, Jaytrilok Choudhary, Manish Pandey, Dhirendra Pratap Singh, Vandana Shakya
This chapter explores the development and implementation of a hybrid GCN-CNN model for deepfake detection, focusing on the integration of Graph Convolutional Networks (GCNs) and Convolutional Neural Networks (CNNs). The model leverages the strengths of both architectures to capture pixel-level details and structural nuances in facial landmarks, enhancing the detection of manipulated images. The chapter begins with an overview of the challenges in deepfake detection and the limitations of existing CNN-based methods. It then introduces the hybrid model, detailing the data preprocessing steps, including facial landmark extraction and graph construction using Delaunay triangulation. The architecture of the hybrid model is explained, highlighting the roles of the GCN and CNN branches, as well as the fusion layer that combines their outputs. The training procedure and experimental results are presented, demonstrating the model's high accuracy and robustness. The chapter concludes with a discussion of future research directions, including the integration of temporal features for video deepfake detection and the use of more sophisticated graph-based representations.
This summary of the content was generated with the help of AI.
Abstract: Over the past few years, the proliferation of deepfake content, digitally manipulated media that convincingly alters appearances or voices, has created considerable challenges in social, ethical, and cybersecurity realms. Existing deepfake detection approaches, based mostly on Convolutional Neural Networks (CNNs), tend to have difficulty capturing fine details of subtle facial irregularities and spatial relationships. To overcome this, we introduce a hybrid model that combines Graph Convolutional Networks (GCNs) with CNNs. Our model uses GCNs to model facial landmarks as graphs, preserving relational information, while CNNs concentrate on pixel-level features in images. By combining the outputs of both branches, we provide a strong methodology that takes advantage of their respective strengths. We test our approach on a deepfake dataset from Kaggle, demonstrating better accuracy than conventional CNN-based methods, especially in detecting subtle manipulations. This work provides a novel hybrid system that improves the accuracy of deepfake detection.
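A single GCN propagation step over a landmark graph, the core operation of the GCN branch described above, can be sketched as H' = ReLU(A_hat H W). The graph, features, and weight matrix below are toy values chosen for illustration, and row normalization of the adjacency is used here as a simplification of the symmetric normalization in the standard GCN formulation:

```python
def matmul(A, B):
    """Plain nested-list matrix multiplication."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def gcn_layer(adj, features, weights):
    """One GCN propagation step: H' = ReLU(A_hat . H . W), where A_hat is the
    adjacency with self-loops, row-normalized so each node averages over its
    neighborhood (a simplification of the symmetric D^-1/2 A D^-1/2 form)."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    a_hat = [[v / sum(row) for v in row] for row in a_hat]
    h = matmul(matmul(a_hat, features), weights)
    return [[max(0.0, v) for v in row] for row in h]   # ReLU

# Hypothetical 4-landmark graph (edges as from a Delaunay-style triangulation)
# with 2-dim input features and a fixed 2x2 weight matrix for illustration.
adj = [[0, 1, 1, 0],
       [1, 0, 1, 1],
       [1, 1, 0, 1],
       [0, 1, 1, 0]]
feats = [[0.2, 0.8], [0.5, 0.5], [0.9, 0.1], [0.3, 0.7]]
W = [[1.0, -0.5], [0.5, 1.0]]
out = gcn_layer(adj, feats, W)
```

Each output row mixes a landmark's features with those of its graph neighbors, which is how the GCN branch captures the relational structure of the face; the hybrid model then fuses these node embeddings with the CNN's pixel-level features.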
- Title
- Power Engineering and Intelligent Systems
- Editors
-
Vivek Shrivastava
Jagdish Chand Bansal
Bijaya Ketan Panigrahi
- Copyright Year
- 2026
- Publisher
- Springer Nature Singapore
- Electronic ISBN
- 978-981-9697-24-3
- Print ISBN
- 978-981-9697-23-6
- DOI
- https://doi.org/10.1007/978-981-96-9724-3
PDF files of this book have been created in accordance with the PDF/UA-1 standard to enhance accessibility, including screen reader support, described non-text content (images, graphs), bookmarks for easy navigation, keyboard-friendly links and forms and searchable, selectable text. We recognize the importance of accessibility, and we welcome queries about accessibility for any of our products. If you have a question or an access need, please get in touch with us at accessibilitysupport@springernature.com.