
2025 | Book

Smart Systems and Wireless Communication

Proceedings of SSWC 2024

Edited by: Rituparna Chaki, Agostino Cortesi, Suparna DasGupta, Soumyabrata Saha

Publisher: Springer Nature Singapore

Book series: Smart Innovation, Systems and Technologies


About this book

This volume is a collection of high-quality research papers presented at the International Conference on Smart Systems and Wireless Communication (SSWC 2024), organized by the Department of Information Technology, JIS College of Engineering, Kalyani, West Bengal, India, from 29 to 30 November 2024. The book focuses on smart cities, smart farming, smart healthcare, wireless network communication, the Internet of Things, cyber-physical systems, human-computer interaction, big data and data analytics, high-performance computing, requirements engineering, analysis and verification techniques, security systems, distributed systems, biometrics, bioinformatics, robotic process automation, and machine learning.

Table of Contents

Frontmatter

Algorithms

Frontmatter
Chapter 1. Person-Independent Hand Gesture Recognition Using MediaPipe and Multi-layer Perceptron

Hand gesture recognition has become increasingly essential, not only for facilitating communication for hearing-impaired individuals but also for implementing automation that minimizes human contact with surfaces, especially in the post-pandemic era. However, the implementation of accurate recognition systems faces several challenges due to variations in human hand size, varying distances from the camera, and hand orientations, making it difficult to establish a direct correspondence between reference gestures and target gestures. To address these challenges, this research employs a machine learning (ML)-based technique that emulates the human brain’s ability to identify gestures in a user-independent manner, regardless of distance, palm size, and orientation. In this work, the pre-trained MediaPipe hand model is used to extract hand landmark points from the hand data corpus. These landmark points, along with hand-face position, are utilized to develop feature vectors for the classification process. After normalization, these features exhibit a high correlation within the same category of hand gestures. Initially, a spot-checking exercise was carried out to identify the machine learning model that can most accurately recognize gestures from the normalized feature vectors. The spot-checking process evaluated eight different machine learning models on a subset of the available hand gestures from the corpus. The multi-layer perceptron (MLP) achieved 100% classification accuracy, which motivated the development of the final model using the MLP classification algorithm with 25 different hand gestures. The final model achieved a 99.62% accuracy level when tested with data collected from a larger number of hand profiles.
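
As a rough illustration of the pipeline described above, the sketch below extracts MediaPipe hand landmarks, normalizes them for translation and scale, and feeds them to a scikit-learn MLP; the normalization scheme, feature layout, and file paths are illustrative assumptions, not the authors' exact design.

```python
# Illustrative sketch: MediaPipe hand landmarks -> normalized features -> MLP classifier.
# Assumes mediapipe, opencv-python and scikit-learn are installed; paths are hypothetical.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neural_network import MLPClassifier

mp_hands = mp.solutions.hands

def extract_features(image_path):
    """Return a translation/scale-normalized 21x2 landmark vector, or None if no hand is found."""
    image = cv2.imread(image_path)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    pts = np.array([[lm.x, lm.y] for lm in result.multi_hand_landmarks[0].landmark])
    pts -= pts[0]                                        # translate: wrist becomes the origin
    pts /= (np.linalg.norm(pts, axis=1).max() + 1e-8)    # scale by the farthest landmark
    return pts.flatten()                                 # 42-dimensional feature vector

# X: stacked feature vectors, y: gesture labels, built from a labeled corpus
# clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(X, y)
# pred = clf.predict([extract_features("gesture.jpg")])
```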

Subhajyoti Barman, Sejuti Majumdar
Chapter 2. Bone Tumor Detection and Classification Using GoogleNet Algorithm

Tumors are aberrant tissue growths that can develop in any body organ. Numerous varieties of human tumors have been discovered in recent years, including brain, bone, and lung. Image processing is essential in tumor analysis and categorization, and medical image processing is an essential field of study since its results are utilized to improve health outcomes. When cells within the bone divide uncontrollably, they produce lumps or masses of aberrant tissue. There are numerous types of bone tumors, each with its own set of characteristics, and they are classified as noncancerous (benign) or cancerous (malignant). Our project primarily focuses on image segmentation and classification of skeletal images. Initially, the Fast Non-Local Means (FNLM) filter is used for preprocessing. For the segmentation procedure, a sophisticated algorithm is employed to distinguish malignant nodules from lung images. In feature extraction, distinct features are extracted using a gray-level co-occurrence matrix (GLCM). The GoogleNet algorithm is then used to rank the selected features. The suggested classifier is capable of detecting bone cancer with high accuracy. The proposed approach was built in MATLAB, utilizing the dataset from the Bone Image database. Various performance measures are assessed and compared with existing classifiers and cutting-edge algorithms. Simulation results show that the devised scheme has a high classification accuracy (99%) compared to previous methods.
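
For readers who want a feel for GLCM texture features, the following Python sketch (the chapter itself works in MATLAB) computes a handful of standard GLCM properties with scikit-image; the image path and chosen properties are placeholders, not the paper's feature set.

```python
# Illustrative Python analogue of GLCM texture-feature extraction.
# Assumes scikit-image is installed; "bone_scan.png" is a hypothetical input image.
import numpy as np
from skimage import io
from skimage.feature import graycomatrix, graycoprops

img = (io.imread("bone_scan.png", as_gray=True) * 255).astype(np.uint8)
glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # texture features that a downstream classifier could consume
```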

N. Dharmaraj, A. Suresh Kumar, J. Lakshmikanth, R. T. Charulatha, R. Murugasami, M. A. Amarnath
Chapter 3. A Holistic Approach to the Prediction of Football Matches Using Machine Learning

Football, the world’s most popular game, piques the interest of billions across the globe, and so the prediction of football matches intrigues everyone from the fans to the managers themselves. The prediction of matches is not a simple task, as the game is inherently chaotic. In this paper, we test various machine learning models, trained on historical match data of both sides, considering their attacking, midfield, and defensive abilities, along with data on the player attributes of both sides, to predict the outcomes of matches. We have chosen the most relevant features from our dataset after processing it and used them to train several base classifier models. Using the trained base models, we have built a modified heterogeneous ensemble classifier, with the aim of maximally utilizing the strengths of its constituent models, and we have seen that our model shows an improvement over some of the baseline work in this domain.
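
A minimal sketch of a heterogeneous soft-voting ensemble over mixed base classifiers is shown below; the base models and the synthetic data are stand-ins for the paper's selected match and player features, not its actual ensemble design.

```python
# Sketch: heterogeneous soft-voting ensemble over trained base classifiers (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for processed match/player features; 3 classes ~ home win / draw / away win.
X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           n_classes=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=300)),
        ("svm", SVC(probability=True)),
    ],
    voting="soft",   # combine the predicted class probabilities of the base models
)
ensemble.fit(X_train, y_train)
print("test accuracy:", ensemble.score(X_test, y_test))
```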

Rishav Chanda, Adriz Chanda, Jayeeta Chanda
Chapter 4. Course Recommendation System Design Using Advanced AI

Recommender systems produce a compilation of all available content, analyze it according to content-moderation criteria, and then reduce the list to items that users seem most inclined to be interested in. The goal is to develop a course recommendation system that considers the demands of the users. The recommendation system is useful since it displays only the courses that users want from a large range of online courses. In this study, we use unsupervised learning approaches such as similarity measures, K-means, and Principal Component Analysis (PCA). It is a content-based recommendation system which groups users with similar interests together and suggests the most popular courses to the user based on their user profiles. The dataset used for training the model contains courses covering the most engaged genres, such as machine learning, data science, big data, and so on.
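
The sketch below illustrates the kind of unsupervised pipeline mentioned above: PCA for dimensionality reduction, K-means to group similar profiles, and cosine similarity to rank candidates within a cluster; the random profile matrix is a stand-in for real user and course data.

```python
# Illustrative pipeline: PCA -> K-means clustering -> cosine-similarity ranking within a cluster.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
profiles = rng.random((200, 50))            # stand-in for user interest / course feature vectors

reduced = PCA(n_components=10).fit_transform(profiles)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(reduced)

user = 0
peers = reduced[labels == labels[user]]                    # users in the same interest cluster
sims = cosine_similarity(reduced[user:user + 1], peers)    # similarity to cluster members
print("nearest peers:", np.argsort(-sims[0])[:5])          # their popular courses would be recommended
```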

Monalisa Dey, Atreyee Bhattacharjee, Sagnik Chakraborty
Chapter 5. A Novel Channel Selection and Classification in Motor Imagery for Brain Computer Interface Using Meta Heuristic Algorithms

A vital input for a brain-driven task in BCI (Brain Computer Interface) applications is the motor imagery (MI) signal from the brain. Users of BCI systems can operate external equipment with their brain activity, using motor imagery as a control method. Numerous electroencephalography (EEG) channels are used to gather nerve impulses from the brain; EEG is the most prevalent input for Brain Computer Interface systems, since it is minimally invasive, flexible, and low in price. The computational overhead is increased by the high-dimensional data of multichannel BCI systems, which slows processing and raises cost. EEG data are regularly gathered from over 100 different brain regions; therefore, it is essential to use channel selection algorithms to choose the ideal channels for a given situation. The primary objective of channel selection in EEG data analysis is to reduce computational complexity, improve classification precision by eliminating overfitting, and save setup time. In this study, we propose a nature-inspired remora optimization technique to lessen the computational load caused by multiple channels. Using predetermined criteria, several channel selection evaluation techniques, both classification-based and otherwise, were used to extract the appropriate channel subsets. Classification procedures were then applied to determine the best classification accuracy. Three publicly available EEG datasets (BCI Competition IV-1, IV-2a, and Competition III-3a) are used to validate the experiment, and the approach results in superior classification accuracy.

Nagaraju Devarakonda, Raviteja Kamarajugadda, Sarvani Anandarao
Chapter 6. Ahead of the Curve: Predictive Maintenance Solutions for Modern Automobiles

Predictive maintenance in the automotive sector has received a lot of attention because of its potential to improve operating efficiency and lower maintenance costs. This paper presents an approach for predictive maintenance that focuses on engine condition prediction and battery remaining useful life (RUL) prediction. The work intends to develop precise predictive models to detect anomalies in engine health and anticipate battery degradation, allowing for proactive maintenance interventions. The research entails gathering and analyzing data from engine sensors and battery characteristics in order to train and test various machine learning models. XGBoost was the best algorithm for forecasting engine health, with an accuracy of 0.66, while Bagging Regressor was the best regression technique for engine prediction, with an MSE of 4443.04 and RMSE of 66.66. These models are highly accurate and efficient, demonstrating their potential for real-world use in automobile maintenance.
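
A compact sketch of the two tasks named above, an XGBoost classifier for engine condition and a Bagging regressor for battery RUL, is given below on synthetic data; real inputs would be engine-sensor readings and battery characteristics.

```python
# Illustrative sketch of the two predictive-maintenance tasks on synthetic data.
from xgboost import XGBClassifier
from sklearn.ensemble import BaggingRegressor
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error

# Engine-condition classification (healthy vs. faulty).
Xc, yc = make_classification(n_samples=1000, n_features=8, random_state=0)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4).fit(Xc_tr, yc_tr)
print("engine-health accuracy:", accuracy_score(yc_te, clf.predict(Xc_te)))

# Battery remaining-useful-life regression.
Xr, yr = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
reg = BaggingRegressor(n_estimators=50).fit(Xr_tr, yr_tr)
mse = mean_squared_error(yr_te, reg.predict(Xr_te))
print("RUL MSE:", mse, "RMSE:", mse ** 0.5)
```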

Aishwarya, Ananya Barath, T. Satish Kumar, L. Khushi, N. Lavanya
Chapter 7. Real-Time Traffic Monitoring Using Roadside Digital Camera

The rise of private vehicles (e.g., cars) makes road traffic more congested. To resolve this complex situation without manual road traffic monitoring, computer vision can be applied. Dedicated road surveillance cameras and sensors are commonly used as helping devices, but both are costly. In this work, we instead use a roadside digital camera for road traffic monitoring. This article mainly focuses on vehicle detection along with speed estimation and license plate detection using the same camera. The proposed method uses the Faster RCNN neural network technique for speed estimation, and for license plate detection we use an easy optical character recognition (OCR) system to extract the alphanumeric characters. We trained and tested our model on three different scenarios and over 30 vehicles and obtained 93.7% accuracy for speed estimation and 87.5% for license plate detection.

Trinanjan Daw, Sandipan Majumdar, Piyali Bagchi Khatua, Anindita Sarkar Mondal
Chapter 8. A Gender Classification for Scylla olivacea (Indian Crabs) Using Deep Convolution Neural Network

The Indian coastal belts boast several different types of crabs, among which Scylla olivacea (the orange mud crab) is among the most commonly occurring. Crabs are an intrinsic part of the marine ecosystem, and it is important to maintain a balance of male and female varieties to conserve marine biodiversity. Crabs are also part of many gastronomic fares all over the world; however, due to the difference in taste between male and female crabs, it is important to identify them precisely before they are sold. A study of existing works in this domain revealed that not many works have focused on Indian crabs. In this paper, a new CNN-based method is employed to classify the gender of the crab. First, the dataset was created using various methods of augmentation. Next, a pretrained model was chosen to extract the crab image from the entire image, which contained a lot of background noise. The augmentation was performed on the extracted images to enrich the dataset. A CNN model was then trained on these images. The results showed 99.6% accuracy on the validation set and 98.9% accuracy on the test set, and an F1 score of 0.992 was achieved.

Pratim Guha, Ankita Bhowmik, Rituparna Chaki

Deep Learning and NLP

Frontmatter
Chapter 9. HARQ Soft Combining Using Bidirectional LSTMs

Hybrid Automatic Repeat reQuest is used in modern wireless data communication to integrate both Automatic Repeat reQuest and high-rate Forward Error Correction mechanisms to enhance the reliability of data transmission. Unlike traditional ARQ, where an error-ridden frame is discarded upon reception, HARQ temporarily stores the erroneous frame in a buffer. When the re-transmission of the same frame occurs, these two frames are combined to generate a new frame, trying to minimize errors. This is HARQ with Soft Combining. Existing methods like Chase Combining (Type-I) and Incremental Redundancy (Type-II and Type-III) implement Log-Likelihood Ratio and Maximum Ratio Combining to combine two erroneous frames. This paper uses a Bidirectional Long Short-Term Memory model to combine two frames with high channel noise errors. This paper introduces the BiLSTM model, which aims to reduce the Bit Error Rate and provides an approach for integrating this model into the existing HARQ structure.
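
The sketch below shows one plausible shape for such a combiner, assuming Keras: two noisy soft-value copies of a frame go in and per-bit estimates of the transmitted frame come out; the frame length, noise model, and layer sizes are illustrative choices rather than the chapter's configuration.

```python
# Illustrative BiLSTM soft combiner: two noisy received copies -> per-bit estimates of the frame.
import numpy as np
import tensorflow as tf

FRAME_LEN = 64

def make_batch(n):
    bits = np.random.randint(0, 2, size=(n, FRAME_LEN)).astype("float32")
    # BPSK-like soft values from two independent transmissions with additive noise.
    rx1 = (2 * bits - 1) + np.random.normal(0, 0.8, bits.shape)
    rx2 = (2 * bits - 1) + np.random.normal(0, 0.8, bits.shape)
    x = np.stack([rx1, rx2], axis=-1)              # shape: (n, FRAME_LEN, 2)
    return x.astype("float32"), bits

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FRAME_LEN, 2)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x_train, y_train = make_batch(5000)
model.fit(x_train, y_train[..., None], epochs=3, batch_size=64, verbose=0)

x_test, y_test = make_batch(1000)
ber = np.mean((model.predict(x_test, verbose=0) > 0.5).astype(int).squeeze(-1) != y_test)
print("bit error rate after combining:", ber)
```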

Adithya S. Ubaradka, Hemang J. Jamadagni, Rohit Sunil, B. R. Chandavarkar
Chapter 10. Emotion Recognition System Using Indian Music for Therapeutic Cause

The therapeutic uses of music have long been appreciated and refined by researchers, as its effects on relaxation as well as recreation are vast. Through its ability to regulate our emotions, music aids in returning the body to its original vibration. A song is the result of the interplay of language, melody, and the singer’s voice. The listener’s emotional state is influenced by the emotions that a singer conveys through their voice, including happiness, sorrow, worry, calm, or exhaustion. In this work, Indian music is examined and a music information retrieval approach is initiated in order to suggest a therapeutic system based on identifying and detecting Indian music. Utilising Music Information Retrieval is an effective way to examine many aspects of a piece of music. This approach studies various aspects of music and derives a therapeutic purpose through categorisation.

Paromita Das, Somsubhra Gupta, Biswarup Neogi
Chapter 11. Exploring the Extractive Method of Text Summarization

Collecting the main points from different documents or from long paragraphs and summarizing them is quite a tough and tedious task. Extractive text summarization is a technique which provides a brief summary by extracting significant sentences and main features from a particular text, document, or paragraph. In this paper, a comparative study and a literature survey have been performed on extractive learning-based text summarization, which includes a classification of different algorithms, techniques for comparing current models with traditional models, and an analysis, based on these criteria, of which method performs best. Text summarization using Natural Language Processing (NLP) is indeed a valuable tool for organizations dealing with vast amounts of customer feedback and data. It helps in extracting meaningful insights from large volumes of text, making it easier for people to understand the semantic relationships behind that text. This survey mainly focuses on a wide range of topics, from feature representation to sentence selection, and describes how to generate a summary using machine learning models, graph methods, frequency-driven approaches, etc. The overall survey will help in effectively and easily handling large sets of data when building effective Natural Language Processing applications.
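
As a concrete instance of the frequency-driven approaches surveyed, the following minimal extractive summarizer scores sentences by normalized word frequency using NLTK; the scoring scheme is deliberately simple and illustrative, not one of the surveyed systems.

```python
# Minimal frequency-driven extractive summarizer: score sentences by normalized word frequency.
import heapq
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

def summarize(text, n_sentences=2):
    stops = set(stopwords.words("english"))
    words = [w.lower() for w in word_tokenize(text) if w.isalpha() and w.lower() not in stops]
    freq = nltk.FreqDist(words)
    max_f = max(freq.values()) if freq else 1
    scores = {}
    for sent in sent_tokenize(text):
        scores[sent] = sum(freq.get(w.lower(), 0) / max_f for w in word_tokenize(sent))
    return " ".join(heapq.nlargest(n_sentences, scores, key=scores.get))

print(summarize("Text summarization condenses documents. Extractive methods pick key sentences. "
                "Abstractive methods generate new sentences. Frequency counts often guide extraction."))
```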

Mehuli Dam, Sumanta Roy, Darothi Sarkar
Chapter 12. Text-Based Sentiment Analysis: Experimentation with CNN and Basic Machine-Learning Approaches

Sentiment analysis, a burgeoning sector within natural language processing (NLP), plays a crucial role in discerning and perceiving attitudes, emotions, and opinions expressed in the form of textual data on social media platforms. With the growing proliferation of social networks, sentiment analysis has become indispensable for governments, firms, and researchers, providing beneficial insights into customer feedback and public perception. Over the years, Convolutional Neural Networks (CNNs) have emerged as one of the most influential approaches for text classification in NLP. In this article, we compare a Deep Neural Network (DNN), namely a CNN, with various basic supervised machine-learning (ML) approaches, evaluated on the IMDB Movie Review Dataset. We analyze the pros and cons of each approach and their efficiency in analyzing sentiments, in order to identify a prolific and reliable model for better text classification.

Umang Kumar Agrawal, Debashreet Das, B. V. Ramana, Nibedan Panda
Chapter 13. A Comparative Study of Bitcoin Price Prediction Using Various Deep Learning Models

Predictive modeling approaches have garnered substantial interest for predicting price fluctuations of cryptocurrencies, especially Bitcoin, due to their volatile nature. Deep learning models in particular have become extremely effective instruments for financial forecasting in recent times. To predict Bitcoin prices, this paper compares the performance of convolutional neural networks (CNNs) such as VGG16 and VGG19, the Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM). As input features, the study makes use of previous Bitcoin price data and pertinent market indicators. Recurrent neural networks, such as LSTM and GRU, are popular choices for time series forecasting tasks because they perform well in sequential data analysis. By using suitable pre-processing techniques, VGG16 and VGG19, which were originally created for image classification, can also interpret temporal sequences of price data. The temporal dependencies of the Bitcoin price time series are captured using LSTM and GRU, which are well known for their effective modeling of sequential data. A variety of metrics, including MAE and MSE, are used to evaluate each model’s behavior. The study also looks at each algorithm’s computational effectiveness and training duration. The findings of this study contribute to the growing body of literature on cryptocurrency price prediction by offering a comprehensive comparison of LSTM, GRU, VGG16, and VGG19. The results provide insightful guidance for researchers and practitioners who want to use machine learning methods to predict Bitcoin prices and navigate the ever-changing cryptocurrency marketplaces.
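
The LSTM branch of such a comparison might look like the sketch below, where sliding windows of past prices predict the next value; the synthetic series, window length, and network size are placeholders for the chapter's real setup.

```python
# Illustrative LSTM forecaster: sliding windows of past prices predict the next value.
import numpy as np
import tensorflow as tf

prices = np.cumsum(np.random.normal(0, 1, 2000)) + 100.0   # stand-in for historical BTC prices
window = 30

def make_windows(series, w):
    x = np.stack([series[i:i + w] for i in range(len(series) - w)])
    y = series[w:]
    return x[..., None].astype("float32"), y.astype("float32")

x, y = make_windows((prices - prices.mean()) / prices.std(), window)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(x[:-200], y[:-200], epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x[-200:], y[-200:], verbose=0))  # [MSE, MAE] on the held-out tail
```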

Ira Nath, Anik Sarkar, Sumit Das, Arpan Ghosh, Manoj Sarkar, Dharmpal Singh, Atanu Misra
Chapter 14. Detection of Brain Tumor Using U-Net Model Approach

This study introduces an innovative deep learning methodology leveraging the U-Net framework for medical image segmentation and lesion detection in brain tumors. The U-Net architecture contains encoder and decoder blocks which are capable of capturing and propagating feature information. In this paper, for brain tumor segmentation, a U-Net Convolutional Neural Network (CNN) is employed, utilizing multimodal magnetic resonance imaging (MRI) data and skip connections for precise tumor localization and segmentation. Through comprehensive experiments, we show that our method outperforms traditional techniques and current CNN models for medical image segmentation and lesion detection in brain tumors, and can perform the task with higher accuracy. This tailored U-Net architecture demonstrates the potential for advancing medical image analysis, enabling precise segmentation and lesion detection, ultimately leading to improved patient care and treatment outcomes.
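
A compact U-Net sketch with an encoder, bottleneck, and decoder joined by skip connections is shown below; the depth, filter counts, and input size are illustrative only and not the paper's exact architecture.

```python
# Compact U-Net sketch for binary tumor-mask segmentation (illustrative dimensions).
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(128, 128, 1)):
    inputs = layers.Input(input_shape)
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                                       # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)             # skip connection from encoder
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)       # per-pixel tumor probability
    return tf.keras.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```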

Arunima Patra, Krishnangshu Paul, Prithwineel Paul
Chapter 15. A Survey on Recent Advancements in Neural Machine Translation

Language, being the most important tool of communication, has encouraged researchers to solve the issues related to cross-language communication. As a result, neural machine translation (NMT) has become a highly researched matter, and significant development has been made in the domain recently. This paper gives an overview of the LSTM-based and the transformer-based encoder-decoder neural architectures in MT, which are the most predominant ones in recent times. The concept of attention concerning NMT, which allows selective focus on some parts of the sentence to improve NMT, has also been discussed. Though NMT has worked well for high-resource languages, translating low-resource languages remains among the major challenges in machine translation, the reason being the unavailability of an appropriate corpus for such languages. In this context, the transfer learning approach has shown notable results. The evolution of the transfer learning approach concerning machine translation has also been explored.

Soumi Datta, Deena Joseph Wilson, Sainik Kumar Mahata
Chapter 16. Image Captioning Using VGG-16 and VGG-19: A Mixed Model Approach

In recent years, with the concurrent growth in the adoption of various social media platforms, image captioning has become prominent for automatically describing an entire image in a sentence of natural language. Captioning images (CIs) plays a significant part in the computer vision community. CI is the method of automatically producing a natural-language textual description of an image using Artificial Intelligence (AI) methods. Computer vision (CV) and natural language processing (NLP) are critical facets of an image captioning scheme. A Convolution Neural Network (CNN), a component of CV, is used for feature extraction and object detection, while NLP methods aid in producing the caption of the image in textual form. Producing acceptable image descriptions by machine is a demanding exercise, as it involves object detection, localization, and semantic relationships expressed in human-intelligible language such as English. Our paper aims to establish an encoder-decoder (E-D) based hybrid image captioning model that utilizes VGG-16, VGG-19, ResNet50, and YOLO. ResNet50 and VGG-16 are feature extraction models pre-trained on numerous (more than 100,000) images, and YOLO is used for real-time object detection. First, the model extracts the features of images using VGG-19, VGG-16, YOLO, and ResNet50 and concatenates the outcome into an individual file. Finally, BiGRU and LSTM are used to generate the detailed textual description of the image. The suggested model is evaluated using ROUGE, METEOR, and BLEU scores.

Moloy Dhar, Mrinmoy Sen, Bidesh Chakraborty, Shibaprasad Sen, Milan Kumar Dholey, Soumyanil Dey, Soham Bhattacherjee
Chapter 17. Convolutional Block Attention Augmented Convolutional Neural Network for Indian Sign Language Recognition System

Despite being a common method of communication for speech-impaired individuals, sign language often goes unnoticed due to a lack of understanding among the general public and many deaf and dumb individuals. This paper aims to design a mediator system that can translate sign language alphabets, addressing the lack of systematic communication methods for these individuals. The system helps identify the alphabets of Indian Sign Language, enhancing their communication capabilities. In recent years, neural networks have been used extensively for sign language recognition. The CNN network, which is commonly used in sign language recognition, struggles to achieve optimal accuracy because it gives equal importance to each image component. This problem can be solved with attention. The proposed model uses the convolutional block attention module (CBAM) to focus on important areas. This helps the model make decisions based on the most relevant data, which makes the CNN network more accurate overall. The model gives 99.89% accuracy on the validation set, while the plain CNN model gives 97.21% accuracy.
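
The sketch below shows a minimal CBAM block (channel attention followed by spatial attention) inserted into a small Keras CNN; the reduction ratio, kernel size, and surrounding network follow common defaults and are assumptions, not the chapter's exact settings.

```python
# Minimal CBAM block (channel + spatial attention) inside a small illustrative CNN.
import tensorflow as tf
from tensorflow.keras import layers

def cbam_block(x, ratio=8):
    channels = x.shape[-1]
    # Channel attention: shared MLP over global average- and max-pooled descriptors.
    shared_1 = layers.Dense(channels // ratio, activation="relu")
    shared_2 = layers.Dense(channels)
    avg = shared_2(shared_1(layers.GlobalAveragePooling2D()(x)))
    mx = shared_2(shared_1(layers.GlobalMaxPooling2D()(x)))
    ca = layers.Activation("sigmoid")(layers.Add()([avg, mx]))
    x = layers.Multiply()([x, layers.Reshape((1, 1, channels))(ca)])
    # Spatial attention: convolution over channel-wise average and max maps.
    avg_sp = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_sp = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    sa = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_sp, max_sp]))
    return layers.Multiply()([x, sa])

inputs = layers.Input((64, 64, 3))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = cbam_block(x)                                   # attention-refined feature map
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(26, activation="softmax")(x)  # e.g., ISL alphabet classes
model = tf.keras.Model(inputs, outputs)
```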

Anudyuti Ghorai, Utpal Nandi, Chiranjit Changdar, Bachchu Paul, Biswajit Laya, Tapas Si
Chapter 18. Feature Extraction and Similarity-Based Movie Recommendation Using Bag-of-Words Model

Recommender systems have become pervasive in guiding users through the vast range of choices available in today’s digital landscape. This paper presents a content-based recommender system designed specifically for movie recommendations, offering a viable alternative to collaborative filtering methods. Unlike collaborative filtering, which relies on user ratings and demographic data, the proposed system operates without such prerequisites, making it particularly advantageous in scenarios with limited user engagement. By leveraging content attributes such as title, genre, cast, and crew, the system identifies similarities between movies and recommends those akin to a user’s preferences. Through innovative preprocessing techniques and feature extraction methodologies, the system transforms textual descriptions into numerical vectors, enabling efficient recommendation generation. The proposed model contributes to addressing critical challenges inherent in collaborative filtering, including the cold start problem, data sparsity, and popularity bias. Furthermore, it underscores the broad applicability of recommender systems across various domains and emphasizes the continuous evolution of interface designs and algorithms within the research community. Overall, the proposed content-based recommender system offers a promising solution for delivering personalized movie recommendations while advancing the state of the art in recommendation technology.
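
A minimal version of this idea is sketched below: CountVectorizer builds bag-of-words vectors from combined metadata "tags" and cosine similarity ranks the most similar movies; the tiny in-line catalogue is a placeholder for a real metadata dataset.

```python
# Bag-of-words content similarity: vectorize metadata tags, rank movies by cosine similarity.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

movies = {
    "Movie A": "action sci-fi space hero director-x actor-1 actor-2",
    "Movie B": "romance drama paris director-y actor-3",
    "Movie C": "action space alien thriller director-x actor-1",
}
titles = list(movies)
vectors = CountVectorizer().fit_transform(movies.values())
similarity = cosine_similarity(vectors)

def recommend(title, top_n=2):
    idx = titles.index(title)
    ranked = sorted(enumerate(similarity[idx]), key=lambda p: p[1], reverse=True)
    return [titles[i] for i, _ in ranked if i != idx][:top_n]

print(recommend("Movie A"))   # expected to rank Movie C above Movie B
```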

Swati Raj, Jhalak Dutta, Saikat Bandopadhyay, Rajesh Prasad
Chapter 19. Robust Human Target Detection and Acquisition Through Deep Learning

Our idea concerns the detection, acquisition, or locking of entities in a particular area, room, or space, where their presence is judged through facial recognition, object detection, and posture detection. We present a lightweight deep learning-based software system that can identify an individual through facial recognition using the face recognition module, track him or her across different cameras, monitor his or her actions and postures, detect anomalies in behavior through all these parameters, and send the required information with alert generation to the handler. It is able to identify a person (imposter or not), objects (ethical or unethical, e.g., cell phones, earpieces, weapons, detected through the YOLOv5 deep learning-based model), and postures and actions (necessary or unnecessary, detected through the VGG19 model), and sends information accordingly. If an anomaly is detected in any case, a visual alarm is sent to the handler. Being simple to implement, our idea is suitable for high-profile areas, field work, and surveillance.

Soumyajit Paul, Debasree Mitra, Souvik Kundu, Subhayan Kundu, Swapnil Datta, Syed Aman Ahamad, Anisha Sarkar, Dipanjan Saha, Harsh Kumar Shaw

Machine Learning and Applications

Frontmatter
Chapter 20. Enabling Independence: Machine Learning-Driven Obstacle Detection for the Visually Impaired

The paper presents a comprehensive system tailored to address the unique challenges faced by individuals with visual impairments, with a focus on object detection. The proposed system integrates three primary components: dimensionality reduction with unstructured data, an algorithm for object preference determination, and the utilization of the YOLOv8 model for object identification. Additionally, Python-based text-to-speech functionality is incorporated to provide auditory feedback to the user, ensuring accessibility and usability. A key aspect of the system’s functionality lies in the utilization of dimensionality reduction techniques to effectively manage unstructured data, thereby simplifying complex information to enhance comprehensibility. The system employs autoencoder neural networks for this purpose, drawing inspiration from the human brain’s information-processing mechanisms. Autoencoders specialize in condensing high-dimensional data into a more streamlined format with fewer dimensions, akin to transforming a detailed picture into a simplified sketch while retaining the essential features. By streamlining sensory input into a lower-dimensional representation, the system enhances its capacity to understand and process diverse and complex sensory information effectively, improving its adaptability to different environments and enhancing accuracy in providing feedback to the user. Through the integration of autoencoder neural networks for dimensionality reduction, the proposed system offers promising potential to assist blind individuals in navigating their surroundings with increased efficacy and independence.
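
The dimensionality-reduction step could be sketched as the small Keras autoencoder below, which compresses high-dimensional features into a short latent code used by later stages; the input size, latent size, and synthetic data are illustrative assumptions, not the system's actual configuration.

```python
# Illustrative autoencoder: compress high-dimensional sensory features to a small latent code.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

input_dim, latent_dim = 256, 16
x_train = np.random.random((2000, input_dim)).astype("float32")   # stand-in for sensor features

inputs = layers.Input((input_dim,))
encoded = layers.Dense(64, activation="relu")(inputs)
latent = layers.Dense(latent_dim, activation="relu")(encoded)      # compressed representation
decoded = layers.Dense(64, activation="relu")(latent)
outputs = layers.Dense(input_dim, activation="sigmoid")(decoded)

autoencoder = tf.keras.Model(inputs, outputs)
encoder = tf.keras.Model(inputs, latent)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)

codes = encoder.predict(x_train[:5], verbose=0)   # low-dimensional codes fed to the detection stage
print(codes.shape)
```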

Shiplu Das, Abhishikta Bhattacharjee, Panchami Das, Ayan Chatterjee
Chapter 21. A Comprehensive Study of Boosting Algorithms for Class Imbalance Dataset

This research paper provides a concise but comprehensive study of boosting algorithms as an effective solution for class imbalance issues. It seeks to address an ongoing problem encountered in machine learning, namely class imbalance in datasets, and more specifically to evaluate the efficiency of boosting algorithms in counteracting this issue. The findings are collected by following a procedure that includes the careful adoption of SMOTE-ENN and various boosting algorithms. The paper concludes with the finding that boosting methods such as XGBoost, LightGBM, and CatBoost have the potential to manage complicated imbalanced data. All boosting algorithms are subjected to a diverse selection of assessment criteria, including accuracy, precision, and recall. Among them, LightGBM stands out by striking a balance between precise and scalable prediction performance. The analysis, findings, and conclusions therefore become an essential tool for practitioners who want to address the problem of skewed data classes, enabling them to obtain better performance and reliability through machine learning.
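
A small sketch of resampling plus boosting on imbalanced data is given below, using imbalanced-learn's SMOTE-ENN and a LightGBM classifier; the synthetic dataset and parameters are placeholders, not the paper's experimental setup.

```python
# Resampling + boosting on an imbalanced dataset: SMOTE-ENN on the training split, LightGBM model.
from imblearn.combine import SMOTEENN
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)  # 5% minority class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)   # oversample, then clean noisy points
clf = LGBMClassifier(n_estimators=300).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te)))              # precision/recall per class
```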

Udit Rawat, Bhawna Rawat
Chapter 22. Supervised Machine-Learning Methods and Its Application on Automated Telecom Fraud Detection

Fraud is a widespread issue in our society and affects private as well as government entities. Every year, telecom operators lose millions of dollars to fraudsters and thus use fraud management systems to detect and prevent fraud. Early systems were based on a rule-based approach; however, this is no longer the best way to detect fraud. A rule-based approach is reactive in nature and has many drawbacks, and machine-learning techniques can help to overcome such limitations. In this paper, we have explored several machine-learning models, namely XGBoost, random forest, support vector machine, decision tree, and artificial neural network, to detect telecom frauds, and we have compared the performance of these five supervised machine-learning algorithms. We have used call detail records as the primary source of information to feed into these models. Our research reveals highly encouraging results and provides valuable insight into the performance of the individual models, which might translate into a guide for telcos in their proactive journey toward fraud prevention. The experimental results indicate that XGBoost achieves better accuracy than the other classifiers (99.9%), while random forest has the second-highest accuracy. The algorithms with the third, fourth, and lowest accuracy are decision tree, support vector machine, and artificial neural network, respectively. Hence, we can say that XGBoost is the more appropriate classification model for telecommunication fraud detection.

Hasnahana Khatun, Abhinandan Khan, Ranjan Mehera, Rajat Kumar Pal
Chapter 23. A Homogeneous Federated Learning Approach Toward Prediction of Diabetes

Federated learning (FL), which provides a decentralized method of model training without jeopardizing data privacy, has become a paradigm-shifting breakthrough in artificial intelligence (AI). This work addresses homogeneous federated learning in healthcare. Homogeneous federated learning (HFL) aims to train models across several devices or nodes while maintaining data security and privacy; the same data distribution and model architecture are usually shared by all nodes involved. In this paper, we propose an HFL model for a diabetes prediction case study. The dataset is divided into five segments, one segment is assigned to each client, and four machine learning classification algorithms are then applied to compute metrics such as accuracy, precision, recall, and F1 score. The healthcare field has been experiencing a major impact from federated learning, and FL has several other uses outside the medical field, such as IoT and Android, and blockchains, among others. This paper primarily addresses federated learning with respect to data privacy in the medical field.
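
A much simplified illustration of the homogeneous setup described above is sketched below: the data is split into five client segments, each client trains the same model locally, and the server averages the parameters in a FedAvg-style step; the aggregation rule and synthetic data are assumptions and do not reproduce the chapter's diabetes experiments.

```python
# Simplified homogeneous federated learning: local training on five segments, parameter averaging.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2500, n_features=8, random_state=0)  # stand-in for diabetes data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

client_X = np.array_split(X_tr, 5)       # one data segment per client
client_y = np.array_split(y_tr, 5)

local_models = [LogisticRegression(max_iter=1000).fit(cx, cy)
                for cx, cy in zip(client_X, client_y)]

# Server-side aggregation: average coefficients and intercepts of the homogeneous local models.
global_model = LogisticRegression()
global_model.classes_ = local_models[0].classes_
global_model.coef_ = np.mean([m.coef_ for m in local_models], axis=0)
global_model.intercept_ = np.mean([m.intercept_ for m in local_models], axis=0)

print("federated model accuracy:", accuracy_score(y_te, global_model.predict(X_te)))
```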

Rahul Karmakar, Arindam Sarkar, Debraj Malik, Priyabrata Sain
Chapter 24. A Study of Various Machine Learning Models in Public Health Care Sector

This study explores the potential of machine learning (ML) models for analyzing public health data related to obesity and diabetes. The paper investigates the effectiveness of various ML algorithms in predicting an individual’s susceptibility to these chronic diseases. The analysis leverages two publicly available datasets: one on obesity levels in Latin American countries and another on diabetes within the Pima Indian population. The employed ML models achieve high accuracy in obesity prediction (85.79–90.44%), with light gradient boosting machine (LGBM) and eXtreme gradient boosting (XGBoost) demonstrating a slight edge over Random Forest and Support Vector Machine (SVM). For diabetes prediction, the accuracy ranges from 70.13 to 76.62%, likely reflecting the inherent complexity of the disease. LGBM again exhibits the highest base accuracy for both obesity and diabetes prediction. The evaluation using precision, recall, and F1-Score metrics suggests a well-balanced performance by the models, effectively identifying true cases while minimizing false positives. This highlights the potential of ML for early disease detection and intervention in public health.

Kunal Bansal, Soumil Suri, Lakshay Sharma, Pratham Jindal, Bhawna Rawat
Chapter 25. Medical Image Noise Reduction Methods Based on Cutting-Edge Hybrid Filtering Designs

A high-resolution, noise-free medical image is an essential requirement for accurate disease detection. Medical image processing and analysis techniques have become extremely important in recent decades as a supplementary capability for medical professionals in the fields of diagnosis and prevention. Obtaining high-quality images without any interference is a crucial requirement for effectively managing therapeutic images. Medical images may contain many types of noise due to the simultaneous operation of multiple pieces of equipment involved in data capture and transmission. Various filters have been suggested for eliminating different types of noise, each having its own merits and drawbacks depending on its capabilities and projected benefits. Our research presents a technique for removing noise from medical images using a hybrid filtering approach based on ensemble adaptive mean filtering and block-matching 3D (BM3D) filtering. The proposed methodology specifically aims to reduce the speckle noise in brain MRI images obtained from ultrasonic therapy by employing hybrid filtering techniques. The proposed hybrid filtering algorithm integrates the most effective aspects of high-performance filters to achieve a de-noised image that retains fine details, edges, and textures. Statistical performance metrics such as Root Mean Square Error (RMSE) and Peak Signal-to-Noise Ratio (PSNR) are employed to evaluate the improved visual quality of the images. The results indicate that our proposed hybrid filtering technique functions efficiently and provides better-quality de-noised images compared with other existing de-noising techniques.
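
The evaluation metrics named above can be computed as in the short sketch below, which uses scikit-image to measure RMSE and PSNR of a noisy image before and after a simple stand-in filter; the non-local-means call here is only a placeholder for the hybrid design.

```python
# RMSE and PSNR before/after denoising, using scikit-image on a synthetic speckle-noisy image.
import numpy as np
from skimage import data, img_as_float
from skimage.util import random_noise
from skimage.restoration import denoise_nl_means
from skimage.metrics import peak_signal_noise_ratio, mean_squared_error

clean = img_as_float(data.camera())
noisy = random_noise(clean, mode="speckle", var=0.02)
denoised = denoise_nl_means(noisy, h=0.08, fast_mode=True)   # stand-in for the hybrid filter

for name, img in (("noisy", noisy), ("denoised", denoised)):
    rmse = np.sqrt(mean_squared_error(clean, img))
    psnr = peak_signal_noise_ratio(clean, img, data_range=1.0)
    print(f"{name}: RMSE={rmse:.4f}  PSNR={psnr:.2f} dB")
```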

Tanusree Saha, Kumar Vishal
Chapter 26. An Intelligent Heart Disease Prediction Utilizing Decision Tree Optimized Through Particle Swarm Optimization

Cardiovascular disease remains one of the major causes of death today, and machine learning can play a major role in the early prediction of disease. The main goal of this work is to combine Particle Swarm Optimization (PSO) techniques with Decision Tree algorithms to create an accurate and efficient prediction model for cardiovascular disease. Because the study makes use of a well-known dataset from Kaggle, the data used for model training and validation is reliable and relevant. The study finds the most important characteristics by using Mutual Information for feature selection, which lowers dimensionality and boosts model performance. Using these chosen features, a Decision Tree model is created and trained, with an initial accuracy of 86.38%. When Particle Swarm Optimization is combined with the Decision Tree model, accuracy increases dramatically, reaching 92.38%. The enhanced model provides a strong tool for medical practitioners, supporting early and precise cardiovascular disease prediction, an essential step toward prompt intervention and therapy.
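
One way to realize PSO-driven tuning of a Decision Tree is sketched below: particles move in a two-dimensional space of (max_depth, min_samples_split) and fitness is cross-validated accuracy; the PSO constants and synthetic data are illustrative, and the Mutual Information feature-selection step is omitted.

```python
# Hand-rolled PSO over Decision Tree hyperparameters; fitness = 5-fold cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=13, random_state=0)  # stand-in for heart data
rng = np.random.default_rng(0)

def fitness(p):
    depth, min_split = int(round(p[0])), int(round(p[1]))
    clf = DecisionTreeClassifier(max_depth=max(1, depth), min_samples_split=max(2, min_split))
    return cross_val_score(clf, X, y, cv=5).mean()

low, high = np.array([1, 2]), np.array([20, 40])
pos = rng.uniform(low, high, size=(15, 2))           # 15 particles in (max_depth, min_samples_split)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(20):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (max_depth, min_samples_split):", np.round(gbest).astype(int),
      "cv accuracy:", pbest_val.max())
```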

Annwesha Banerjee Majumder, Monali Sanyal, Anirban Mondal, Priyasha Banerjee
Chapter 27. Parkinson: A Web-Based Parkinson’s Disease Detector Based on Machine Learning to Detect the Presence of Parkinson’s Disease in Human Beings

This paper presents an effective web-based approach for diagnosing Parkinson’s Disease using deep learning algorithms. Our architecture combines MLP and SVM models, enhanced by SMOTE to address class imbalance, ensuring equal distribution among all classes. We employed GridSearchCV during training and testing to optimize model accuracy, achieving 91.6% accuracy, 89% precision, and 88% recall. The final model, saved as a pickle file, is integrated into a Flask server powering the web application. This application allows users to input data and receive predictions about Parkinson’s Disease, with an interface designed using HTML and CSS for responsiveness and ease of use. The model offers a significant advancement in diagnosing Parkinson’s Disease, providing a user-friendly tool accessible to medical professionals, diagnostic centers, doctors, and the general public. Our study addresses previously overlooked challenges in this domain, offering a timely diagnosis solution. The model’s potential extends to future app-based versions for mobile and other electronic devices, further increasing accessibility.
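
The serving side described above might look like the sketch below, where a trained model is loaded from a pickle file and exposed through a Flask prediction endpoint; the model filename, feature layout, and port are hypothetical, and training (SMOTE, GridSearchCV) is assumed to happen offline before this script runs.

```python
# Illustrative Flask serving layer: load a pickled model and expose a /predict endpoint.
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("parkinsons_model.pkl", "rb") as f:   # hypothetical pickled MLP/SVM pipeline
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]          # e.g., a list of voice-measurement values
    prediction = int(model.predict([features])[0])
    return jsonify({"parkinsons": bool(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```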

Debmitra Ghosh, Sourasish Nath, Tiasha Dutta, Atin Bera, Arya Bhattacharyya, Dharmpal Singh, Soumalya Chowdhury, Sudipta Sahana
Chapter 28. Examining the Accuracy of Machine Learning Classification Models in Stroke Prediction

Stroke is a critical medical condition caused by blockage (ischemic stroke) or rupture (hemorrhagic stroke) of blood vessels in the brain. However, stroke research and diagnosis are advancing, requiring ongoing studies to enhance understanding and management. The goal is to refine prevention strategies, develop effective treatment options, and improve patient outcomes. Continued research aims to improve prevention, early detection, and treatment decisions. Machine learning algorithms, including Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and XGBoost, have been used to analyze data and detect stroke. XGBoost achieved the highest accuracy among them all. The field seeks to further enhance our knowledge and drive advancements to address this critical condition.

M. Nazma Naskar, Abhinav Prakash, Vanshita Sinha, Harshita Kumari, Anirban Bhattacharjee
Chapter 29. Enhancing Communication with Advanced CNN Models for Recognizing American Sign Language

The research work endeavors to enhance automatic sign language interpretation and communication by leveraging Convolutional Neural Networks (CNN) for the recognition and classification of static hand gestures representing the 26 English alphabets (A-Z) and the 10 numbers (0–9). First, a comprehensive dataset of hand gestures for the English alphabet and numbers is created. Next, images are carefully pre-processed by cleaning and resizing them. Finally, models are trained using various CNN architectures, including LeNet-5, MobileNetV2, and a custom network. The dataset is thoroughly trained on to identify and categorise hand gestures, and the models’ accuracy and performance are evaluated on additional datasets. Approximately 96.70% of the tested dataset is recognised accurately by the trained CNN models. The use of an ensemble approach and efficient image pre-processing techniques are important factors that contribute to the strength and robustness of the models. Confusion matrix analysis, which demonstrates considerable differences between accurate and inaccurate identifications, highlights the models’ efficacy. The study team created an effective method for translating static hand gestures into text for sign language. At present, the system can accurately recognise letters and numbers from hand gestures, but it cannot understand dynamic movements or full-body language; it is limited to static gestures. The system’s vocabulary might be increased and its capacity to identify dynamic movements strengthened in future iterations. With this advancement, communication accessibility for the deaf and hard-of-hearing community could be greatly enhanced by sophisticated technologies that comprehend and facilitate sign language communication better.

Kallol Acharjee, Sumit Das, Sandip Roy, Pratiksha Nandi
Chapter 30. Breast Cancer Classification Using Machine Learning: A Comprehensive Review and Analysis

Breast cancer is one of the most prevalent and potentially life-threatening diseases affecting women worldwide. Early and accurate diagnosis is crucial for improving patient outcomes and survival rates. In recent years, machine learning techniques have shown promising results in aiding medical professionals in the classification and diagnosis of breast cancer. This paper presents a comprehensive review and analysis of various machine learning approaches employed for breast cancer classification. The study covers data preprocessing, feature extraction, model selection, performance evaluation, and emerging trends in the field. Through an extensive survey of existing literature, this paper aims to provide a holistic understanding of the advancements, challenges, and future directions in utilizing machine learning for breast cancer classification.

Priyanka Roy, Joyjit Patra, Srijita Bhattacharyya, Soumi Badyakar, Prasenjit Maji
Chapter 31. Psychometric Educational Stress and Anxiety Analysis of Undergraduate Students Using Machine Learning Approaches

The prevalence of stress and anxiety among undergraduate students is a growing concern worldwide, impacting academic performance, mental health, and overall well-being. Currently, an alarming number of adolescents and young adults are experiencing anxiety and depression, and mental tension is one of the primary causes. This research paper aims to investigate psychometric educational stress and anxiety among undergraduate students, especially in engineering colleges, using machine learning algorithms. Various psychometric scales, such as the Perceived Stress Scale (PSS), the General Anxiety Disorder 7-item (GAD-7) scale, and the Patient-Reported Outcomes Measurement Information System (PROMIS), are used to analyze three outcome variables: stress, anxiety, and depression. Several machine learning techniques, such as Logistic Regression, K-NN Classifier, Decision Tree Classifier, Random Forest, and Stacking, provide insights into the factors contributing to stress, anxiety, and depression levels among undergraduates. In addition, two machine learning models, logistic regression and Support Vector Machine (SVM), are proposed to predict the stress level of the students.

Poulami Bhar, Anirban Bhar, Soumya Bhattacharyya, Priyanka Shee

Security Solutions

Frontmatter
Chapter 32. An Architecture for Phishing URL Detection Using Concatenated Features

Cybercrime is the bane of technology and computer networks, instilling anxiety in everyone who uses computers and the internet. While traditional crimes are on the decline, cybercrime is on the rise, with activities such as hacking, cyberstalking, denial of service, and phishing. Phishing assaults are a kind of social engineering and one of the most common types of cybercrime to which most people are exposed. Blacklists, which are lists of phishing URLs, have traditionally been used to detect phishing URLs. However, the blacklist method has a drawback, as new phishing URLs absent from the blacklist cannot be avoided. As a result, deep learning algorithms and Natural Language Processing techniques are employed to create and develop a model for real-time anti-phishing systems that understands the core semantic pattern differences between phishing and legitimate URLs. Hence, a deep learning architecture is designed over concatenated features, where the concatenated features are the hand-crafted features and the convoluted character-level embeddings of the URL. Furthermore, the model was evaluated in three different configurations with respect to activation functions: a model with only ReLU activation functions, a model with only Swish activation functions, and a model with both. After a comparative analysis using various metrics, it is found that the model with ReLU activation functions performed better and yielded the highest accuracy of 96.83%.
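
A plausible shape for the concatenated-feature architecture is sketched below in Keras: a character-level branch (embedding plus convolution) and a hand-crafted-feature branch are merged before the dense head; the vocabulary size, URL length, feature count, and layer widths are assumptions rather than the chapter's exact values.

```python
# Illustrative concatenated-feature model: character-level URL branch + hand-crafted feature branch.
import tensorflow as tf
from tensorflow.keras import layers

MAX_URL_LEN, VOCAB, N_HANDCRAFTED = 200, 100, 16

chars = layers.Input((MAX_URL_LEN,), name="url_char_ids")
x = layers.Embedding(VOCAB, 32)(chars)                       # character-level embeddings
x = layers.Conv1D(64, 5, activation="relu")(x)               # convolution over the character sequence
x = layers.GlobalMaxPooling1D()(x)

handcrafted = layers.Input((N_HANDCRAFTED,), name="handcrafted")  # e.g., URL length, digit count
h = layers.Dense(32, activation="relu")(handcrafted)

merged = layers.Concatenate()([x, h])
merged = layers.Dense(64, activation="swish")(merged)        # ReLU and Swish are both compared in the paper
out = layers.Dense(1, activation="sigmoid")(merged)          # phishing vs. legitimate

model = tf.keras.Model([chars, handcrafted], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```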

B. Ranjitha, R. Bharathi, Navan Kakwani, M. K. Shreya, Alisha Maini
Chapter 33. Hyperledger Fabric Network Deployment Controller

Hyperledger Fabric Blockchain, a prevalent platform in enterprise systems, often poses challenges in network setup due to the diverse skillsets required in Hyperledger Fabric, chaincode development, and DevOps technologies. The scarcity of resources with these skillsets makes setup cumbersome and time-consuming. Addressing this, a GUI-based tool called Hyperledger Fabric Deployment Controller has been developed to simplify network creation and deployment. It allows spawning networks on local servers or cloud instances with minimal effort. The tool abstracts the complexities of setup, requiring only basic details like network name and organization count, while generating necessary configuration files automatically. With just four clicks, developers can initiate network setup, freeing them to focus on application development. Compatible with both single and multi-host environments, the tool significantly reduces operational complexity and time, benefiting businesses by expediting network deployment and operations.

Suman Kumar Das, Soumyabrata Saha, Suparna DasGupta
Chapter 34. Challenges in Security and Mitigation Measures at Different Purdue Model in Industrial Control Systems

Cyber-attacks on industrial control systems are very common and have been happening since the very beginning of Industry 3.0. Currently, the majority of industries follow Industry 4.0 and are slowly moving toward 5.0. Multiple attacks have already been performed on industrial control systems, yet the domain remains poorly understood when compared with traditional IT (Information Technology). This paper discusses challenges and controls that can be implemented for any SCADA network. It also discusses how to implement security controls at each Purdue level of a SCADA network and how this differs from traditional IT security.

Gulab Kumar Mondal, Biswarup Neogi, Dharmpal Singh, Sandip Roy, Rajesh Bose
Chapter 35. Graph-Based Approaches in Cybersecurity: A Comprehensive Survey

To counter the ever-expanding threat landscape, cybersecurity needs advanced methodologies for threat detection, vulnerability assessment, and incident response. This comprehensive survey paper explores the pervasive role of graph-based approaches in fortifying cybersecurity measures. The survey begins by navigating the landscape of intrusion detection systems (IDSs) that leverage algorithms like PageRank. The paper elucidates how threat intelligence graphs enhance the understanding of cyber threats by modeling relationships between threat indicators. Furthermore, the survey investigates the vital role of graph-based approaches in cybersecurity assessment. It examines attack surface analysis and dependency mapping, elucidating how these techniques prioritize security efforts by identifying critical entry points and assessing the impact of potential vulnerabilities. The integration of blockchain technology with graph theory is also discussed. The survey explores real-world applications, comparative studies, and the challenges faced in deploying graph-based cybersecurity approaches at scale. In summary, this survey paper offers a panoramic view of the graph-based approaches employed in cybersecurity. It illuminates the strengths, limitations, and real-world implications of these methodologies, providing a valuable resource for cybersecurity practitioners, researchers, and policymakers navigating the complex and dynamic landscape of securing digital ecosystems.
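
As a tiny illustration of the PageRank-style scoring mentioned above, the sketch below applies networkx's PageRank to a made-up host-communication graph; the node names and edges are invented for the example.

```python
# PageRank over a small illustrative host-communication graph.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("workstation-1", "server-a"), ("workstation-2", "server-a"),
    ("server-a", "db-1"), ("workstation-3", "db-1"), ("db-1", "backup"),
])
scores = nx.pagerank(g, alpha=0.85)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")   # higher-scoring nodes are more central to traffic flows
```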

Uttaran Ghosh, Aditya Paul, Darothi Sarkar, Monalisa Dey, Prithwineel Paul
Chapter 36. Anomaly-Based Detection of Log4j Attack in SDN

The widely used Java logging framework Log4j has recently come under attack via a major vulnerability known as CVE-2021-44228, also known as Log4Shell. Through the use of malicious log messages, the vulnerability enables attackers to compromise vulnerable systems by executing remote code. A key component of SDN systems is the RYU controller, an open-source software-defined networking (SDN) controller that provides a wide range of functionalities. It can communicate and interact with SDN-capable switches and other network elements, and it is used in SDN installations for managing, controlling, and supporting network automation and orchestration. This paper proposes a new anomaly-based Log4j detection approach for SDN networks. We have used multiple machine-learning classifiers to analyze and detect anomalies in network traffic generated by Log4j attacks to obtain the best achievable accuracy for real-time model development. The conducted experiments show that the proposed approach can achieve accuracy of up to 98% in detecting Log4j attacks in an SDN environment.

Pruthviraj Nitin Deshmukh, R. Harish, V. Sangeetha, K. Praveen
Chapter 37. Machine-Learning-Driven Android Malware Detection: Techniques and Performance Analysis

The rise of malware attacks on Android devices necessitates robust and efficient detection mechanisms to protect users’ security and data integrity. This study proposes machine-learning techniques to detect malware on Android devices. By analyzing various features and behaviors of known Android malware samples, we train machine-learning models to identify instances of malware accurately. To assess the performance of our method, we conduct experiments using a dataset of Android malware samples and evaluate the efficiency of the ML models using metrics such as accuracy, precision, recall, F1 score, execution time, and memory usage. Our findings demonstrate that various machine-learning approaches accurately detect Android device malware. Furthermore, exploring today’s changing landscape of safeguarding Android devices from malicious attacks remains crucial, and this research aims to contribute to the ongoing discourse on improving the security of Android systems amidst the changing world of malware risks. We harness the capabilities of machine learning to establish a robust malware detection system for Android devices. Our research confirms that ensemble methods like Random Forest and Bagging Classifier provide the best accuracy and efficiency in malware detection and are well suited for practical use in real-world scenarios.

Asif Iqubal, Ashish Payal
Chapter 38. Quantum-Resistant Lattice-Based Cryptography for Secure UAV Communications

This chapter presents quantum-resistant lattice-based cryptography designed especially for secure communication in networks of Unmanned Aerial Vehicles (UAVs). The integrity and security of data must be guaranteed, since UAVs are becoming increasingly prevalent in both military and civilian applications, and quantum computing poses an immediate challenge to traditional cryptography. The proposed quantum-resistant lattice-based cryptography architecture guarantees secure communication with little computational expense compared with previous works by using the resilience of lattice-based encryption against quantum attacks. The proposed work also provides the groundwork for trustworthy communication among UAVs via encryption and lattice-based key exchange protocols, optimizing cryptographic processes while addressing UAV network constraints. Finally, thorough assessments confirm the present work’s effectiveness, guaranteeing stakeholders safe and essential UAV operations.

Bodhisattwa Baidya, Atanu Mondal
Chapter 39. Cyber Forensic Investigation—A Hunt for Windows Registry

In contemporary times, the extraction of digital proof from storage media is becoming an increasingly significant concern in digital forensics. This is primarily due to the intricate challenges associated with acquiring, preserving, and examining digital evidence, including time and space complexity considerations. Amidst these concerns, the analysis of the Microsoft Windows Registry has emerged as a valuable asset in the field of cyber forensics. Windows Registry analysis involves a systematic examination of the Windows Registry databases to identify any irregularities, shedding light on the precise processes involved in cybercrimes. The Windows Registry is a substantial source of digital evidence, comprising a database containing evidential information about the device and its users. Within the Windows Registry, one can discern traces of activities conducted on the device, with voluminous data related to various components of the host computer being stored. This paper studies and analyzes several prominent forensic tools designed for Windows Registry extraction. A comprehensive comparison of different cyber forensic tools covering diverse digital forensic domains like Windows, Image, Mobile, and Network forensics is provided. The main objective of the paper is to identify and evaluate the forensically relevant data that can be recovered using these tools. Furthermore, a scenario-based approach, illustrating the investigative process through a practical situation using an integrated digital forensic investigation process model, is presented. This allows for demonstrating the complete process from the discovery of evidence to the substantiation of findings with supporting documentation. Through this detailed examination, the paper aims to contribute valuable insights into the efficacy of digital forensic tools in the realm of Windows Registry analysis for cybercrime investigations.

Sikander Azad, Mehak Khurana
Chapter 40. A Proposed Multi-modal Approach to Detect Cyberbullying in Multi-File Formats: An Empirical Study

Internet usage has undoubtedly become increasingly important in people’s lives. However, it has certain dark sides, such as theft of privacy, cyberstalking, phishing scams, online harassment, cyberbullying, and illegal activities. Cyberbullying in particular is a burning issue that leaves a profound imprint on the victim. In recent years, there has been rapid growth in the use of machine learning algorithms to detect cyberbullying. This paper presents a comparative study of various classification-based machine learning and deep learning models for detecting cyberbullying. It proposes a multimodal approach that leverages four types of social content, namely text, images, video, and audio (TIVA), to identify instances of cyberbullying. We investigated this problem by gathering a variety of datasets, which include various sources of data relating to different media platforms. The data contain text, audio, and visual content and are labeled as bullying or not. The features considered are facial expressions with emotions, speaker recognition with voice frequency, body language, and context with text and symbols. The classification techniques are then combined according to the task and the features used, and comparisons are made among their performances. All experiments were carried out in a Python Colab notebook. The resulting multi-format cyberbullying detection platform provides a comprehensive approach to detecting incidents of cyberbullying. Through experimental analysis on the multi-file format, the authors’ framework outperforms several state-of-the-art models.

Teena Jaiswal, Varinder Kaur Attri
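
A minimal late-fusion sketch of the multimodal idea, assuming one classifier per modality whose probabilities are averaged; the toy sentences, the random stand-in for image/audio features, and the scikit-learn models are illustrative only and not the chapter's actual pipeline.

# Minimal late-fusion sketch: one classifier per modality, probabilities averaged.
# The tiny in-line dataset and the placeholder image/audio features are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts  = ["you are worthless", "great job on the match", "nobody likes you", "see you at practice"]
labels = np.array([1, 0, 1, 0])  # 1 = bullying, 0 = not bullying

# Text modality: TF-IDF features + logistic regression.
vec = TfidfVectorizer()
X_text = vec.fit_transform(texts)
clf_text = LogisticRegression().fit(X_text, labels)

# Placeholder visual/audio features (in practice: face-emotion, voice-frequency descriptors, etc.).
rng = np.random.default_rng(0)
X_av = rng.normal(size=(len(texts), 8))
clf_av = LogisticRegression().fit(X_av, labels)

# Late fusion: average the per-modality bullying probabilities.
p_text = clf_text.predict_proba(X_text)[:, 1]
p_av   = clf_av.predict_proba(X_av)[:, 1]
fused  = (p_text + p_av) / 2
print("fused bullying probabilities:", fused.round(2))
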
Chapter 41. Blockchain-Enabled Transparency in Disaster Relief Supply Chains: A Strategic Framework

In the face of escalating challenges posed by natural disasters and humanitarian crises, the necessity for efficient and timely relief measures has become strikingly apparent. Traditional disaster management systems grapple with critical issues such as real-time data sharing, resource allocation, and transparency. This research propounds a paradigm shift towards decentralized disaster relief, harnessing the latent potential of blockchain technology. The project’s core objective is to establish a secure and transparent platform by leveraging blockchain’s intrinsic attributes, namely decentralization, immutability, and smart contracts. This abstract amalgamates the comprehensive goals and methodology of the project, emphasizing the urgent need for improved coordination, communication, and resource allocation in disaster response operations. The endeavor aspires to elevate data integrity, enhance decision-making processes, and instill accountability. Situated at the nexus of technology and humanitarianism, our project on the “Application of Blockchain Technology in Disaster Management” unfolds a pioneering narrative. It delves into the design, implementation, and impact of a decentralized blockchain system, with a core objective of enhancing real-time data sharing in disaster relief through the transformative power of blockchain technology. The project incorporates innovative approaches in missing persons’ tracking, aid distribution, critical area identification, refugee relief camp mapping, disaster reporting, record-keeping, security, and continuous system improvement, promising to fundamentally reshape humanitarian efforts and disaster response mechanisms.

Debraj Bhattacharjee, Souvik Chattopadhyay, Mainak Paul, Tanmay Bhattacharya
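
A minimal sketch of the immutability property such a platform relies on, implemented here as a toy hash-chained ledger of relief records in plain Python; a production system would use an actual blockchain with consensus and smart contracts, which this snippet does not model.

# Minimal sketch of an append-only, hash-chained ledger of relief records,
# illustrating tamper evidence (not a full blockchain).
import hashlib, json, time

def make_block(record, prev_hash):
    body = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    """A chain is valid only if every block still hashes to its stored value
    and points at the hash of its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"camp": "A", "aid": "500 water kits"}, prev_hash="0" * 64)]
chain.append(make_block({"camp": "B", "aid": "200 tents"}, chain[-1]["hash"]))
print("valid:", verify(chain))
chain[0]["record"]["aid"] = "50 water kits"   # tampering is detected
print("valid after tampering:", verify(chain))
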

Smart Systems and Networks

Frontmatter
Chapter 42. Enhanced Clustering and Channel Allocation in Wireless Mesh Networks

Wireless mesh networks (WMNs) are crucial for establishing adaptable and scalable communication infrastructures among interconnected devices. Effective clustering and channel allocation are vital for enhancing WMN performance by addressing energy efficiency, latency, throughput, and interference challenges. Proper clustering organizes network nodes into cohesive groups, enhancing communication efficiency and resource utilization, while channel allocation strategies minimize collisions and improve overall network throughput, stability, and reliability. Existing approaches, such as clique-based channel assignment (CCCA) and two-hop neighbor clustering, suffer from complexity and interference-level limitations. The main contribution of this paper is a novel clustering and channel assignment approach, referred to as enhanced clustering and channel allocation (ECCA), to optimize WMN performance. The clustering technique groups nodes based on maximal cliques among one-hop neighbors, and channel assignment strategies are employed to minimize collisions and improve overall network throughput. The performance of ECCA is compared with the state-of-the-art clique-based channel assignment (CCCA) in terms of modularity, average number of nodes per cluster, average node degree, and coefficient of variation.

Ch. V. Sushma Reddy, V. Harshini, Abhyuday Rayala, B. R. Chandavarkar
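
A minimal sketch of clique-based clustering followed by conflict-aware channel assignment, using networkx on a random geometric graph; the topology, the greedy clique selection, and the colouring strategy are illustrative assumptions rather than the ECCA algorithm itself.

# Minimal sketch: cluster a toy mesh by maximal cliques, then colour the cluster
# conflict graph so interfering clusters get different channels.
import networkx as nx

G = nx.random_geometric_graph(30, radius=0.35, seed=42)   # toy mesh topology

# Cluster nodes greedily by maximal cliques among one-hop neighbours.
clusters, assigned = [], set()
for clique in sorted(nx.find_cliques(G), key=len, reverse=True):
    members = [n for n in clique if n not in assigned]
    if members:
        clusters.append(members)
        assigned.update(members)

# Build a conflict graph: clusters joined by any edge in G interfere.
H = nx.Graph()
H.add_nodes_from(range(len(clusters)))
for i in range(len(clusters)):
    for j in range(i + 1, len(clusters)):
        if any(G.has_edge(u, v) for u in clusters[i] for v in clusters[j]):
            H.add_edge(i, j)

# Greedy colouring of the conflict graph yields a channel per cluster.
channels = nx.greedy_color(H, strategy="largest_first")
for cid, members in enumerate(clusters):
    print(f"cluster {cid}: nodes {sorted(members)} -> channel {channels[cid]}")
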
Chapter 43. Water Quality Prediction in Aquaculture (WQPA) Using Machine Learning and Internet of Things

The growing popularity of Internet of Things (IoT) technologies has created new opportunities for aquaculture monitoring and management. An IoT-based water monitoring system uses sensors to gather real-time data on water quality factors that are important to aquatic life, such as temperature, pH, dissolved oxygen, and ammonia content. The gathered data is sent over a wireless network to a central server for analysis and interpretation. The WQPA uses data analytics and cloud computing to give aquaculturists actionable information, so they can make proactive decisions and precise adjustments to minimize water pollution caused by various human activities. This improves the general well-being and growth of aquatic life, which boosts the sustainability and productivity of aquaculture operations. As a scalable and affordable answer to the ever-changing problems in aquaculture management, the IoT-based WQPA encourages economical resource use and ecological preservation. WQPA also contributes to Sustainable Development Goals 8 and 14 by safeguarding aquatic life and boosting fish producers’ economic prosperity.

Debdutta Pal, Sunanda Jana, Nandini Roy
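
A minimal sketch of the prediction step, assuming four sensor features and a random-forest classifier trained on synthetically labelled readings; the labelling thresholds merely stand in for the field data an actual WQPA deployment would collect.

# Minimal sketch: classify synthetic sensor readings as safe/unsafe for aquaculture.
# Feature ranges, thresholds, and the random-forest choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
temp    = rng.uniform(20, 35, n)     # deg C
ph      = rng.uniform(5.5, 9.5, n)
do      = rng.uniform(2, 10, n)      # dissolved oxygen, mg/L
ammonia = rng.uniform(0, 1.5, n)     # mg/L
X = np.column_stack([temp, ph, do, ammonia])

# Toy labelling rule standing in for expert-labelled field data.
unsafe = (ph < 6.5) | (ph > 8.5) | (do < 4) | (ammonia > 0.5)
y = unsafe.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 3))

# A reading arriving from the IoT gateway can then be scored in real time.
print("unsafe?", bool(model.predict([[28.0, 6.1, 3.2, 0.8]])[0]))
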
Chapter 44. Post-operative Smart Healthcare Kit

The Internet of Things is becoming an increasingly important tool for the healthcare sector. It is utilized in everyday healthcare settings, such as smart watches and oxygen monitors, as well as in hospitals for everything from routine checks to critical care units. One of the most crucial hospital departments is post-operative care, where patients are monitored to make sure they are recovering well. Here, a constant watch is kept on the patient’s vital signs and the associated organs to detect any anomalies that might develop following the treatment. Utilizing IoT in post-operative treatment helps ensure patient satisfaction. This is addressed by the Post-Operative Smart Healthcare Kit (POSHK) presented in this paper, which is furnished with several sensors, including heartbeat, oxygen and carbon dioxide, blood pressure, and room and patient temperature sensors, among others. At discharge, the gathered data is analyzed to examine possible future recovery trends.

V. Kanchana Devi, E. Umamaheswari, Xiao-Zhi Gao, Nirupama Rathinavel
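
A minimal sketch of the kind of rule-based vital-sign check such a kit could run on each incoming sensor sample; the parameter names and thresholds are illustrative assumptions, not clinical guidance or the chapter's implementation.

# Minimal sketch: flag vital-sign samples that fall outside illustrative normal ranges.
NORMAL_RANGES = {
    "heart_rate_bpm": (50, 110),
    "spo2_percent":   (94, 100),
    "body_temp_c":    (36.0, 38.0),
    "systolic_mmhg":  (90, 140),
}

def check_vitals(sample):
    """Return a list of (parameter, value) pairs outside their normal range."""
    alerts = []
    for name, (lo, hi) in NORMAL_RANGES.items():
        value = sample.get(name)
        if value is not None and not (lo <= value <= hi):
            alerts.append((name, value))
    return alerts

sample = {"heart_rate_bpm": 124, "spo2_percent": 96, "body_temp_c": 38.6, "systolic_mmhg": 118}
for name, value in check_vitals(sample):
    print(f"ALERT: {name} = {value} outside normal range")
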
Chapter 45. Predicting the Spread of Covid-19 Using Machine as a Service (MaaS) Through Modified Apriori Algorithm

Cloud computing and machine learning are among the fastest-growing technologies in the field of computer science, and in recent years the word “Cloud” has become part of everyday life. Every day, a significant volume of data is produced by IT sectors, banking sectors, and e-commerce websites such as Flipkart. The cloud provides organizations with consistency, scalability, and strong computing power, and delivers high-quality service by generating proper outcomes in a brief amount of time. The objective of this paper is to predict the growth of the Covid-19 virus in the near future using Machine as a Service (MaaS) through a modified Apriori algorithm. The paper presents the working graph of the algorithm, and the results are further analyzed through graphs of confidence against lift and of support against confidence. Finally, a comparative graph of running time for the modified Apriori and the standard Apriori algorithm is shown for a certain duration of time.

Samir Kumar Sett, Aishik Sett, Anupam Ghosh
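
A minimal worked example of the support, confidence, and lift quantities that the reported graphs plot, computed with a plain Apriori-style pass over a toy transaction set; the modified algorithm and the MaaS deployment are not reproduced here.

# Minimal sketch: support, confidence, and lift over a toy transaction set.
from itertools import combinations

transactions = [
    {"fever", "cough", "covid"},
    {"fever", "cough"},
    {"fever", "covid"},
    {"cough", "covid"},
    {"fever", "cough", "covid"},
]
N = len(transactions)

def support(itemset):
    return sum(itemset <= t for t in transactions) / N

# Frequent pairs above a minimum support, Apriori-style.
items = sorted({i for t in transactions for i in t})
min_support = 0.4
frequent_pairs = [frozenset(p) for p in combinations(items, 2) if support(set(p)) >= min_support]

for pair in frequent_pairs:
    a, b = sorted(pair)
    s_ab, s_a, s_b = support(pair), support({a}), support({b})
    confidence = s_ab / s_a          # P(b | a)
    lift = confidence / s_b          # >1 means a and b co-occur more than by chance
    print(f"{a} -> {b}: support={s_ab:.2f} confidence={confidence:.2f} lift={lift:.2f}")
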
Chapter 46. Design of a Smart Road Accident Management Framework Using Federated Learning

The automobile industry is experiencing tremendous advancements due to the proliferation of Intra-Vehicular Connectivity, Intelligent Transportation System (ITS), Edge Computing, and self-contained motor vehicles. Recent improvements in hardware and software have made this technology attainable, and with it the driving experience can be made more effortless and safer. These technologies are an inseparable part of ITS and will shape our future Smart Cities. Being conscious of themselves and their surroundings, such vehicles can make more effective decisions on the road, leading to a safer and more efficient traffic management system. This paper presents a smart road accident management framework utilizing the Internet of Vehicles (IoV) and Federated Learning. Being interconnected, the vehicles can share data with each other; utilizing this data, they can judge the probability of collisions and take the necessary action on their own. Federated Learning is used to decentralize the process: instead of perfecting one broad traffic system, the aim is to divide the system into a few zones and improve the traffic within each, so that every zone can adapt to its own needs without affecting or depending on traffic in any other zone.

Manju Biswas, Sourav Banerjee, Debashis Das, Susobhan Das, Snehashis Dhibar, Deep Kisku, Puja Das, Utpal Biswas
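
A minimal federated-averaging sketch of the zone-wise idea: each zone fits a local linear model on its own synthetic collision-risk data and only the model weights are averaged centrally, so no raw data leaves a zone; the feature names, learning rate, and round counts are illustrative assumptions.

# Minimal FedAvg sketch over four traffic zones with synthetic data.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.5, 0.3])          # hidden relation: features -> risk score

def make_zone_data(n):
    X = rng.normal(size=(n, 3))              # e.g. speed, spacing, braking features
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

zones = [make_zone_data(200) for _ in range(4)]
w_global = np.zeros(3)

for _round in range(20):
    local_weights = []
    for X, y in zones:                        # each zone trains locally from the global model
        w = w_global.copy()
        for _step in range(5):                # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_weights.append(w)
    w_global = np.mean(local_weights, axis=0) # server averages the zone models

print("recovered weights:", w_global.round(2), " true:", true_w)
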
Chapter 47. Enhancing the Longevity Through Improved LEACH Protocol with Fuzzy Logic in Cognitive WSN

A Cognitive Wireless Sensor Network (CWSN) employs AI for autonomous optimization in smart cities, industry, military, healthcare, and animal protection, enhancing sensor networks with adaptive intelligence. The proposed method applies the LEACH (Low-Energy Adaptive Clustering Hierarchy) protocol in a CWSN to enhance energy efficiency and improve network life span. The cluster heads (CHs) are chosen using fuzzy logic, which provides a way to make sound choices even in the face of imprecise data. While selecting a CH, the parameters considered are residual energy, distance from the base station (BS), and angle to the BS. In this method, the base station assumes a vital role, taking on responsibilities such as selecting CHs for various areas, assigning nodes to specific zones, calculating distances between nodes and cluster centers, and determining node angles. The proposed protocol effectively balances energy consumption, improving CWSN performance, maximizing throughput, and minimizing delay compared to traditional algorithms for environmental monitoring applications.

S. Kalpana, A. Devi, T. Sivasakthi, S. Jayashree
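
A simplified sketch of fuzzy-style cluster-head scoring from the three stated inputs (residual energy, distance to the BS, angle to the BS); the triangular membership functions and the min-combination rule are stand-ins for the chapter's actual rule base.

# Minimal sketch: score candidate cluster heads with fuzzy-style memberships.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def ch_fitness(energy_frac, dist_m, angle_deg, max_dist=100.0):
    high_energy  = tri(energy_frac, 0.3, 1.0, 1.7)           # favour well-charged nodes
    close_to_bs  = tri(dist_m / max_dist, -0.7, 0.0, 0.7)     # favour nodes near the BS
    well_aligned = tri(abs(angle_deg) / 180.0, -0.5, 0.0, 0.5)
    return min(high_energy, close_to_bs, well_aligned)        # conservative AND of the criteria

nodes = [
    {"id": 1, "energy_frac": 0.9, "dist_m": 40, "angle_deg": 10},
    {"id": 2, "energy_frac": 0.6, "dist_m": 20, "angle_deg": 60},
    {"id": 3, "energy_frac": 0.8, "dist_m": 70, "angle_deg": 5},
]
best = max(nodes, key=lambda n: ch_fitness(n["energy_frac"], n["dist_m"], n["angle_deg"]))
print("selected cluster head:", best["id"])
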
Chapter 48. Enhancement in Loop Prevention Protocols: Dynamic Per-VLAN Multi-shortest Path Bridging (DV-MPB)

Loops in a network pose significant challenges in modern communication systems, leading to congestion, performance degradation, and potential system failures. Additionally, loops can cause broadcast storms, impacting network performance. While traditional loop prevention mechanisms like the Spanning Tree Protocol (STP) have been effective, the evolving complexity of network environments demands more efficient solutions. This paper provides a comprehensive review of loop prevention protocols, including STP, Rapid Spanning Tree Protocol (RSTP), Multiple Spanning Tree Protocol (MSTP), Shortest Path Bridging (SPB), and Per-VLAN Spanning Tree (PVST). Further, this paper introduces Dynamic Per-VLAN Multi-Shortest Path Bridging (DV-MPB) as a potential solution to enhance loop prevention efficiency. DV-MPB combines the strengths of SPB and PVST to achieve dynamic traffic optimization and reduce convergence time in loop prevention protocols.

N. Nagabhushanam, P. Abhishek Kumar, D. Pramod Chaitanya, B. R. Chandavarkar
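
A minimal sketch of per-VLAN path computation with networkx: each VLAN gets its own root bridge and shortest-path tree, so traffic for different VLANs can traverse different links; the topology, link costs, and root assignments are illustrative, and the DV-MPB control-plane logic is not modelled.

# Minimal sketch: one shortest-path tree per VLAN over a small switch topology.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("S1", "S2", 1), ("S2", "S3", 1), ("S3", "S4", 1),
    ("S4", "S1", 1), ("S1", "S3", 3), ("S2", "S4", 3),
])

vlan_roots = {10: "S1", 20: "S3"}    # one root bridge per VLAN (illustrative)

for vlan, root in vlan_roots.items():
    paths = nx.single_source_dijkstra_path(G, root, weight="weight")
    print(f"VLAN {vlan} (root {root}):")
    for node, path in sorted(paths.items()):
        if node != root:
            print(f"  {root} -> {node}: {' - '.join(path)}")
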
Chapter 49. A Self-regulating Error Correction Technique for Data Transmission in Wireless Acoustic Sensor Networks

This research introduces a unique self-regulated multi-layer method for evaluating error correction techniques in Wireless Acoustic Sensor Networks (WASNs), focusing on the effects of multi-node routing and the broadcast nature of wireless channels across various network levels. By thoroughly comparing forward error correction (FEC), automatic repeat request (ARQ), and hybrid ARQ algorithms, the study shows that hybrid ARQ and FEC systems significantly enhance communication error robustness. These improvements can reduce transmission power or extend node distances in multi-node networks. Results indicate that longer node distances reduce energy consumption and end-to-end delay for hybrid ARQ and specific FEC codes, crucial for real-time, delay-sensitive applications. Additionally, the benefits of FEC codes increase with network density, though transmit power management introduces delays for some codes. The study also identifies scenarios where ARQ is more effective than FEC codes, depending on target packet error rates and distances.

Utpal Ghosh, Uttam Kr. Mondal
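
A minimal worked comparison of the trade-off the study examines: expected bits sent per delivered payload for stop-and-wait ARQ over an uncoded packet versus coded, hybrid-ARQ-style transmission; the bit error rate and the BCH-like (255, 223, t = 4) code parameters are illustrative assumptions, not the study's configuration.

# Minimal sketch: expected transmission cost of ARQ versus a t-error-correcting block code.
from math import comb

def arq_expected_transmissions(p, bits):
    """Stop-and-wait ARQ: retransmit until the whole packet arrives error-free."""
    p_success = (1 - p) ** bits
    return 1 / p_success

def fec_success_probability(p, n, t):
    """A block of n coded bits decodes correctly if at most t bit errors occur."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

p = 1e-3                     # channel bit error rate (illustrative)
k, n, t = 223, 255, 4        # parameters similar to a BCH(255, 223) code correcting 4 bit errors

arq_cost = arq_expected_transmissions(p, bits=k) * k        # expected bits sent, uncoded + ARQ
fec_cost = n / fec_success_probability(p, n, t)             # coded bits per successfully decoded block

print(f"ARQ : ~{arq_cost:.1f} bits sent on average per {k}-bit packet")
print(f"FEC : ~{fec_cost:.1f} bits sent per {k}-bit payload (overhead {n - k} bits)")
print("FEC block success probability:", round(fec_success_probability(p, n, t), 6))
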
Backmatter
Metadata
Title
Smart Systems and Wireless Communication
edited by
Rituparna Chaki
Agostino Cortesi
Suparna DasGupta
Soumyabrata Saha
Copyright year
2025
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-96-1348-9
Print ISBN
978-981-96-1347-2
DOI
https://doi.org/10.1007/978-981-96-1348-9