
2024 | Book

IoT Based Control Networks and Intelligent Systems

Proceedings of 4th ICICNIS 2023

Editors: P. P. Joby, Marcelo S. Alencar, Przemyslaw Falkowski-Gilski

Publisher: Springer Nature Singapore

Book Series: Lecture Notes in Networks and Systems


About this book

This book gathers selected papers presented at the International Conference on IoT Based Control Networks and Intelligent Systems (ICICNIS 2023), organized by the School of Computer Science and Engineering, REVA University, Bengaluru, India, during June 21–22, 2023. The book covers state-of-the-art research insights on the Internet of Things (IoT) paradigm used to access, manage, and control objects, things, and people working under various information systems and deployed in a wide range of applications such as smart cities, healthcare, industry, and smart homes.

Table of Contents

A Comparative Analysis of ISLRS Using CNN and ViT

The Indian Sign Language Recognition System (ISLRS) aims to recognize and interpret the hand gestures and movements of Indian Sign Language (ISL) in order to facilitate smooth communication between hearing-impaired individuals and the rest of the population. This research compares an ISLR system built on a custom convolutional neural network (CNN) architecture with one built on a Vision Transformer (ViT). From the ISL alphabet dataset of 36 classes, the 26 classes corresponding to the English alphabet are considered in this analysis. The analysis showed that, for this dataset, ViT outperforms the CNN on the performance metrics considered.

S. Renjith, Rashmi Manazhy
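A comparison on standard performance metrics of the kind described above can be sketched as follows; the confusion counts are hypothetical and not taken from the paper:

```python
# Minimal sketch: comparing two classifiers (e.g., a CNN and a ViT) on
# standard metrics computed from confusion counts. The counts below are
# illustrative placeholders, not results from the paper.

def classification_metrics(tp, fp, fn, tn):
    """Return accuracy, precision, recall, and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical aggregate counts over the same test set.
cnn_scores = classification_metrics(tp=850, fp=70, fn=50, tn=30)
vit_scores = classification_metrics(tp=920, fp=30, fn=30, tn=20)

for name, scores in [("CNN", cnn_scores), ("ViT", vit_scores)]:
    print(name, {k: round(v, 3) for k, v in scores.items()})
```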
Vehicle Information Management System Using Hyperledger Fabric

The growing popularity of motor vehicles has accelerated the growth of the automotive industry. But with the increase in sales, a heavy burden falls on the Regional Transport Office (RTO) to manage vehicle data. One of the most difficult challenges in today's vehicle data management systems is ensuring the integrity and confidentiality of vehicle data. The RTO has full control over vehicle data, which in turn encourages dishonest employees to abuse their data-manipulation rights. By submitting false documents, a stolen or smuggled vehicle from one state can be legitimately driven in another state. These problems in vehicle data management systems can be addressed by blockchain technology. A blockchain is a decentralised, append-only ledger that is replicated among all the peers in the network. A blockchain-based solution has been proposed for the smooth functioning of the vehicle registration system. The proposed framework consists of three sub-modules that provide a blockchain-based architecture for registering new vehicles, querying vehicles, and transferring vehicles between states. The framework is built on Hyperledger Fabric, a permissioned blockchain with features well suited to business applications. Each sub-module is evaluated with respect to throughput, latency, and send rate.

Anwesha Banik, Sukanta Chakraborty, Abhishek Majumder
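The evaluation in terms of throughput, latency, and send rate can be illustrated with a small sketch computing those figures from per-transaction submit and commit timestamps; the timestamps are invented, and in practice a benchmarking tool such as Hyperledger Caliper reports these metrics:

```python
# Sketch: send rate, throughput, and average latency from per-transaction
# submit/commit timestamps (in seconds). The values are illustrative.

def benchmark_stats(submits, commits):
    """submits[i]/commits[i]: submit and commit times of transaction i."""
    n = len(submits)
    send_window = max(submits) - min(submits)     # submission interval
    commit_window = max(commits) - min(submits)   # first submit to last commit
    return {
        "send_rate_tps": n / send_window if send_window else float("inf"),
        "throughput_tps": n / commit_window if commit_window else float("inf"),
        "avg_latency_s": sum(c - s for s, c in zip(submits, commits)) / n,
    }

submits = [0.0, 0.1, 0.2, 0.3, 0.4]   # 5 tx submitted over 0.4 s
commits = [0.5, 0.7, 0.8, 0.9, 1.0]   # all committed by 1.0 s

stats = benchmark_stats(submits, commits)
print(stats)
```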
S-SCRUM—Methodology for Software Securitisation at Agile Development. Application to Smart University

The use of agile methodologies during software development is common practice nowadays, mainly because they facilitate the delivery of value to the client and contribute to the viability of the project. However, security is an aspect that is rarely addressed when the focus is on developing functionality. In an agile development team, responsibilities are diluted across the team, and the individual competence of its members has to be relied upon. This paper proposes extending the SCRUM methodology with new processes, artefacts, and roles to produce Security SCRUM (S-SCRUM). The methodology builds security guarantees into any project that uses it and establishes the security expert as an indispensable role in the development of large-scale software. As part of the proposal, the methodology has been applied to a real project being developed by nine Spanish universities, Smart University, demonstrating its usefulness and its contribution to both agility and system security by facilitating the delivery of secure value increments.

Sergio Claramunt Carriles, José Vicente Berná Martínez, Jose Manuel Sanchez Bernabéu, Francisco Maciá Pérez
Energy-Efficient Reliable Communication Routing Using Forward Relay Selection Algorithm for IoT-Based Underwater Networks

Underwater communication has improved with the aid of the Internet of Things (IoT) and involves intelligent system activities and network modules operating under varied conditions. Over the past few years, the development of underwater mapping has grown rapidly and is in high demand. Underwater communication relies on the transmission of acoustic waves, as these waves can support long-range communication. Because water strongly absorbs higher frequencies, low-frequency signals are used in the underwater Internet of Things (UIoT). Improving transmission and optimizing network communication performance in IoT-enabled hazardous underwater environments is an ongoing challenge. Quality of service varies with collisions and network interference, which may lead to increased energy consumption, end-to-end delay, a low packet delivery ratio, and high latency in the network. This research proposes a Reliable Forwarded Communication Routing Transmission Protocol (RFC-RTP) for efficient energy consumption and improved network lifetime. Artificial Rabbits Optimization is used to pick the relay and balance the energy consumption ratio against end-to-end delay. In this proposed work, the RTP reduces underwater collisions for forwarder nodes sending information to the surface sink. The RFC-RTP protocol is compared in simulation with existing protocols in terms of energy consumption, end-to-end delay, and network lifetime.

N. Kapileswar, P. Phani Kumar
Deep Learning Approach-Based Plant Seedlings Classification with Xception Model

Plants, being among the most important elements of the biosphere, are essential for the survival of all living organisms. Plant seedlings are indispensable for producing cash crops in adequate quantity as the world population increases. A crucial issue to be addressed in producing good-quality seedlings is weed control. In order to create a feasible, effective, and better approach to classifying seedlings, this article presents an AI-based approach that can accurately discriminate and categorize seedlings. In this approach, pretrained models are investigated to identify the best deep model for efficient seedling classification, and a comparative analysis of the deep models for plant seedlings classification is conducted. The findings demonstrate the efficiency of the Xception model. The deep models used for comparison are InceptionResNetV2, Xception, InceptionV3, ResNet-50, and MobileNetV2. All the employed models yielded accuracy rates above 90%, and the Xception model outperformed the others with an accuracy of 96%.

R. Greeshma, Philomina Simon
Improving Node Energy Efficiency in Wireless Sensor Networks (WSNs) Using Energy Efficiency-Based Clustering Adaptive Routing Scheme

In this era, Wireless Sensor Networks (WSNs) are essential in the private, public, and government sectors (notably defense). WSNs can be deployed in forests, military fields, industrial areas, or residential areas. Existing WSN deployments suffer from problems such as node failure and high energy consumption, so deploying a WSN anywhere requires addressing its energy issues. An energy efficiency-based clustering adaptive routing scheme (EECARS) has been suggested and developed with the intention of identifying malicious nodes on the basis of their energy consumption. This is accomplished by comparing actual and projected energy consumption: nodes with anomalous energy use are flagged as malicious. EECARS forecasts the energy consumption of each sensor node from historical data using probability functions, conserving energy and reducing overall network consumption. Simulation results show that EECARS improves network lifetime, performance, and energy efficiency. The multi-hop relay design takes into account the available energy capacity at the relay node, clustering and residual energy, and the distance from the relay node to the base station (BS). Energy consumption for data transmission is assessed in terms of packet transmission rate and the efficiency of node data transmission.

J. Abinesh, M. Prakash, D. Vinod Kumar
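The core EECARS idea, flagging nodes whose actual energy consumption deviates from a projection built on historical data, can be sketched as follows; the moving-average forecast, tolerance, and readings are illustrative assumptions, not the paper's exact probability functions:

```python
# Sketch of EECARS-style anomaly detection: a node is flagged as malicious
# when its actual energy consumption deviates from the projected value by
# more than a tolerance. The forecast here is a simple moving average of
# historical readings (the paper uses historical data with probability
# functions); all numbers are illustrative.

def project_energy(history, window=3):
    """Projected next-round consumption: mean of the last `window` readings."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def flag_malicious(nodes, tolerance=0.25):
    """Return ids of nodes whose actual use deviates by > tolerance (fraction)."""
    flagged = []
    for node_id, (history, actual) in nodes.items():
        projected = project_energy(history)
        if abs(actual - projected) / projected > tolerance:
            flagged.append(node_id)
    return flagged

nodes = {
    "n1": ([1.0, 1.1, 0.9], 1.05),   # normal consumption
    "n2": ([1.0, 1.0, 1.0], 2.40),   # abnormal spike -> flagged
    "n3": ([0.8, 0.9, 1.0], 0.95),   # normal consumption
}
print(flag_malicious(nodes))
```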
An Evaluation of Prediction Method for Educational Data Mining Based on Dimensionality Reduction

In the area of educational data mining (EDM), it is important to develop technologically sophisticated solutions. The exponential growth of educational data raises the possibility that conventional methods may be constrained or misinterpreted, so the field of education is increasingly interested in revisiting data mining methods. This work thoroughly analyzes and predicts students' academic success using logistic regression, linear discriminant analysis (LDA), and principal component analysis (PCA) in order to forecast students' future performance in advance. Logistic regression is enhanced by comparing LDA and PCA in a bid to improve precision. The findings demonstrate that LDA improved the accuracy of the logistic regression classifier by 8.86% compared to PCA, correctly classifying 35 more instances. As a result, the model is shown to be effective for forecasting students' performance from their historical data.

B. Vaidehi, K. Arunesh
High-Performance Intelligent System for Real-Time Medical Image Using Deep Learning and Augmented Reality

Newly evolving diseases demand technology that can identify disease effectively. Medical imaging helps identify disease by scanning parts of the human body, thereby reducing the death rate. Deep learning algorithms make it easier to identify and analyze disease efficiently through medical imaging, and high-performing models are needed for diseases to be predicted accurately. The prediction rate can be increased by the efficient use of deep learning models and algorithms. This research uses deep learning models to identify brain hemorrhage and retinopathy. AlexNet and a convolutional neural network (CNN), with accuracies of 90% and 96%, respectively, are employed for the detection of brain hemorrhage, and ResNet-50 and a CNN, with accuracies of 70% and 92%, respectively, are used for the identification of retinopathy. The output of the model is displayed using augmented reality (AR), which lets the user analyze the results interactively. The AR display is achieved using the Unity engine with the Vuforia package, and the Barracuda package is used to import the deep learning model into Unity. By increasing the accuracy of the system, this research demonstrates the high performance of the intelligent system.

G. A. Senthil, R. Prabha, R. Rajesh Kanna, G. Umadevi Venkat, R. Deepa
Diabetic Retinopathy Detection Using Machine Learning Techniques and Transfer Learning Approach

Diabetic retinopathy (DR) is a disease of the eyes that arises from prolonged diabetes. It can result in loss of eyesight if not identified and treated in time. Diabetic retinopathy manifests as non-proliferative diabetic retinopathy (NPDR), the earlier stage, and proliferative diabetic retinopathy (PDR), the advanced stage. In this study, a machine learning model has been developed that classifies a given fundus image as normal, NPDR, or PDR. Initially, machine learning algorithms such as decision trees, Naive Bayes, random forest, K-nearest neighbors (KNN), and support vector machines (SVM) were applied for binary classification, but the classification accuracy was low. We therefore employed transfer learning techniques, namely ResNet-50, VGG16, and EfficientNetB0, for binary classification, which gave high validation accuracy. These transfer learning techniques were then used for multiclass classification, which gave very good validation accuracy, in line with existing research in this field.

Avanti Vartak, Sangeeetha Prasanna Ram
Recommender System of Site Information Content for Optimal Display in Search Engines

A recommender system for site information content has been developed for optimal display in search engines. A theoretical analysis of the mechanisms for attracting new customers through search engines was carried out, and an overview was performed of existing site-content techniques that affect the ranking of a website in search engines. The specific features of site information content are presented, and the advantages of using the SILO architecture are given. Basic recommendations are offered for keyword density and keyword placement on the website page. The developed recommender system for site content optimization allows setting the keyword density, analyzing the content structure of the website and of competitors' sites, and conveniently analyzing the site's overall content.

Oleg Pursky, Vitalina Babenko, Hanna Danylchuk, Tatiana Dubovyk, Iryna Buchatska, Volodymyr Dyvak
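The keyword-density analysis such a recommender performs can be sketched in a few lines; the tokenization rule and the sample page are assumptions for illustration:

```python
import re

# Sketch: computing keyword density for a page, as a content recommender
# might before comparing it against a target range. The tokenizer and the
# sample text are illustrative assumptions.

def keyword_density(text, keyword):
    """Occurrences of `keyword` as a fraction of all words in `text`."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return hits / len(words)

page = ("IoT devices connect the physical world to the Internet. "
        "IoT applications span smart homes, healthcare, and industry.")
density = keyword_density(page, "IoT")
print(f"density = {density:.1%}")
```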
Development of IoT-Based Vehicle Speed Infringement and Alcohol Consumption Detection System

One of the leading causes of major accidents is vehicle overspeeding. Everyone wants to reach their destination without encountering obstacles. To protect against reckless driving and speeding, the proposed approach develops a system that curbs excessive speed under various conditions. It also provides a user-friendly app that lets users switch voice modules, notifies their parents or guardians about speed violations and alcohol consumption, and plays a voice message to the rider. Different threshold values for speed have been defined. If the rider exceeds the allowed speed-1, a recorded voice message is played and a warning is displayed on the LCD screen. The app incorporates many recorded voice messages, one of which must be chosen before the message is played through the speakers; users can also edit their voice modules and contact details. If the rider exceeds the permissible speed-2, a message is displayed on the LCD screen and a notification is sent to the parents' or guardians' mobile phone along with the GPS coordinates of the vehicle. Furthermore, the alcohol sensor remains ON from the time the rider starts the engine until the engine is turned off. When the sensor detects alcohol, the voice message is played and a notification is sent to the user.

Raghavendra Reddy, B. S. Devika, J. C. Bhargava, M. Lakshana, K. Shivaraj
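The two-tier response described above can be sketched as simple threshold logic; the threshold values and action names are hypothetical stand-ins for the voice module, LCD, and SMS/GPS hardware:

```python
# Sketch of the two-tier speed response. SPEED_1 / SPEED_2 and the action
# strings are illustrative; in the real system they would drive the voice
# module, the LCD, and the guardian notification with GPS coordinates.

SPEED_1 = 40   # km/h: play recorded voice message, show LCD warning (assumed)
SPEED_2 = 60   # km/h: also notify guardians with GPS coordinates (assumed)

def respond_to_speed(speed_kmh, gps=(0.0, 0.0)):
    """Return the list of actions triggered at the given speed."""
    actions = []
    if speed_kmh > SPEED_1:
        actions += ["play_voice_message", "display_lcd_warning"]
    if speed_kmh > SPEED_2:
        actions.append(f"notify_guardian(gps={gps})")
    return actions

print(respond_to_speed(35))                      # no action
print(respond_to_speed(50))                      # voice + LCD
print(respond_to_speed(70, gps=(12.97, 77.59)))  # voice + LCD + notify
```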
Phonocardiogram Identification Using Mel Frequency and Gammatone Cepstral Coefficients and an Ensemble Learning Classifier

The phonocardiogram, abbreviated as the PCG signal, is one of the signals that has proven extremely useful in identifying and diagnosing cardiovascular diseases. Given the ease of acquisition that sets this signal apart from others, requiring only a stethoscope, identifying clinical signs and symptoms from it is extremely valuable. Signal processing and artificial intelligence (AI) have become foundations of biomedical signal diagnosis in general and of PCG analysis in particular. In this paper, the signal is subjected to a discrete wavelet decomposition in order to eliminate unnecessary components, identified using energy calculations on the wavelet coefficients. After reconstructing the signal, we extract the cepstral coefficients: the Mel frequency cepstral coefficients (MFCC), delta MFCC, and delta-delta MFCC, and the Gammatone cepstral coefficients (GTCC), delta GTCC, and delta-delta GTCC. Several feature sets are then produced to help ensemble learning classifiers recognize PCG signals. The models developed in this way achieved an accuracy of up to 87.76% under holdout cross-validation.

Youssef Toulni, Taoufiq Belhoussine Drissi, Benayad Nsiri
Automatic Conversion of Image Design into HTML and CSS

The article offers a method to automatically convert a web design into HTML/CSS code, a task typically performed by web application developers. The research's purpose is to save programmers the time spent recreating the logic of a design by hand. The approach extracts and recognizes a list of graphic components (structures, types, sections, elements, and their styles) along with information about their location in the graphic template. The presented solution automatically converts the pre-built web interface into HTML and CSS code using Adobe XD and scripts that run inside it. Several graphics-conversion tools are reviewed, and the advantages and capabilities of three of them are shown in a comparison table; Adobe XD stood out from the other two and was chosen for prototyping the initial design. The proposed solution can help people who are just starting in graphic design and want to learn how to easily transform their ideas into HTML/CSS source code.

Mariya Zhekova, Nedyalko Katrandzhiev, Vasil Durev
Customizing Arduino LMiC Library Through LEAN and Scrum to Support LoRaWAN v1.1 Specification for Developing IoT Prototypes

The release of LoRaWAN in 2015 introduced specification v1.0, which outlined its key features, implementation, and network architecture. However, the initial version had certain flaws, particularly vulnerabilities to replay attacks due to its encryption keys, counters, and nonce schema. To address these concerns, the LoRa Alliance subsequently released v1.1 of the LoRaWAN specification. This updated version aimed to enhance security by introducing new encryption keys, additional counters, and a revised network architecture. While the original LoRaWAN v1.0 specification spawned various device library implementations, such as IBM's LoRaWAN MAC in C (LMiC), from which Arduino-lmic was derived, none of these existing implementations adopted the improved security features of the LoRaWAN v1.1 specification. To address the lack of an open-source implementation for v1.1 end devices on open hardware platforms and to leverage the security enhancements of v1.1, a solution was devised and implemented to adapt the Arduino-lmic library. The adaptation followed the principles of continuous improvement from the LEAN software development methodology, combined with the Scrum framework.

Juan M. Sulca, Jhonattan J. Barriga, Sang Guun Yoo, Sebastián Poveda Zavala
Prevention of Wormhole Attack Using Mobile Secure Neighbour Discovery Protocol in Wireless Sensor Networks

Wireless sensor networks (WSNs) are vulnerable to various types of attack, one of which is the wormhole attack. A wormhole attack can severely damage the network by creating a tunnel between two distant nodes, enabling attackers to bypass normal network routes and steal sensitive information. In this project, we propose a prevention mechanism for the wormhole attack using the Mobile Secure Neighbour Discovery Protocol in WSNs. We implemented the proposed mechanism in the NS2 simulator and evaluated its performance against the wormhole attack. The mechanism uses a unique secret key between nodes to prevent attackers from creating a tunnel between them, and detects wormholes by tracking the time messages take to reach their destination. Our simulation results show that the proposed mechanism is effective in preventing the wormhole attack in WSNs: it successfully detects and isolates the malicious nodes responsible for the attack, ensuring the security and reliability of the network. Moreover, the mechanism incurs minimal overhead and does not affect network performance. It improved network throughput, packet delivery ratio, and energy efficiency while reducing delay, false detection ratio, and overhead. Our findings indicate that the proposed mechanism can be a useful tool for securing WSNs against the wormhole attack.

D. Jeyamani Latha, N. Rameswaran, M. Bharathraj, R. Vinoth Raj
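The timing check underlying the detection, in which a claimed one-hop neighbour whose messages take longer than radio propagation across the maximum range allows is treated as suspect, can be sketched as follows; the radio range, processing slack, and RTT values are illustrative assumptions:

```python
# Sketch of round-trip-time neighbour verification against a wormhole.
# A genuine one-hop neighbour's RTT is bounded by twice the time a signal
# needs to cross the maximum radio range, plus a processing allowance.
# All constants below are illustrative assumptions.

SPEED_OF_LIGHT = 3.0e8        # m/s (radio propagation)
MAX_RADIO_RANGE = 100.0       # m, assumed transmission range
PROCESSING_SLACK = 2.0e-6     # s, assumed per-hop processing allowance

MAX_LEGIT_RTT = 2 * MAX_RADIO_RANGE / SPEED_OF_LIGHT + PROCESSING_SLACK

def is_suspect_neighbour(measured_rtt_s):
    """True if the RTT exceeds what a genuine one-hop link could produce."""
    return measured_rtt_s > MAX_LEGIT_RTT

# A wormhole tunnelling packets 5 km through the attackers' link shows an
# RTT far above the one-hop bound.
honest_rtt = 2 * 80.0 / SPEED_OF_LIGHT + 1.0e-6
wormhole_rtt = 2 * 5000.0 / SPEED_OF_LIGHT + 1.0e-6
print(is_suspect_neighbour(honest_rtt), is_suspect_neighbour(wormhole_rtt))
```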
Comparison of Feature Extraction Methods Between MFCC, BFCC, and GFCC with SVM Classifier for Parkinson’s Disease Diagnosis

Analysis of the voice signal can assist in the detection of Parkinson's disease, a degenerative, progressive neurological disorder affecting the central nervous system. Early changes in voice patterns and characteristics are frequently observed in patients with Parkinson's disease, so voice feature extraction can aid early identification. This paper presents novel approaches to extracting features from speech signals using Gammatone frequency cepstral coefficients (GFCC), Bark frequency cepstral coefficients (BFCC), and Mel frequency cepstral coefficients (MFCC). The PC-GITA and Sakar databases are used, which contain speech signals from healthy individuals and from individuals with Parkinson's disease. Coefficients 1 to 20 of the GFCC, BFCC, and MFCC are extracted from each speech signal, and their average values are computed to form the voiceprint of each signal. For classification, support vector machines with different kernels (linear, RBF, and polynomial) and tenfold cross-validation are employed. Using the first 12 GFCC coefficients with a linear kernel yields the highest accuracy: 81.58% for the Sakar database and 76% for the PC-GITA database.

N. Boualoulou, Taoufiq Belhoussine Drissi, Benayad Nsiri
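The voiceprint step described above, averaging each coefficient across all frames of a signal, can be sketched in plain Python; the per-frame coefficient matrix below is invented, since actual MFCC/BFCC/GFCC extraction requires a DSP library:

```python
# Sketch of the voiceprint step: given per-frame cepstral coefficients
# (one row per frame), keep the first n_coeffs coefficients and average
# them across frames to get one fixed-length vector per speech signal.
# The frame values are made-up placeholders.

def voiceprint(frames, n_coeffs=12):
    """Average the first n_coeffs coefficients over all frames."""
    n_frames = len(frames)
    return [sum(frame[c] for frame in frames) / n_frames
            for c in range(n_coeffs)]

frames = [
    [0.9, 0.1, 0.4, 0.2],   # frame 1 coefficients
    [1.1, 0.3, 0.2, 0.4],   # frame 2 coefficients
    [1.0, 0.2, 0.3, 0.3],   # frame 3 coefficients
]
print(voiceprint(frames, n_coeffs=3))
```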
A Comprehensive Study on Artificial Intelligence-Based Face Recognition Technologies

Face recognition technology is a technique that recognizes and authenticates individuals based on their unique facial features. It has various applications, including security, access control, identity verification, and social media tagging. These systems use algorithms to analyze facial traits and create a facial template for comparison with a database of known faces. Advances in machine learning and computer vision have improved the accuracy of face recognition technology. However, concerns about the privacy and security implications of biometric data collection and storage have arisen. This paper provides an overview of the history, techniques, algorithms, applications, performance evaluation metrics, challenges, and future directions of face recognition technology.

Sachin Kolekar, Pratiksha Patil, Pratiksha Barge, Tarkeshwari Kosare
Design of IoT-Based Smart Wearable Device for Human Safety

Nowadays, owing to various circumstances, the safety of women and children has become very important, and there is a need for a smart device that automatically senses danger and helps rescue victims. The proposed approach uses Internet of Things (IoT) technology for women's and children's safety. The proposed smart wearable device consists of various hardware components and the Blynk app, and communicates continuously with a smartphone over the Internet. It uses GPS and messaging services to send location information and alert messages. Whenever it receives an emergency signal, the system can alert the relevant authorities, the nearest police station, relatives, and emergency response teams through automated notifications, allowing them to respond quickly and efficiently. The approach aims to provide a comprehensive solution for women's and children's safety, covering the development of the application and hardware, including integration of the alert system and emergency calls, while ensuring the solution is user-friendly, reliable, and effective in critical situations.

Raghavendra Reddy, Geethasree Srinivasan, K. L. Dhaneshwari, C. Rashmitha, C. Sai Krishna Reddy
Detection of Artery/Vein in Retinal Images Using CNN and GCN for Diagnosis of Hypertensive Retinopathy

Hypertensive retinopathy (HR) is a progressive disease of the retina linked to both high blood pressure and diabetes mellitus. The severity and persistence of high blood pressure are directly connected with the development of HR. Its effects include constricted arterioles, retinal hemorrhage, macular edema, and cotton-wool patches as symptoms of eye pathology. In this paper, a novel strategy that combines a convolutional neural network (CNN) and a graph convolutional network (GCN) is proposed to improve the accuracy of categorizing retinal blood vessels and classifying HR phases. The process extracts vessel features from the spatial domain and represents them using graphs.

Esra’a Mahmoud Jamil Al Sariera, M. C. Padma, Thamer Mitib Al Sariera
An Evolutionary Optimization Based on Clustering Algorithm to Enhance VANET Communication Services

Vehicular ad hoc networks (VANETs) have developed as a framework for facilitating intelligent communication between vehicles and enhancing traffic safety and performance. Effective communication among vehicular nodes in VANETs is crucial due to high vehicle mobility, a varying number of vehicles, and dynamic inter-vehicle spacing. Hence, the evolutionary Honey Badger Algorithm (HBA) is used to improve communication in VANETs, as it can operate successfully in high-mobility settings. HBA is a biologically inspired evolutionary algorithm combined with a game-theoretic routing protocol that dynamically adjusts to changes in network topology and distributes the load across network nodes through cluster formation, since clustering improves network performance and scalability. Experimental comparisons of our approach are made with popular techniques such as Ant Colony Optimization (ACO), Hunger Games Search (HGS), Particle Swarm Optimization (PSO), and Firefly Optimization (FFO). The performance metrics packet delivery ratio, throughput, end-to-end delay, mean routing load, control packet overhead, and energy are used to assess the performance of communication services in VANETs. Experiments are carried out in MATLAB, and the findings show that HBA delivers the best results for implementing vehicular services.

Madhuri Husan Badole, Anuradha D. Thakare
Visual Sentiment Analysis: An Analysis of Emotions in Video and Audio

Natural Language Processing (NLP)-based sentiment analysis examines opinions, feelings, and emotions expressed in emails, social media posts, YouTube videos, reviews, business documents, etc. Sentiment analysis on audio and video is a mostly unexplored area of study in which the speaker's sentiments and emotions are gathered from the audio and feelings are gathered from the video. The goal of visual sentiment analysis is to understand how visuals affect people's emotions. Despite being a relatively new topic, a wide range of strategies based on diverse data sources and challenges has been developed in recent years, resulting in a substantial body of study. This study examines relevant publications and provides an in-depth analysis. After describing the task and its applications, the subject is broken down into primary topics. The study also discusses general visual sentiment analysis design principles from three perspectives: emotional models, dataset creation, and feature design. The problem is formalized by considering multiple levels of granularity and the components that can affect it. To accomplish this, the research examines a structured formalization of the task that is often used in text analysis and assesses its relevance to visual sentiment analysis. The discussion includes new challenges, progress toward sophisticated systems, related practical applications, and a summary of the study's findings. Experimentation was also conducted on the FER-2013 dataset from Kaggle for facial emotion detection.

Rushali A. Deshmukh, Vaishnavi Amati, Anagha Bhamare, Aditya Jadhav
Design and Functional Implementation of Green Data Center

The demand for data storage and processing is increasing worldwide, resulting in a significant environmental impact. To address this, the concept of a “Green Data Center” has emerged, which focuses on reducing the energy consumption and carbon emissions of data centers. In developing countries, data centers are energy-intensive and rely on non-renewable energy sources, contributing to environmental degradation and climate change. This research paper proposes a green data center with renewable energy sources, energy-efficient equipment, and a cooling system that utilizes outside air. The energy supply will be provided by renewable sources, and energy consumption will be reduced by utilizing the latest technology for power distribution, cooling, and lighting. The paper presents an economic feasibility analysis of a green data center, which shows that the initial capital investment is higher but the operating costs are lower thanks to renewable energy sources, resulting in net cost savings over time. In conclusion, this research paper proposes a sustainable solution to the growing demand for data storage and processing, meeting the increasing demand for digital services while contributing to the country's efforts to combat climate change.

Iffat Binte Sorowar, Mahabub Alam Shawon, Debarzun Mozumder, Junied Hossain, Md. Motaharul Islam
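The economic argument, a higher initial investment recovered through lower operating costs, reduces to a simple payback calculation; all figures below are hypothetical placeholders:

```python
# Sketch of the green-vs-conventional data-center cost comparison.
# All monetary figures are hypothetical placeholders, not from the paper.

def payback_years(extra_capex, annual_opex_savings):
    """Years until cumulative operating savings cover the extra capital cost."""
    return extra_capex / annual_opex_savings

def cumulative_saving(extra_capex, annual_opex_savings, years):
    """Net saving of the green design after `years` of operation."""
    return annual_opex_savings * years - extra_capex

extra_capex = 2_000_000        # green build premium, e.g., solar + free cooling
annual_opex_savings = 500_000  # cheaper renewable energy and cooling per year

print(f"payback: {payback_years(extra_capex, annual_opex_savings):.1f} years")
print(f"10-year net saving: {cumulative_saving(extra_capex, annual_opex_savings, 10):,}")
```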
Patient Pulse Rate and Oxygen Level Monitoring System Using IoT

Technology in the medical field can minimize the need for hospitalization and pave the way for continuous, remote monitoring of patient health. The Internet of Things (IoT) now makes such smart monitoring achievable. IoT is a trend in which a large number of embedded devices (things) are linked to the Internet, and it is extremely useful for storing and sharing data in any situation. Applications of data sharing over IoT are vast, including theft detection, smart home systems, and automatic door locks, and IoT allows information to be shared with remote areas. It has numerous medical applications, including proper documentation and maintenance of health reports, identifying a patient's emergencies, and assisting in the timely initiation of proper treatment. Accordingly, we are developing a reliable monitoring technique with an ESP32 over the Internet of Things. It continuously measures the heart rate (ECG), pulse, and oxygen levels of patients, which must be monitored for patients with complex conditions. An IoT platform is used to process the data in this application: an IoT analytics tool that aggregates, visualizes, and analyzes live data streams and displays real-time continuous data from nearby devices. The system can analyze and process the data it takes in and transmit it to any desired location in real time.

K. Stella, M. Menaka, R. Jeevitha, S. J. Jenila, A. Devi, K. Vethapackiam
IoT-Based Solution for Monitoring Gas Emission in Sewage Treatment Plant to Prevent Human Health Hazards

As the global population grows and water resources become scarce, the only way to conserve them effectively is to treat and reuse them adequately. In sewage treatment plants, ensuring a safe gaseous environment for workers’ health is essential. Long exposure to toxic gases may cause health issues for workers, so it is essential to understand these factors and take the necessary precautions to prevent deterioration of their health. In this work, we aim to develop a sensor-based system that measures the concentration levels of toxic gases such as methane, ammonia, carbon dioxide, and hydrogen sulphide emitted from the plant into its surroundings, and that raises timely alerts to safeguard the people nearby. We compared these gas levels with data captured in clean air to demonstrate the high density of toxic gases in the plant.
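An illustrative sketch of the alerting logic such a system might use: compare each gas reading (in ppm) against a safe-exposure threshold and flag any gas that exceeds it. The threshold values here are placeholders, not the paper's calibrated limits.

```python
SAFE_LIMITS_PPM = {          # assumed illustrative limits, not from the paper
    "methane": 1000,
    "ammonia": 25,
    "carbon_dioxide": 5000,
    "hydrogen_sulphide": 10,
}

def check_readings(readings_ppm):
    """Return the list of gases whose readings exceed their safe limit."""
    return [gas for gas, value in readings_ppm.items()
            if value > SAFE_LIMITS_PPM.get(gas, float("inf"))]

sample = {"methane": 850, "ammonia": 32, "carbon_dioxide": 4100,
          "hydrogen_sulphide": 14}
print(check_readings(sample))  # ['ammonia', 'hydrogen_sulphide']
```

In deployment, a non-empty result would trigger the alert to workers in the plant.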

S. Ullas, B. Uma Maheswari
Evaluation of the Capabilities of LDPC Codes for Network Applications in the 802.11ax Standard

The use of Wi-Fi-enabled devices increases every year, and current consumer demands call for higher data rates, greater reliability, and better energy efficiency. The proposed chapter studies the signal code constructions (SCC) used in the 802.11ac and 802.11ax Wi-Fi standards. Aspects of beamforming and the use of LDPC codes in robust implementations are described. The relevance of the work lies in creating recommendations for the use of LDPC codes and their implementation in a hardware description language (HDL). The chapter focuses on the design of an LDPC decoder based on the Normalized Min-Sum algorithm in HDL. Recommendations for the efficient use of LDPC-based SCC in 802.11 applications are presented. LDPC codes are popular because they offer very good performance and allow simple hardware implementations. The proposed results will be useful for optimizing signal selection for modern network applications.
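A minimal sketch of the check-node update at the heart of Normalized Min-Sum LDPC decoding, the algorithm the chapter implements in HDL. For each edge, the outgoing message is the product of the signs of the other incoming log-likelihood ratios (LLRs) times the minimum of their magnitudes, scaled by a normalization factor; the value 0.75 below is a commonly used choice assumed for illustration, not necessarily the chapter's.

```python
def check_node_update(llrs, alpha=0.75):
    """Normalized Min-Sum update: one outgoing message per incoming LLR."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]          # exclude the edge's own message
        sign = 1.0
        for v in others:
            sign *= 1.0 if v >= 0 else -1.0       # product of signs
        magnitude = min(abs(v) for v in others)   # min of magnitudes
        out.append(alpha * sign * magnitude)
    return out

print(check_node_update([2.0, -1.0, 4.0]))  # [-0.75, 1.5, -0.75]
```

In hardware, this min/sign computation is what makes Min-Sum far cheaper than the exact sum-product update, at a small cost in performance that the normalization factor partly recovers.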

Juliy Boiko, Ilya Pyatin, Oleksander Eromenko, Lesya Karpova
Parallel Optimization Technique to Improve the Performance of Lightweight Intrusion Detection Systems

In recent years, the need for effective and lightweight intrusion detection systems (LIDS) has grown significantly due to the widespread adoption of Internet of Things (IoT) devices and the increasing number of cyber threats. This paper presents a novel parallel optimization technique to enhance the performance of LIDS in terms of accuracy, detection rate, and computational efficiency. Our approach employs a combination of machine learning algorithms and parallel computing to process and analyze network data in a highly efficient manner. We investigate the effectiveness of various feature selection techniques and ensemble models in the context of parallel processing to optimize the overall performance of the LIDS. Furthermore, we propose a hybrid model that seamlessly integrates the selected feature subsets and ensemble classifiers for improved accuracy and reduced false alarm rates. To evaluate the proposed technique, we conduct extensive experiments using real-world datasets and compare our approach with existing state-of-the-art LIDS. The results demonstrate that our parallel optimization technique significantly outperforms the current methods, achieving higher detection rates, better accuracy, and reduced computational overhead. This research contributes to the development of more effective and resource-efficient LIDS, which are crucial for the security of IoT ecosystems and other resource-constrained environments.

Quang-Vinh Dang
Enhancement in Securing Open Source SDN Controller Against DDoS Attack

The SDN network paradigm was developed to overcome the limitations of traditional networks by separating the control and data planes, resulting in greater flexibility and scalability. However, its centralized architecture is vulnerable to DDoS attacks, posing a threat to network availability. To address this, the paper proposes using machine learning, specifically a support vector machine (SVM), to analyze flow table data and detect potentially malicious traffic as a countermeasure. By employing these techniques, SDN networks can detect and mitigate DDoS attacks, reducing their impact on network performance and availability. The efficacy of these techniques has been demonstrated through experimentation on the CIC-DDoS2019 dataset. Future enhancements may include optimizing individual flows for DDoS detection and deploying the model to the SDN cloud for use in public networks.
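A hedged illustration of the detection step: once an SVM has been trained offline on flow-table statistics, classifying a new flow reduces to evaluating the sign of a linear decision function. The feature set, weights, and bias below are made-up stand-ins for a trained model, not values from the paper.

```python
def svm_decision(features, weights, bias):
    """Linear SVM decision function: sign(w . x + b)."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "ddos" if score > 0 else "benign"

# Assumed feature order: [packets/s, bytes/s (scaled), new flow entries/s]
weights = [0.8, 0.1, 1.2]   # illustrative "trained" weights
bias = -5.0

print(svm_decision([1.0, 2.0, 1.0], weights, bias))   # benign
print(svm_decision([6.0, 3.0, 4.0], weights, bias))   # ddos
```

In an SDN controller, flows classified as malicious would then be rate-limited or dropped via flow-rule updates.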

S. Virushabadoss, T. P. Anithaashri
Proposal of a General Model for Creation of Anomaly Detection Systems in IoT Infrastructures

The inclusion of IoT in our platforms is very common nowadays; however, it is necessary to monitor its proper functioning to avoid harmful effects on the systems that consume the data. Developing anomaly detection systems (ADS) requires proposals that systematize the construction of such systems, in this case focused on IoT and its underlying problems, such as redundant metadata, formats inadequate for processing and algorithm generation, the need for preprocessing, or the need to change the structure of the data. This work proposes an ADS model that characterizes and choreographs the processes, sub-processes, stages, and activities involved in generating this type of system. To validate the proposal, the model has been instantiated to create an ADS for the Smart University platform of the University of Alicante.

Lucia Arnau Muñoz, José Vicente Berná Martínez, Jose Manuel Sanchez Bernabéu, Francisco Maciá Pérez
Internet of Things (IoT) and Data Analytics for Realizing Remote Patient Monitoring

In the wake of recent advancements in technologies such as 5G, the Internet of Things (IoT), and Artificial Intelligence (AI), unprecedented smart solutions are made possible, including plenty of IoT use cases that have not been practical so far. In the healthcare domain, IoT is being used to realize eHealth and mHealth applications that enable Ambient Assisted Living (AAL). Human health is given the highest importance, yet the existing healthcare domain has many practical issues, including delays in healthcare services and the high costs of procedures; many prominent personalities have died of heart attacks because medical intervention was delayed. The need of the hour is therefore real-time patient monitoring and treatment without delay. IoT makes applications like remote patient monitoring (RPM) possible: wearable devices (biosensors) with IoT integration transmit patients’ vital signs to doctors in real time, enabling treatment to be initiated immediately. RPM has the potential to save time, reduce healthcare costs, and improve both patients’ quality of life and the quality of healthcare services. This paper throws light on the present state of the art of RPM using IoT and paves the way for identifying potential research gaps for leveraging RPM systems. The proposed IoT-integrated RPM system employs a data analytics algorithm for remote patient monitoring, and experimental results are presented that enable the system to analyze patient health details.

A. Bharath, G. Merlin Sheeba
A Brief Review of Swarm Optimization Algorithms for Electrical Engineering and Computer Science Optimization Challenges

Swarm intelligence (SI), a crucial element in the science of artificial intelligence, is steadily gaining importance as more and more highly complex problems demand solutions that may be imperfect but are nonetheless achievable within a reasonable amount of time. Swarm intelligence, which takes most of its inspiration from biological systems, imitates how gregarious groups of animals work together to survive. This paper discusses the key ideas, highlights potential application domains and variations, and provides a thorough analysis of three SI algorithms: the Dragonfly algorithm (DA), Grey wolf optimization (GWO), and the Whale optimization algorithm (WOA), together with their applicability to many branches of electrical engineering and computer science. Finally, these methods were applied to the Zakharov, Levy, Sum Squares, and Drop Wave functions to measure their algorithmic cost. According to the study, WOA outperforms the other two algorithms on the Levy and Zakharov functions, while GWO and DA perform better on the Sum Squares and Drop Wave functions, respectively.
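The benchmark functions used in the comparison have standard closed forms; the sketch below evaluates two of them (Zakharov and Sum Squares, in their usual textbook definitions) so the cost comparison can be reproduced. Both attain their global minimum of 0 at the origin.

```python
def zakharov(x):
    """Zakharov function: sum(x_i^2) + s^2 + s^4 with s = sum(0.5*i*x_i)."""
    s1 = sum(v * v for v in x)
    s2 = sum(0.5 * (i + 1) * v for i, v in enumerate(x))
    return s1 + s2 ** 2 + s2 ** 4

def sum_squares(x):
    """Sum Squares function: sum(i * x_i^2), i starting at 1."""
    return sum((i + 1) * v * v for i, v in enumerate(x))

print(zakharov([0.0, 0.0]))      # 0.0 at the global optimum
print(sum_squares([1.0, 2.0]))   # 1*1 + 2*4 = 9.0
```

Any of the three SI algorithms can then be benchmarked by counting how many such evaluations it needs to approach the optimum.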

Vaibhav Godbole, Shilpa Gaikwad
Facilitating Secure Web Browsing by Utilizing Supervised Filtration of Malicious URLs

Nowadays, Internet use has become part of daily life for everyone, whether for business, online shopping, education, entertainment, or social communication. Attackers therefore exploit Internet users by tricking them into clicking a specific link, carrying out a phishing attack to collect sensitive information such as credentials, usernames, email and banking passwords, and electronic payment details. Many methods have appeared to protect against this type of attack on the end user; among them is the use of machine learning (ML) algorithms to distinguish proper URLs from malicious phishing ones. Using several classification algorithms based on certain features found in any URL and a ready-made dataset, the best-performing algorithms are determined and recommended according to their accuracy. We applied different machine learning algorithms to the same dataset, namely Decision Tree, Logistic Regression, Linear Discriminant, Gradient, and Random Forest. The best accuracy was achieved by the RF algorithm, at 97%.
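A minimal sketch of the kind of lexical features such classifiers extract from a raw URL before feeding it to models like Decision Tree or Random Forest. The feature set below is illustrative; the paper's exact features may differ.

```python
def url_features(url):
    """Extract simple lexical features from a URL string."""
    host = url.split("//")[-1].split("/")[0]
    return {
        "length": len(url),              # phishing URLs tend to be long
        "num_dots": url.count("."),      # many subdomains are suspicious
        "has_at": "@" in url,            # '@' can hide the real host
        "has_ip": any(part.isdigit() for part in host.split(".")),
        "uses_https": url.startswith("https://"),
        "num_hyphens": url.count("-"),
    }

feats = url_features("http://secure-login.example.com.attacker.io/verify")
print(feats["num_dots"], feats["has_at"], feats["uses_https"])  # 4 False False
```

Each URL becomes a fixed-length feature vector, which is exactly the input format the listed classifiers expect.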

Ali Elqasass, Ibrahem Aljundi, Mustafa Al-Fayoumi, Qasem Abu Al-Haija
Unveiling the Impact of Outliers: An Improved Feature Engineering Technique for Heart Disease Prediction

Outliers can have a significant impact on statistical analysis and machine learning models, as they can distort results and adversely affect model performance. Detecting and handling outliers is therefore an important step in data analysis to ensure accurate and reliable results. Techniques such as Z-scores, box plots, the IQR, and local outlier factor analysis are commonly used to identify and handle outliers in datasets. These methods detect data points that differ significantly from the remaining data and help determine whether those points should be removed or transformed to improve model accuracy. This research study proposes a Feature Engineering for Outlier Detection and Removal (FEODR) technique that uses statistical methods to detect and remove outliers in heart datasets. The approach aims to enhance the effectiveness of the prediction system by handling outliers effectively, and evaluating the model provides insights into how well the approach improves the machine learning classifiers.
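A sketch of the IQR-based step that a FEODR-style pipeline might apply: values outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR] are treated as outliers and removed. The quartile method (linear interpolation) and the sample values are assumptions for illustration.

```python
def quartiles(values):
    """First and third quartiles via linear interpolation."""
    s = sorted(values)
    def q(p):
        k = p * (len(s) - 1)
        lo, hi = int(k), min(int(k) + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (k - lo)
    return q(0.25), q(0.75)

def remove_outliers_iqr(values, factor=1.5):
    """Keep only values inside [Q1 - f*IQR, Q3 + f*IQR]."""
    q1, q3 = quartiles(values)
    iqr = q3 - q1
    lo, hi = q1 - factor * iqr, q3 + factor * iqr
    return [v for v in values if lo <= v <= hi]

cholesterol = [180, 195, 210, 205, 190, 600]   # 600 is an obvious outlier
print(remove_outliers_iqr(cholesterol))        # [180, 195, 210, 205, 190]
```

The cleaned feature column then replaces the original before the classifiers are trained.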

B. Kalaivani, A. Ranichitra
Real-Time Road Hazard Classification Using Object Detection with Deep Learning

Potholes and speed bumps are common road hazards that can cause vehicle damage and put drivers in danger. This research proposes an innovative deep learning framework relying on the YOLOv8 architecture. To improve the model’s accuracy and resilience, it is fine-tuned on a distinctive annotated dataset containing images of road surfaces with annotated potholes and speed bumps, which helps the model recognize these features. The model uses the power of convolutional neural networks to analyze road images and make high-accuracy predictions. The proposed system can be integrated into vehicles and other transportation systems to provide drivers with timely and reliable alerts, improving road safety and reducing vehicle damage. The experiments show that the approach detects potholes and speed bumps with good precision, recall, mAP, and F1-score, providing an innovative solution for real-time detection.

M. Sanjai Siddharthan, S. Aravind, S. Sountharrajan
A Smart Irrigation System for Plant Health Monitoring Using Unmanned Aerial Vehicles and IoT

The use of the Internet of Things (IoT) and Unmanned Aerial Vehicles (UAVs) in agriculture has revolutionized traditional farming practices, giving rise to precision agriculture. With an increasing global population and the challenges posed by climate change, there is a need for novel farming solutions that conserve resources and increase food production. Agriculture depends heavily on water resources, and improper irrigation and excessive nutrient application often lead to water wastage and nutrient leaching, while monsoon-dependent farmlands suffer drought-related challenges. IoT-enabled smart irrigation systems are therefore preferred over traditional irrigation methods. This research proposes a smart irrigation system for plant health monitoring using UAVs and the IoT. The data collected through the sensors is transmitted to a cloud-based platform for analysis, offering timely insights into plant health. The proposed system could help enhance plant health, increase yields, reduce resource consumption and labor costs, and provide valuable insights to help farmers make informed decisions.
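A hedged sketch of the irrigation decision such a system could run on the cloud side: combine a soil-moisture reading with a short-term rain probability and decide whether to open the valve. The thresholds are illustrative assumptions, not values from the paper.

```python
def should_irrigate(soil_moisture_pct, rain_probability, *,
                    dry_threshold=30.0, rain_skip=0.6):
    """Irrigate only if the soil is dry and significant rain is not expected."""
    if soil_moisture_pct >= dry_threshold:
        return False                 # soil already moist enough
    if rain_probability >= rain_skip:
        return False                 # let the forecast rain do the work
    return True

print(should_irrigate(22.0, 0.1))   # True  (dry, no rain expected)
print(should_irrigate(22.0, 0.8))   # False (rain expected, skip watering)
print(should_irrigate(45.0, 0.0))   # False (already moist)
```

Even a rule this simple avoids the double waste of watering before rain, which is part of the resource saving the abstract describes.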

J. Vakula Rani, Aishwarya Jakka, M. Jagath
Green IoT-Based Automated Door Hydroponics Farming System

Hydroponics refers to growing plants in a nutrient-rich water solution instead of soil. The roots of the plants are submerged in the solution, which provides them with all the nutrients necessary for growth, and water serves as the main growing medium in all hydroponic systems. A greenhouse environment must maintain specific temperature, pressure, and humidity levels, and another challenging task in hydroponics is monitoring the pH value and electrical conductivity; manual monitoring and late correction may cause the plants to die. This paper proposes a fully automated hydroponics system integrated with green IoT, which ensures an environment-friendly, energy-saving, and sustainable farming method. The system automatically monitors and adjusts parameters, supplies the necessary resources, and uploads data to a cloud server using IoT. An application informs users of the current status on their mobile devices for quick monitoring and maintenance.

Syed Ishtiak Rahman, Md. Tahalil Azim, Md. Fardin Hossain, Sultan Mahmud, Shagufta Sajid, Md. Motaharul Islam
Explainable Artificial Intelligence-Based Disease Prediction with Symptoms Using Machine Learning Models

Artificial intelligence (AI) has the potential to revolutionize the field of healthcare by automating many tasks, enabling more efficient diagnosis and treatment. However, one of the challenges with AI in healthcare is the need for explainability, as the decisions made by these systems can have serious consequences for patients. AI can be particularly useful in the classification of diseases based on symptoms: machine learning algorithms analyze a patient’s symptoms and classify them as indicating a particular disease or condition. While black-box machine learning algorithms can be highly accurate, there is little to no understanding of how these models work. Therefore, using techniques such as feature importance analysis and Explainable AI, it is possible to provide clear explanations of the decision-making process, which can improve trust and understanding among healthcare providers and patients.

Gayatri Sanjana Sannala, K. V. G. Rohith, Aashutosh G. Vyas, C. R. Kavitha
Deep Learning Methods for Vehicle Trajectory Prediction: A Survey

Trajectory prediction in autonomous vehicles deals with predicting the future states of other vehicles and traffic participants in order to avoid possible collisions and decide the best possible maneuver given the current state of the surroundings. As the commercial use of self-driving cars increases rapidly, the need for reliable and efficient trajectory prediction has become more pressing than ever. While approaches that rely on the principles and laws of physics and traditional machine learning methods have laid the groundwork for trajectory prediction research, they have primarily proven effective only in uncomplicated driving scenarios. With the increasing computing power of machines, the task has seen a rise in the popularity of deep learning (DL) techniques, which have demonstrated high levels of reliability. Driven by this popularity, we provide a systematic review of the popular deep learning methods used for vehicle trajectory prediction. We first discuss the problem formulation. We then classify the different methods along three dimensions: the type of output they generate, whether they take social context into account, and the type of deep learning technique they use. Next, we compare the performance of some popular methods on a common dataset, and finally we discuss potential research gaps and future directions.

Shuvam Shiwakoti, Suryodaya Bikram Shahi, Priya Singh
Adaptive Hybrid Optimization-Based Deep Learning for Epileptic Seizure Prediction

Epilepsy, commonly manifested as seizures, is caused by abnormal electrical activity in the brain, sometimes described as an electrical storm inside the head. An effective seizure prediction technique is required to decrease the lifetime risk faced by epilepsy patients. Recently, numerous research works have been devised to predict epileptic seizures (ES) based on Electroencephalography (EEG) signal analysis. In this paper, a novel epileptic seizure prediction (ESP) method is introduced: the proposed Adaptive Exponential Squirrel Atom Search Optimization (Adaptive Exp-SASO)-based Deep Residual Neural Network (DRNN). Here, a Gaussian filter removes the artifacts present in the EEG signal. To increase detection performance, statistical and spectral features are extracted, and the significant features suitable for prediction are chosen by Fuzzy Information Gain (FIG). The ES is then predicted by the DRNN, whose weights are tuned by the proposed Adaptive Exp-SASO approach. The experimental results revealed that the proposed method provides an accuracy of 97.87%, a sensitivity of 97.85%, and a specificity of 98.88%.
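A minimal sketch of the artifact-removal step: smoothing an EEG signal by convolving it with a normalized Gaussian kernel. The kernel width is an illustrative choice; the paper's filter parameters are not reproduced here.

```python
import math

def gaussian_kernel(radius, sigma):
    """Discrete Gaussian kernel normalized to sum to 1."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]

def smooth(signal, radius=2, sigma=1.0):
    """Convolve the signal with a Gaussian kernel, clamping at the edges."""
    kernel = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # edge clamp
            acc += w * signal[idx]
        out.append(acc)
    return out

spiky = [0.0, 0.0, 10.0, 0.0, 0.0]       # a single spike, artifact-like
print([round(v, 2) for v in smooth(spiky)])
```

The spike's energy is spread over its neighbors, which is the suppression effect the filtering stage relies on before feature extraction.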

Ratnaprabha Ravindra Borhade, Shital Sachin Barekar, Tanisha Sanjaykumar Londhe, Ravindra Honaji Borhade, Shriram Sadashiv Kulkarni
Preeminent Sign Language System by Employing Mining Techniques

People who are deaf and mute use sign language to communicate with members of their own community and with other communities. Computer-aided sign language recognition spans sign gesture acquisition through text/speech generation. Both static and dynamic gesture recognition matter to humans, yet static recognition is more straightforward. This study provides a method to recognise 32 American finger-spelling alphabets from static images, independent of the signer and the image-capture surroundings. The work includes data collection, preprocessing, transformation, feature extraction, classification, and results. Incoming images are mapped to YCbCr, binarised, and normalised, and the resulting binary images are analysed using principal component analysis. After feature extraction, an LSTM is used to recognise the sign language alphabet with 95.6% accuracy.

Gadiraju Mahesh, Shiva Shankar Reddy, V. V. R. Maheswara Rao, N. Silpa
Cyber Analyzer—A Machine Learning Approach for the Detection of Cyberbullying—A Survey

With the widespread use of the Internet over time, an issue known as cyberbullying has emerged. The victim of cyberbullying may experience major effects on their mental health, so it is necessary to identify cyberbullying on the Internet and on social media. The subject of detecting cyberbullying has been extensively studied, and machine learning is one method for detecting it automatically. Several studies on cyberbullying were reviewed for this article; the detection problem has been addressed using a range of models and NLP techniques. A graph summarizes the target features analyzed and the number of class labels across the studies examined.

Shweta, Monica R. Mundada, B. J. Sowmya, Meeradevi
Osteoarthritis Detection Using Deep Learning-Based Semantic GWO Threshold Segmentation

Knee osteoarthritis (OA) is a prevalent degenerative joint ailment that affects people all over the world. Because of its increasing occurrence, accurate diagnosis of osteoarthritis at an early stage is a tough task. Imaging modalities such as conventional radiography, MRI, and ultrasound are essential components for diagnosing knee OA in its early stages. Meanwhile, deep neural network (DNN) designs are extensively used in medical image analysis for accurate classification of OA diagnoses. Image segmentation, also known as pixel-level categorization, is the method of categorizing portions of an image composed of the same object class by partitioning images into multiple segments. The lack of accuracy in traditional approaches can be overcome using deep learning. Hence, this paper presents a deep learning-based semantic grey wolf optimization (GWO) threshold segmentation to detect osteoarthritis accurately at all stages. The proposed work involves two stages, CT image normalization and histogram correction, to enhance the image accurately. A comparative analysis with existing methods has also been done using evaluation parameters such as sensitivity, specificity, accuracy, MSE, PSNR, SSIM, and MAE for the accurate diagnosis of OA.

R. Kanthavel, Martin Margala, S. Siva Shankar, Prasun Chakrabarti, R. Dhaya, Tulika Chakrabarti
Federated Learning-Based Techniques for COVID-19 Detection—A Systematic Review

The COVID-19 pandemic has created a significant need for accurate and rapid diagnosis of the disease. Traditional methods of diagnosis, such as PCR-based tests, have several limitations, including high cost, long turnaround times, and the need for specialised equipment and personnel. A promising technique for COVID-19 detection is federated learning (FL), which enables the cooperative training of machine learning models using distributed data sources while ensuring data privacy. This survey report provides an overview of the current state-of-the-art for COVID-19 detection utilising FL. We review the key concepts and principles of FL, and then discuss the various approaches used for COVID-19 detection, including deep learning-based approaches, transfer learning, and ensemble learning. We also examine the challenges and limitations of FL for COVID-19 detection, including data heterogeneity, communication overhead, and privacy concerns. Finally, we highlight the potential future directions of research in this area, including the development of more robust and scalable FL algorithms and the combination of FL with other cutting-edge technologies like edge computing and blockchain.
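A common aggregation rule underlying many of the FL approaches surveyed is Federated Averaging (FedAvg): each site trains locally, only model weights are shared, and the server averages them weighted by local dataset size. A minimal sketch of the server-side step, with made-up weight vectors and dataset sizes:

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client model weight vectors (FedAvg)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    avg = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += w * size / total      # weight by local dataset size
    return avg

# Two hospitals with 100 and 300 local chest X-rays (illustrative numbers)
hospital_a = [0.2, 0.4]
hospital_b = [0.6, 0.0]
print([round(v, 6) for v in fed_avg([hospital_a, hospital_b], [100, 300])])
# [0.5, 0.1]
```

Because only the weight vectors leave each site, the raw patient images never do, which is the privacy property that motivates FL for COVID-19 detection.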

Bhagyashree Hosmani, Mohammad Jawaad Shariff, J. Geetha
Hybrid Information-Based Sign Language Recognition System

Sign language is generally used by the deaf and mute community for communication. An efficient sign language recognition system can prove to be a breakthrough for the deaf–mute population by helping them communicate better with people who do not understand sign language. The current solutions available for sign language recognition require a lot of computational power or depend on additional hardware, which limits their real-world applicability; we aim to bridge this gap. In this paper, we present our approach to developing machine learning models for sign language recognition with low computational requirements while maintaining high accuracy. We use a combination of hand gesture images and hand skeleton information as model input to classify gestures. For this task, we train CNN models and linear models to handle the gesture image and hand skeleton data, and the outputs of these models serve as input for the final classification layers. The hand skeleton information can also be used to detect and track the hand position in the frame, reducing the need for additional computation to detect the hand in the frame.

Gaurav Goyal, Himalaya Singh Sheoran, Shweta Meena
Addressing Crop Damage from Animals with Faster R-CNN and YOLO Models

Deforestation has led to a serious problem: wild animals entering villages, causing great loss of property and of the animals’ lives. To protect animals from humans and vice versa, we can design a system that helps farmers by reducing crop vandalization and diverting animals away without harm. The goal of this project is to detect wildlife using the TensorFlow object detection API on a live camera feed. Object detection is a commonly employed method in a variety of applications, including face recognition, driverless cars, and identifying sharp objects such as knives and arrows. A tragic incident occurred in Kerala when a pregnant elephant that stepped into a nearby town looking for food died after eating a pineapple stuffed with crackers. Using convolutional neural networks (CNNs), deep learning, and related technologies, we can build a system that detects animals on a farm, protects animals from humans, and reduces crop damage caused by animals. In this project, live video from a camera is passed to the TensorFlow object detection API to find wild animals in the frames. If an animal is discovered, a warning message is sent to the farmer, and if a person attempts to harm the animal, a message is sent to the nearest forest office.

Kavya Natikar, R. B. Dayananda
Cloud Computing Load Forecasting by Using Bidirectional Long Short-Term Memory Neural Network

Cloud services play an increasingly significant role in daily life. The widespread integration of the Internet of Things and online services has increased demand for stable and reliable cloud services, and to maximize utilization of the available computing power, a robust system for forecasting cloud load is needed. This work proposes an artificial intelligence (AI)-based approach to cloud load forecasting. By utilizing bidirectional long short-term memory (BiLSTM) neural networks and formulating the task as a time-series forecasting challenge, accurate forecasts can be made. However, the proper functioning of a BiLSTM relies heavily on proper hyper-parameter selection, so a modified version of the sine cosine algorithm (SCA) is introduced to select optimal values and optimize the performance of the proposed method. The introduced approach is subjected to a comparative analysis against several contemporary algorithms on a real-world dataset. The attained outcomes indicate that it has decent potential for forecasting cloud load in a real-world environment.
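A compact sketch of the standard (unmodified) sine cosine algorithm that serves as the basis for the paper's hyper-parameter tuner, here minimizing a toy quadratic as a stand-in for "pick the BiLSTM hyper-parameters with the lowest validation error". The paper's specific modification is not reproduced; population size, iteration count, and bounds are illustrative.

```python
import math
import random

def sca_minimize(objective, bounds, pop=20, iters=100, a=2.0, seed=1):
    """Basic sine cosine algorithm: agents oscillate around the best solution."""
    rng = random.Random(seed)
    dim = len(bounds)
    agents = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best = min(agents, key=objective)[:]
    for t in range(iters):
        r1 = a - t * (a / iters)            # linearly decreasing amplitude
        for agent in agents:
            for d in range(dim):
                r2 = rng.uniform(0, 2 * math.pi)
                r3, r4 = rng.uniform(0, 2), rng.random()
                step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
                agent[d] += step * abs(r3 * best[d] - agent[d])
                lo, hi = bounds[d]
                agent[d] = min(max(agent[d], lo), hi)   # keep in bounds
            if objective(agent) < objective(best):
                best = agent[:]
    return best

sphere = lambda x: sum(v * v for v in x)    # toy objective (minimum at origin)
best = sca_minimize(sphere, bounds=[(-5, 5), (-5, 5)])
print("final objective:", sphere(best))
```

In the paper's setting, `objective` would train a BiLSTM with the candidate hyper-parameters and return its forecasting error, making each evaluation far more expensive than this toy.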

Mohamed Salb, Ali Elsadai, Luka Jovanovic, Miodrag Zivkovic, Nebojsa Bacanin, Nebojsa Budimirovic
A Security Prototype for Improving Home Security Through LoRaWAN Technology

The constant expansion of wireless networks and the growing connectivity of smart devices have motivated the search for new communication solutions to social problems such as citizen insecurity, of which home robbery is one of the most relevant. One solution to this type of problem is home automation, using sensors that can detect intrusions into a house. This work proposes the development of an LPWAN prototype using LoRa technology for detecting intruders in houses in a residential area. The approach focuses on integrating the LoRaWAN protocol with MQTT and a REST API to generate notification alerts for security personnel and the house owner. A mobile application lets the house owner manage nodes, and a web application has been developed so that security personnel can manage user authentication and monitor the notification alerts generated by the nodes deployed in different houses. Push notifications have been enabled whenever an intrusion occurs or a node is disconnected.
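An illustrative sketch of the liveness check that could sit behind the "node disconnected" alert: each node periodically sends a heartbeat over LoRaWAN/MQTT, and a node is flagged when its last heartbeat is older than a timeout. The timeout value and node names are assumptions, not details from the paper.

```python
HEARTBEAT_TIMEOUT_S = 120    # assumed timeout before a node counts as offline

def disconnected_nodes(last_heartbeat, now):
    """Nodes whose last heartbeat is older than the timeout (seconds)."""
    return [node for node, ts in last_heartbeat.items()
            if now - ts > HEARTBEAT_TIMEOUT_S]

now = 1_000_000
last_seen = {"door-node": now - 30, "window-node": now - 500}
print(disconnected_nodes(last_seen, now))  # ['window-node']
```

A non-empty result would trigger the push notification to the owner and security personnel.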

Miguel A. Parra, Edwin F. Avila, Jhonattan J. Barriga, Sang Guun Yoo
Design of a Privacy Taxonomy in Requirement Engineering

Non-Functional Requirements (NFRs) play a crucial role in creating software and web applications. It is observed that privacy and security requirements are identified and implemented very late in the software development life cycle. One NFR, privacy, imposes new challenges in managing PII (Personally Identifiable Information), which needs to be preserved from the requirement engineering phase through the implementation phase. This paper focuses on designing a new taxonomy of privacy in Requirement Engineering. The novel taxonomy covers the major properties of privacy that are considered in developing any secure, web-based, privacy-preserving application.

Tejas Shah, Parul Patel
Python-Based Free and Open-Source Web Framework Implements Data Privacy in Cloud Computing

Data sharing has become a remarkably attractive service delivered by cloud computing architectures due to its convenience and reduced cost. Attribute-Based Encryption (ABE) is a promising technique for fine-grained, secure sharing of data among a wide variety of users. In practice, however, ABE has drawbacks such as high computation overhead and insufficient security for fine-grained data files in cloud storage. Processing tasks can be transferred to remote computing resources using cloud computing. Cyber insurance is a realistic way of transferring cyber security risks, but whether data security improves depends on the key characteristics of the environment. This paper focuses on one income insurer, client participants, and insured volunteers, examining two different cybersecurity aspects and their impact on standard forms of contracts. Since cyber security is interlinked, an entity’s degree of security is determined not only by its own efforts but also by the efforts of others operating within the same system. Effective resource utilization can thus reduce manufacturer costs, but cloud services must keep data secure and reasonably priced; hence, all content must be encrypted before going to the cloud. Consumers of a collusion-resistant information-sharing system are given secure private keys so that consumers may be added or removed.

V. Veeresh, L. Rama Parvathy
Using Deep Learning and Class Imbalance Techniques to Predict Software Defects

A software defect is a variance or deviation from the end user’s needs or the original business requirements: a coding error that results in inaccurate or unexpected outcomes from a software program. Software defect prediction is the process of identifying software modules that are likely to contain errors before they are tested. The testing phase is the most expensive and resource-intensive part of any software life cycle, so software defect prediction (SDP) can reduce testing expenses and ultimately support the development of high-quality software at a more affordable price. This research study utilizes class imbalance techniques such as oversampling and undersampling and applies three models, Random Forest, CNN, and LSTM, to determine which one produces the best results. The study uses the dataset from the public PROMISE repository.
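A sketch of the random-oversampling step the study applies before training: duplicate minority-class samples (defective modules) until the classes are balanced. The module metrics and labels below are illustrative, not PROMISE data.

```python
import random

def random_oversample(samples, labels, seed=42):
    """Duplicate minority-class samples until all classes reach the same size."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_samples.append(s)
            out_labels.append(y)
    return out_samples, out_labels

X = [[10, 2], [8, 1], [12, 3], [50, 9]]   # module metrics (illustrative)
y = [0, 0, 0, 1]                           # 1 = defective (minority class)
Xb, yb = random_oversample(X, y)
print(yb.count(0), yb.count(1))            # 3 3
```

Undersampling is the mirror image (discard majority-class samples down to the minority size); either way, the balanced set is what the Random Forest, CNN, or LSTM is then trained on.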

Ruchika Malhotra, Shubhang Jyotirmay, Utkarsh Jain
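Defect datasets are typically skewed toward clean modules, which is why the study applies class imbalance techniques. A minimal stdlib-only sketch of random oversampling (one of the techniques named; the dataset and helper names below are illustrative, not from the paper):

```python
import random

def random_oversample(features, labels, seed=0):
    """Balance a dataset by duplicating minority-class samples at
    random until every class matches the majority-class count."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(features, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        # Keep all originals, then draw the shortfall with replacement.
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

# Toy defect data: 6 clean modules (0) vs. 2 defective ones (1).
X = [[10], [12], [9], [11], [8], [13], [40], [42]]
y = [0, 0, 0, 0, 0, 0, 1, 1]
Xb, yb = random_oversample(X, y)
print(yb.count(0), yb.count(1))  # → 6 6
```

Undersampling is the mirror image: instead of duplicating minority samples, majority-class samples are discarded down to the minority count.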
Benefits and Challenges of Metaverse in Education

The COVID-19 pandemic has had a dramatic impact on all areas of social life. Education in particular is considered one of the areas most affected, with many schools forced to close to minimize the spread of the disease. In that situation, online teaching on video conferencing platforms such as Microsoft Teams, Google Meet, and Zoom was the suitable solution, and changing learning methods became of great interest to educational researchers. Through technology, learning has become more exciting and attractive, although both teachers and learners must learn how to use the new technology on their own. Recently, the term Metaverse has emerged as a prominent technology trend, combining the real and the virtual in a 3D environment with many features suitable for education. Through this research, we use research materials, solutions, and commercial products to explore the capabilities, effectiveness, classification, benefits, and limitations of the Metaverse in education. Based on this work, we conclude that the Metaverse is an effective learning tool because of its ability to visualize materials that are difficult or dangerous to reproduce in practice and to overcome the communication and user-interaction inadequacies of distance learning.

Huy-Trung Nguyen, Quoc-Dung Ngo
Enhancing Pneumonia Detection from Chest X-ray Images Using Convolutional Neural Network and Transfer Learning Techniques

Pneumonia is a serious lung disease caused by a variety of pathogens. Chest X-rays can be difficult to use for diagnosing and treating pneumonia because it is hard to distinguish from other respiratory disorders, and a specialist must review the chest X-ray images, a process that is time-consuming and imprecise. This study aims to simplify the diagnosis of pneumonia from chest X-ray images through CNN-based computer-aided classification methods. A key problem with CNN models is that they require a lot of data to be accurate; since our initial dataset was limited, the model had a reduced accuracy rate. To overcome this issue, we used transfer learning with the VGG19 model, which led to a significant improvement in accuracy, reaching 90% during testing.

Vikash Kumar, Summer Prit Singh, Shweta Meena
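Transfer learning with a frozen backbone, as the abstract describes with VGG19, reduces to training only a small classification head on features the pretrained network extracts. A stdlib-only conceptual sketch (the tiny `frozen_extractor` stands in for the VGG19 backbone; all names and the toy data are illustrative, not from the paper):

```python
import math
import random

def frozen_extractor(x):
    """Stand-in for a pretrained backbone: maps a raw input vector
    to fixed features. Its parameters are never updated."""
    return [math.tanh(x[0] - x[1]), math.tanh(x[0] + x[1])]

def train_head(data, labels, lr=0.5, epochs=200):
    """Train only a logistic-regression 'head' on frozen features —
    the essence of transfer learning with a frozen base network."""
    rng = random.Random(0)
    w = [rng.uniform(-0.1, 0.1) for _ in range(2)]
    b = 0.0
    feats = [frozen_extractor(x) for x in data]
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss w.r.t. z
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = frozen_extractor(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0

# Toy inputs: label 1 when the first channel dominates the second.
X = [[2, 0], [3, 1], [0, 2], [1, 3]]
y = [1, 1, 0, 0]
w, b = train_head(X, y)
print([predict(w, b, x) for x in X])  # → [1, 1, 0, 0]
```

With a real framework, the same idea is expressed by loading VGG19 with pretrained weights, marking its layers non-trainable, and fitting only the new dense layers on the X-ray dataset.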
Intelligent and Real-Time Intravenous Drip Level Indication

The ability of medical technology to diagnose and cure illnesses more successfully has revolutionised the healthcare sector. However, there is still a gap in monitoring intravenous (IV) drip bottles in real time. Real-time monitoring of IV bottles is crucial to prevent backflow of blood from the patient to the empty drip bottle, which can lead to severe consequences, including the patient's death. To address this issue, an Internet of Things (IoT)-based IV drip monitoring system has been proposed. The system incorporates a device that detects the fluid solution level and informs medical personnel via a buzzer and a website that shows the IV drip fluid levels of each patient in real time. This feature enables nurses to monitor each patient’s fluid intake and take necessary actions, thereby avoiding potential risks associated with overconsumption.

V. S. Krishnapriya, Namita Suresh, S. R. Srilakshmi, S. Abhishek, T. Anjali
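The alerting logic such a monitoring system needs is essentially a threshold check on the sensed fluid level. A minimal sketch (the function name, alert format, and 15% threshold are illustrative assumptions, not taken from the paper):

```python
def check_drip(level_percent, threshold=15.0):
    """Return an alert record when the IV fluid level falls to or
    below the threshold, mimicking the buzzer/website notification."""
    if level_percent <= threshold:
        return {"alert": True,
                "message": f"Refill required: {level_percent:.0f}% left"}
    return {"alert": False, "message": f"Level OK: {level_percent:.0f}%"}

# Simulated sensor readings as a patient's bottle empties.
readings = [80.0, 45.0, 20.0, 12.0]
states = [check_drip(r) for r in readings]
print([s["alert"] for s in states])  # → [False, False, False, True]
```

In the deployed system the same decision would run on the IoT device, driving the buzzer locally and pushing each patient's level to the website for the nursing staff.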