
2024 | Book

Proceedings of Third International Conference on Computing and Communication Networks

ICCCN 2023, Volume 1

Editors: Giancarlo Fortino, Akshi Kumar, Abhishek Swaroop, Pancham Shukla

Publisher: Springer Nature Singapore

Book Series: Lecture Notes in Networks and Systems


About this book

This book includes selected peer-reviewed papers presented at the Third International Conference on Computing and Communication Networks (ICCCN 2023), held at Manchester Metropolitan University, UK, during 17–18 November 2023. The book covers topics of network and computing technologies, artificial intelligence and machine learning, security and privacy, communication systems, cyber-physical systems, data analytics, cyber security for Industry 4.0, and smart and sustainable environmental systems.

Table of Contents

Frontmatter
Multiscale Feature Pyramid Network-Enabled Deep Learning and IoT-Based Pest Detection System Using Sound Analytics in Large Agricultural Field

Modern farming techniques can now be implemented globally and at a reasonable cost. In the annals of agriculture, this is a significant turning point. The widespread dissemination of information and the advent of cutting-edge technologies have facilitated this revolution. However, pests cause substantial harm to farmland, which has economic, ecological, and societal consequences. As a result, there is a need to employ sophisticated computational methods to eradicate pests before they cause irreparable harm. Recently, ML-based studies have focused on agricultural issues. The study's primary objective is to develop a workable method for identifying pests in large agricultural fields using dependable pest analytics made possible by the Internet of Things. The proposed method incorporates several sound pre-processing techniques drawn from the field of sound analytics, including BPF, the triangular window, the Kaiser window, FFT, DFT, and PLP. A Multiscale Feature Pyramid Network (MFP-Net) model was trained, tested, and validated using data from an analysis of 650 pest sounds. Compared to the existing MS-ALN, YOLOv5, Faster RCNN, and ResNet-50 models, the recommended MFP-Net model correctly identified pests with 99.76% accuracy, 99.03% recall, and a 99.06% F1-score. The capability of this research to detect pests in large agricultural fields early on is the primary reason for its significance. As a direct result, the number of crops produced will increase, improving the economic growth of the farmers, the nation, and the world.

Md. Akkas Ali, Anupam Kumar Sharma, Rajesh Kumar Dhanaraj
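
The abstract lists Kaiser windowing and FFT among its sound pre-processing steps. The following is a minimal, generic sketch of that stage in Python/NumPy, assuming synthetic audio and illustrative frame/hop sizes rather than the paper's actual configuration.

```python
import numpy as np

# Hypothetical 1-second pest sound clip sampled at 16 kHz (random data as a stand-in).
sr = 16000
signal = np.random.randn(sr)

# Kaiser-windowed framing followed by FFT magnitude spectra, as two of the
# pre-processing steps named in the abstract (frame/hop sizes are assumptions).
frame_len, hop = 1024, 512
window = np.kaiser(frame_len, beta=8.6)

frames = [signal[i:i + frame_len] * window
          for i in range(0, len(signal) - frame_len, hop)]
spectra = np.abs(np.fft.rfft(frames, axis=1))   # shape: (num_frames, frame_len // 2 + 1)

print(spectra.shape)  # spectral features that could feed a detection network
```
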
Detection of Wormhole Attacks Using the DCNNBiLSTM Model to Secure the MANET

The obvious response to an anytime, anywhere network in the current era of heterogeneous networks is the MANET. It is a collection of wirelessly communicating mobile devices that operate on their own. In order to stop the damage that rogue network nodes could do, security in MANETs is a crucial task. A category of denial-of-service (DoS) assault, the wormhole attack bombards the network with numerous phoney packets and messages in an effort to exhaust its resources. Wormhole attacks are a particular type of network layer attack that imitates routing algorithms. Because of deep learning's exceptional performance in many detection and identification tasks, this research presents an approach for intelligent and effective attack detection in MANETs. Our goal is to mimic a wormhole attack in a network environment with many wormhole tunnels. The CNN layer is used by the DCNNBiLSTM architecture to extract features from input data, and BiLSTMs are used to predict sequences. Five performance indicators were used to compare CNN, LSTM, and DCNNBiLSTM under wormhole attack.

B. Rajalakshmi, R. J. Anandhi, K. Moorthi, Balasubramanian Prabhu Kavin, Rajesh Kumar Dhanaraj
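
A minimal Keras sketch of the CNN-plus-bidirectional-LSTM pattern the abstract describes (a Conv1D front end feeding a BiLSTM); the input window length, feature count, and layer sizes are assumptions, not the paper's reported configuration.

```python
import tensorflow as tf

# Sketch: Conv1D extracts local features from traffic windows, a BiLSTM models the
# sequence, and a sigmoid head flags wormhole attack vs. normal traffic.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, 10)),      # 50 time steps x 10 assumed features
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```
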
Enhancing Security in Wireless Sensor Networks: A Broadcast/Multicast Authentication Framework with Identity-Based Signature Schemes

WSN applications are well suited to real-time access to data; however, such real-time access can become a source of security breaches. The proposed authentication scheme incorporates cryptographic credentials, secure communication channels, and biometric authentication to ensure robust user and node authentication. Formal security analysis is conducted using the ROR model, and simulations are performed using AVISPA. Our analysis considers the execution time of a one-way hash function (Th), based on a meticulous examination of experimental results. We approximate Th to be around 0.0005 s, establishing its validity for comparative analysis. The results demonstrate the scheme's efficacy, ensuring confidentiality, integrity, and availability of critical data. Our proposed approach strikes a balance between security demands and WSN limitations, providing a secure framework for seamless access to real-time data. The empirical results reinforce the scheme's effectiveness, practicality, and suitability for WSN applications.

Shilpi Sharma, Bijendra Kumar
TSFTO: A Two-Stage Fuzzy-Based Tasks Orchestration Algorithm for Edge and Fog Computing Environments

One of the main issues addressed nowadays in IoT environments is reducing data latency and ensuring efficient workload balancing. The classical cloud-computing paradigm can no longer cope with the huge amount of data generated by thousands of nearby connected objects. Consequently, the emerging edge and fog paradigms can help reduce latency, preserve network bandwidth, and improve service quality. Following this trend, we investigate the use of fuzzy logic to improve task orchestration. Our proposed TSFTO algorithm is deployed on a multi-layer architecture handling heterogeneous device types and large data volumes. It also allows managing workload balancing in a combined cloud, fog, edge, and IoT infrastructure. This process takes into account the features of the offloaded tasks and the states of the computational resources. In addition, we have used a list of new criteria as fuzzy inputs that have a major impact on improving the orchestration method. To validate the proposed TSFTO protocol, we conducted several simulation tests using the PureEdgeSim simulator. On average, the results show that TSFTO reduces task execution response time by 46.66%, improves the task failure rate by 3%, and achieves a 10% energy saving.

Leila Kheroua, Zouina Doukha, Samira Moussaoui
Tracking Climatic Variations Through Smart IoT-Driven Approach: An Exploratory Analysis

The climate crisis refers to the impact of climate change and global warming. We connect sensors and gadgets to Internet servers so that they can produce accurate reports regarding the state of the environment. It is crucial to choose the right IoT sensors for each domain of climate monitoring. It would be beneficial for the environment if the materials used in constructing the sensors were non-toxic and biodegradable, so that after use the sensors have no adverse effects and do not contribute to local pollution. Because many IoT devices are in operation, a notable amount of external power is required, which increases the carbon footprint. To lower the carbon footprint, sensors capable of harvesting energy from the surrounding environment, such as air pressure, solar light, temperature, and humidity, are put into use. In this research study, we examine the major changes in climate experienced within the past decades and how IoT has been working for the betterment of the environment; with the help of two case studies, we learn about the functioning of IoT components in monitoring the environment as a whole, and the role of IoT in groundwater management.

Ashit Kumar Dutta, Abhipsa Sahoo, Ankita Samal, Swastika Bishnoi, Sanchit Verma, Sushruta Mishra, Joel J. P. C. Rodrigues
Design and Manufacturing of an Efficient Low-Cost Fire Surveillance System Based on GSM Communications and IoT Technology

In addition to recent technological advancements, a noticeable trend has emerged toward the development of smart cities and homes. By integrating intelligent home control components with existing household appliances, this trend aims to significantly reduce the need for human effort. This integration has led to decreased electricity consumption, enhanced home security against accidents and theft, and an overall improvement in residents’ comfort levels. This paper proposes the utilization of Internet of Things (IoT) technology for the design of an intelligent home fire alarm system. The suggested system comprises an Arduino microcontroller, a NodeMCU microcontroller, a GSM shield, a flame sensor, a buzzer, and a servo motor. Given the limitations and relatively low accuracy of flame sensors, the LM35 temperature sensor is employed in conjunction with the flame sensor. Upon detecting a flame and a subsequent temperature rise beyond 50 °C, the Arduino Uno will promptly dispatch an SMS warning to the homeowner. Simultaneously, a logical “1” signal will be transmitted to the NodeMCU microcontroller. This microcontroller will then utilize IoT technology to exhibit the warning message on the pre-programmed graphical user interface (GUI). To conclude the alert sequence, the Arduino Uno microcontroller will activate the servo motor. This activation serves the purpose of initiating the opening of the exit door and triggering the alarm. Rigorous practical testing has been conducted on the implemented system, yielding consistently positive outcomes and successful results.

Hamzah M. Marhoon, Noorulden Basil, Bashar Bahaa Qas Elias
A Comparative Analysis of Optimized Routing Protocols for High-Performance Mobile Ad Hoc Networks

Mobile ad hoc networks (MANETs) let wireless devices interact without a fixed infrastructure, yet routing algorithms remain crucial. MANETs are used in disaster relief, military operations, and smart cities. This study compares the Bellman-Ford, Dijkstra, and genetic routing algorithms for MANETs. These methods use hop count and connection cost to find the fastest paths between nodes. The report evaluates the routing algorithms using six criteria (bandwidth, mobility, transmission power, battery capacity, packet loss, and health). We run numerous simulations with different parameter settings and statistically analyze the results. The paper offers a network node health measure based on battery capacity and transmission power. Many MANET routing methods are tailored for specific performance metrics and simulated conditions. Simulations show that the recommended measure is particularly beneficial for analyzing MANET network node status. The study also finds that certain routing methods perform better under low packet loss and high battery capacity. These findings underscore the need to consider performance indicators and simulated scenarios when choosing a MANET routing algorithm. This research provides insights into enhancing MANET routing algorithms, with 90% accuracy and performance dependent on performance measurements and network factors. The findings show the merits and weaknesses of each algorithm, helping in the choice of MANET routing solutions.

Ayushman Pranav, Akshat Jain, Mohd Mohsin Ali, Manish Raj, Umesh Gupta
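
Since the paper compares Dijkstra with other shortest-path routing methods, a compact reference implementation of Dijkstra's algorithm over a toy topology may help; the composite link-cost weighting is an assumption.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source over a weighted adjacency dict."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Toy topology: edge weights could encode hop count or a composite link cost built
# from bandwidth, battery capacity, etc. (the weighting scheme is an assumption).
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```
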
Integrating Sandpiper Optimization Algorithm and Secure Aware Routing Protocol for Efficient Cluster Head Selection in Wireless Sensor Networks

Wireless sensor networks (WSNs) play a crucial role in various applications, ranging from environmental monitoring to industrial automation. However, secure and efficient data routing in WSNs remains a significant challenge. This paper proposes an approach for selecting the optimal CH in WSNs using the Sandpiper Optimization Algorithm (SOA) and the Chaos-Secure Aware Routing Protocol (SARP). The methodology also enhances network lifetime, energy efficiency, packet delivery ratio, and coverage while maintaining robustness, scalability, and resource efficiency. A simulation system using Matlab R2020a and the CICIDS dataset has been used to test and evaluate the performance in terms of attack detection and mitigation mechanisms, including blackhole, gray hole, and wormhole attacks. The results demonstrate that the proposed methodology outperforms existing algorithms in terms of network lifetime, energy consumption (3500 J), packet delivery ratio (95%), and throughput (1.2 Mbps), while effectively mitigating routing attacks. This research contributes to the advancement of secure and efficient WSNs.

Tuka Kareem Jebur
The Effects of Weather Conditions on Data Transmission in Free Space for Wireless Sensor Networks

It is necessary to collect data measured in open lands, whether for monitoring purposes or for border protection. Transmitting data from active nodes to the cluster head (CH), and onward to the base station (BS), safely and acceptably is the main objective of wireless sensor networks (WSNs). The key benefits of wireless systems are low-cost installation and loss-free data transfer. Weather conditions in open areas affect wireless data transmission between sensor nodes in free space. This study attempted to ascertain the influence of atmospheric and meteorological conditions on data loss when using wireless data transmission in a wireless sensor network. For this reason, the volume of data transfer losses that occur in free space under three types of weather conditions is the main discussion point. The meteorological criteria considered in this research are temperature, clear weather, dust, and rain, the weather elements that affect the Iraqi climate. The correlation analysis shows that the amount of precipitation and the size of raindrops have a major effect on data transmission losses. From a practical point of view, there is no confirmed statistical evidence of the amount of loss in the volume of transmitted data as a result of the amount of rainfall when different transmission frequencies are used. In this research, we designed an objective routing protocol called Fuzzy Clustering with Ant Colony System (FCACS), which was simulated in MATLAB 2018b. The practical results of the proposed protocol for data transmission in free space under different weather conditions were examined, and the simulation results were compared with those of two well-known routing protocols, DEEC and Z-SEP. In terms of the quantity of data received by the CH or BS under the various weather circumstances used in the simulation, the proposed protocol demonstrated a high level of performance. The practical results on energy dissipation for the three routing protocols make it obvious that sending data via a WSN consumes high energy in rainy weather.

Aqeel A. Kadhim, Jolan R. Naif, Muna M. Salman, Haider K. Hoomod
Using Wireless Sensor Network Performance and Optimizing in Underground Mines with Virtual MIMO Antenna

Wireless sensor networks rely heavily on battery power for continuous operation, making energy efficiency a critical factor in their longevity. To address this challenge, a novel architectural system known as Virtual MIMO has been developed. Virtual MIMO operates in conjunction with traditional MIMO technology and holds the potential to establish an energy-efficient network. Leveraging the synergistic benefits of Virtual MIMO, which include enhanced system generalization and energy conservation, it becomes possible to employ MIMO technology within a single-antenna system. This paper primarily centers on the utilization of Virtual MIMO to prolong the lifespan of wireless sensor networks while minimizing energy consumption. The research reveals varying return loss values in decibels (dB) at different frequencies for the various S-parameters, such as S21, S22, S11, and S12. In summary, Virtual MIMO techniques play a pivotal role in the creation of efficient wireless sensor networks and offer a means to optimize their overall performance.

Shailendra Kumar Rawat, Sanjay Kumar Singh, Ajay Kumar Bharti
A Novel Logistic Regression-Based Fire Detection Model Using IoT in Underground Coal Mines

The environment of underground coal mines (UCMs) is vulnerable to many environmental problems and consequent dangers. Among those problems, mine fire is a serious threat that causes the loss of lives of mine workers and of other valuable infrastructure and resources in the mines. Therefore, continuous monitoring is very important for early detection of mine fire in UCMs. The Internet of Things (IoT) is nowadays widely used for continuous monitoring of the environment of UCMs. For monitoring, sensor nodes are deployed in the region of the UCM to sense local information about the environment and transfer that information to the sink for further processing. As the gathered information is indistinct, it is necessary to analyze it before taking precautions. Therefore, we propose a method to predict the occurrence of mine fire in UCMs using a logistic regression model. We then applied active learning (AL) and semi-supervised learning (SSL) methods to the dataset sequentially to train the model, dividing the dataset into training and testing sets. The training dataset is used to train the model, while the testing dataset is used to evaluate the trained model. The overall process is implemented at the sink instead of at the control station. In case of any hazardous situation, the sink takes immediate and necessary actions based on the sensed data rather than relying on the control station. The trained model is simulated using the WEKA tool on continuously monitored data for different hazard conditions. This model is more reliable and works more effectively than an offline monitoring system in any kind of hazardous situation in UCMs. The simulated results show the accuracy of the proposed method as 98%, whereas the accuracies of linear regression and naive Bayes are 93% and 96%, respectively, which shows that our proposed method outperforms the existing techniques.

Chaitanya Thuppari, Srikanth Jannu, Damodar Reddy Edla
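
A minimal sketch of the core step, training a logistic regression fire-risk classifier, using synthetic sensor readings as a stand-in for the paper's mine-monitoring data (the feature set and thresholds are assumptions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for sensed mine-atmosphere readings, e.g., temperature,
# CO level, smoke density (feature choice and thresholds are illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2] > 1.0).astype(int)  # 1 = fire risk

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```
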
An MQTT IoT Intrusion Detection System Using Deep-Learning

Nowadays, the number of devices connected to Internet of Things (IoT) networks is increasing enormously day by day. IoT devices widely use the Message Queuing Telemetry Transport (MQTT) protocol for their communication. Because of the heterogeneous nature of IoT devices and the lack of security in their manufacturing, security systems for MQTT traffic are essential. In this paper, we propose a zero-bias Convolutional Neural Network (CNN) for intrusion detection. Removing the bias term reduces computational complexity, making the model suitable for deployment as an Intrusion Detection System (IDS) on resource-constrained IoT devices. The binary classification performance of the model is compared with another Deep Neural Network (DNN) using two abstract-level feature sets, bidirectional flow and unidirectional flow, from the publicly available MQTT-IoT-IDS2020 dataset. The proposed model achieves results superior to the other model, with an F1-score of 0.99 for both bidirectional and unidirectional flows.

Greeshma Andrew, M. P. Deepika, Soumia Chandran
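
The "zero-bias" idea can be illustrated directly in Keras by disabling bias terms; the sketch below assumes 40 flow-level input features and illustrative layer sizes, not the paper's exact architecture.

```python
import tensorflow as tf

# Zero-bias CNN sketch: bias terms are simply disabled (use_bias=False), trimming
# parameters for resource-constrained IoT deployments.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40, 1)),                # 40 assumed flow features
    tf.keras.layers.Conv1D(16, 3, activation="relu", use_bias=False),
    tf.keras.layers.Conv1D(32, 3, activation="relu", use_bias=False),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid", use_bias=False),  # attack vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
print(model.count_params())
```
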
Multi-objective Optimal Feature Selection for Cyber Security Integrated with Deep Learning

Cyber security in the context of big data has emerged as a critical issue and an upcoming challenge in the research community. The security problems in big data are handled using machine learning algorithms. In recent years, artificial intelligence has been an emerging technology through which machines imitate human behavior. The Intrusion Detection System (IDS) is the major component for the detection of malicious activities or cyber attacks. Artificial intelligence plays a major role in intrusion detection, offering a better way of building and adapting IDSs. Nowadays, neural network algorithms are promising techniques for building such artificial intelligence for real-time problems. The proposed cyber security model uses a cyber defense dataset, and a secured network is created based on multi-objective criteria. Here, optimal feature selection is performed on the basis of a multi-objective function, which focuses on the correlation and accuracy of attack detection, using a meta-heuristic algorithm called Beetle Swarm Optimization (BSO). Further, a deep learning model, the Recurrent Neural Network (RNN), is used for detecting the attack. The multi-objective-based optimal feature selection helps enhance the performance of cyber security with reduced redundancy and complexity.

Anupam Das, Subhajit Chakrabarty
Application of Gradient Boosting Classifier-Based Computational Intelligence to Detect Drug Addiction Threat in Society

Currently, drug and alcohol addiction has become a major menace to society's youth. As responsible citizens of this country, we must act now to keep these young brains from succumbing to this lethal addiction. In this research, a method based on intelligent analytics is designed for forecasting the threat of drug addiction using machine learning algorithms. Initially, we identify several reasons for drug addiction by speaking with health professionals and individuals addicted to drugs, and by reading related studies and articles. Then we gather information from both addicts and abstainers. After preprocessing the data, we apply Random Forest, Naive Bayes, Adaptive Boosting, and Gradient Boosting machine classifiers and assess the performance of each using well-known criteria. Gradient Boosting showed promising results, with the highest accuracy of 95.4% and a minimum error rate of 0.0854.

Ashutosh Kumar, Abhigyan Sinha, Tamoghno Bakshi, Sibashish Choudhury, Sushruta Mishra, Laith Abualigah
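
A minimal scikit-learn sketch of the Gradient Boosting step on synthetic stand-in data (the real study uses survey responses from addicts and abstainers; the features here are assumed).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the survey data (real features would be responses such as
# peer influence, family history, etc. -- the feature set here is assumed).
X, y = make_classification(n_samples=600, n_features=12, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```
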
Insurance 4.0: Smart Insurance Technologies and Challenges Ahead

The insurance value chain is being rapidly challenged as insurers integrate with technology. Quick customer service, convenience, and customization are three of InsurTech's key selling points. The following are some of the most active and well-known InsurTech-related topics covered in this article: Mobile Applications, Smart Insurance Contracts, Blockchain/Distributed Ledger Technology, AI, IoT, and Robot Advisors are all examples of digital insurance solutions. Based on a careful evaluation of the literature, this study provides cogent ideas under the umbrella of “InsurTech.” The background of InsurTech and its numerous subfields are reviewed in this paper. Using innovative digital platforms in Insurance 4.0 can result in several advantages, such as helping with everyday commercial and administrative responsibilities, strategic decision-making, and process improvement and control. Blockchain, smart contracts, robot advisors, and mobile applications can all help to smooth the transition to Insurance 4.0. This paper demonstrates that, in order to execute Insurance 4.0, insurance procedures must be significantly digitized. Traditional work cultures, mindsets, methods, and monetary costs may pose problems in this regard.

Tamanna Kewal, Charu Saxena
Review on Vision Transformer for Satellite Image Classification

Satellite image classification has been a topic of interest in the research community for the last two decades. Neural network researchers have proposed increasingly advanced models for this problem year by year. A lot of attention has been given to transformers from 2017 onwards, and the vision transformer is a transformer-based model for computer vision problems. This paper reviews the application of vision transformers (and their variants) to satellite image classification. The article provides a detailed account of the working of the vision transformer and the history of its chronological development. The review finds that the vision transformer is suitable for sufficiently large datasets. For small datasets, convolutional network-based models perform well compared to vision transformers, but pre-trained vision transformers combined with transfer learning have produced better results, beating convolutional models. The review suggests that, given the limited research on applying vision transformers to satellite data because the model is fairly new, there is great potential for this model to produce good results. The article also explores the ongoing challenges and research opportunities in vision transformer development for satellite image classification.

Himanshu Srivastava, Akansha Singh, Anuj Kumar Bharti
Primitive Roots and Euler Totient Function-Based Progressive Visual Cryptography

Visual cryptography is a branch of cryptography that focuses on the encryption and decryption of images or visual data. The main objective of this paper is to introduce a visual cryptography scheme based on a primitive root modulo 251 for the encryption of a grayscale image, making it visually incomprehensible to unauthorized users. The generated mapping algorithm enhances the security and efficiency of the original image, and shares are subsequently generated using bit-plane slicing. The generated shares possess high quality, ensuring accurate reconstruction of the original image. Moreover, for inverse mapping we use discrete logarithms and recover the image with no major contrast loss.

Vaibhav Chaudhary, Shivendra Shivani, Divya Srivastava
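
An illustrative sketch, not the paper's exact scheme, of the exponentiation/discrete-log pair that a primitive root modulo the prime 251 provides for mapping and inverse mapping pixel residues (handling of values outside 1-250 is omitted).

```python
# Use a primitive root g of the prime 251 to build an invertible residue mapping:
# exponentiation serves as the forward map, a discrete-log table as the inverse.

def is_primitive_root(g, p):
    return len({pow(g, k, p) for k in range(1, p)}) == p - 1

p = 251
g = next(c for c in range(2, p) if is_primitive_root(c, p))

forward = {x: pow(g, x, p) for x in range(1, p)}   # exponentiation map (bijection on 1..250)
inverse = {v: x for x, v in forward.items()}       # discrete-log lookup table

# Round-trip a few sample pixel residues (values 1..250).
for pixel in (1, 37, 128, 250):
    assert inverse[forward[pixel]] == pixel
print("primitive root used:", g)
```
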
Approach for Fire Detection Using Image Processing

Huge devastation to humans and the environment is caused by forest fires, which are one of the forces of nature that are hard to control. Forest ecosystems are destroyed by this uncontrollable calamity. In this paper, a review is made of previous work in the field of technology that could help detect and control forest fires. Technologies such as GIS and image processing are the main sources of this review, and these technologies are studied further in that light. This paper focuses on the usage of the above-mentioned technologies in combating forest fire incidents.

Aniruddha Bardekar, Mohammad Atique
Review of Various Approaches for Authorship Identification in Digital Forensics

Authorship identification involves extracting and analyzing an author's writing style. Digital forensics and cyber investigations employ writing style to identify the author and their traits. Authorship identification has been performed on both long and short texts in English, Arabic, Chinese, and Greek. This work, however, emphasizes a unique and difficult scenario: identifying whether two writings published in distinct discourse types are produced by the same person. We provide sets of texts that use four different discourse styles, including essays, emails, text messages, and business notes, based on a new corpus of English texts. The cross-discourse-type authorship verification task is highly challenging due to the disparities in communication intent, target audience, and formality level. This paper evaluates various aspects of authorship identification and provides a thorough analysis of the assessment findings. This study also explores language proficiency and the problems in authorship identification tasks. A number of significant studies in the authorship identification domain were assessed for data, characteristics, techniques, and outcomes. After reviewing the research, we conclude that the outcomes of the authorship identification task depend primarily on the specified stylometric characteristics and the dataset used. The most beneficial features also vary by language type.

Riya Sanjesh, J. Alamelu Mangai
Machine Learning Model for Traffic Prediction and Pattern Extraction in High-Speed Optical Networks

The tremendous growth of network traffic creates the need to develop new network applications. Machine learning provides a pertinent platform to enhance currently used network optimization methods. Information about future traffic volumes is vital for network operators. This paper addresses the problem of traffic prediction and pattern extraction in high-speed optical networks and fills a literature gap by using the Python multiprocessing module to enhance the efficiency of the work. An ML approach based on regression is designed. In the investigation, a dataset has been simulated and generated using the SNDLIB library and the Python GNPy estimation tool, which provides dynamic traffic matrices for various real network topologies and mimics real-world data. The performance of the proposed system is evaluated using different evaluation indexes, such as Mean Square Error (MSE), Mean Absolute Error (MAE), Max Error (ME), and processing time, based on the tuning of hyper-parameters. The findings indicate better results compared to the existing technique and confirm that the proposed approach is more efficient, making it a promising solution to the current network problem.

Saloni Rai, Amit Kumar Garg
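
A minimal regression sketch for next-interval traffic prediction with the evaluation indexes named in the abstract (MSE, MAE, ME); the synthetic traffic series, lag features, and choice of random forest are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, max_error

# Synthetic stand-in for a traffic-matrix time series: predict the next-interval
# volume from a few lagged values (lag count and model choice are assumptions).
rng = np.random.default_rng(1)
t = np.arange(2000)
traffic = 50 + 10 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 1, t.size)

lags = 4
X = np.column_stack([traffic[i:-(lags - i)] for i in range(lags)])
y = traffic[lags:]

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)
model = RandomForestRegressor(n_estimators=100, random_state=1).fit(X_train, y_train)
pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred),
      "MAE:", mean_absolute_error(y_test, pred),
      "ME:", max_error(y_test, pred))
```
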
Improved Markov Decision Process in Wireless Sensor Network for Optimal Energy Consumption

This research looks at the problem of obtaining a precise estimate of a sensor node attribute from a Wireless Sensor Network (WSN) within a certain time frame while using as little sensor energy as possible. An estimate of an attribute may be obtained by taking readings from wake-up sensors connected to a sink node that has been placed at random. The sink must take relevant measurements within a certain time constraint. As an added benefit, the sink reduces the amount of power the sensors need to transmit their readings. The predicted energy consumption of WSN sensor nodes is presented in a closed-form formulation in this work. The research establishes a maximum allowable sensor transmission distance. The Markov Decision Process (MDP) may be used to determine the timing of sensor transmissions over a given time frame. A strategy for transmitting-sensor scheduling is also developed in this work. The simulation is conducted in the MATLAB environment, where the MDP schedule offers reduced energy consumption, reduced delay, an increased packet delivery ratio, and a reduced false alarm rate, while estimating the attribute of the data sent by a sensor node.

Gauri Kalnoor, Prakash B. Metre
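
A toy finite-horizon MDP solved by backward induction gives the flavor of transmission scheduling under an energy cost and a deadline; all states, rewards, and horizons below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Toy MDP: in each slot a sensor either transmits (costs energy, yields a reading)
# or sleeps. The state tracks how many readings the sink still needs.
horizon, needed = 10, 4
TX_COST, REWARD, MISS_PENALTY = 1.0, 2.0, 10.0

V = np.zeros((horizon + 1, needed + 1))
V[horizon, 1:] = -MISS_PENALTY                  # penalty if readings remain at the deadline
policy = np.zeros((horizon, needed + 1), dtype=int)

for t in range(horizon - 1, -1, -1):            # backward induction (value iteration)
    for s in range(needed + 1):
        sleep = V[t + 1, s]
        transmit = (REWARD if s > 0 else 0.0) - TX_COST + V[t + 1, max(s - 1, 0)]
        policy[t, s] = int(transmit > sleep)    # 1 = transmit in this slot
        V[t, s] = max(sleep, transmit)

print("transmit decisions per (slot, readings remaining):\n", policy)
```
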
Design and Implementation of a Hybrid Deep Learning Framework for Handwritten Text Recognition

Text recognition technology has seen significant advancements in recent years, particularly with the use of Optical Character Recognition (OCR) to evaluate computer-generated text. However, there is much more work to be done in the field of Handwritten Text Recognition (HTR). The challenges posed by handwritten text, such as significant variations in strokes across writers, the vast variety of handwriting styles, human error, and damage to the paper, present substantial difficulties in accurately identifying and recognizing handwritten alphanumeric data. To address these challenges, we propose a deep learning method that combines long short-term memory (MD-LSTM) and convolutional neural network (CNN) architectures. This model can identify numbers and characters of the English language from input images. Based on the MNIST dataset, bidirectional recurrent neural networks were used to construct the output sequence, and the model was developed using the TensorFlow framework. The accuracies for alphabets, numbers, and alphanumeric texts are 95.2%, 94.9%, and 94.7%, respectively. The mean character match index is computed to be 93.2%. The proposed model can substantially boost HTR's precision and efficiency, making it more accessible.

Harshit Anand, Milind Singh, Vivian Rawade, Shubham Sahoo, Sushruta Mishra, Laith Abualigah
A LSTM Based Intelligent Framework for Financial Stock Prediction

Stock analysis is a method used by traders and financial experts to evaluate the securities market and make informed decisions about buying and selling shares. It involves conducting extensive research to assess the performance and quality of a stock or an industry before investing. Stock analysis can take different forms, but traders usually use two main categories: technical and fundamental analysis. Technical analysis involves examining documented value outlines and researching previous market structures to predict future advances. On the other hand, fundamental analysis looks at data from the organization and its macroeconomic situation, such as financial history and revenue streams, to assess prospective gains from exchanges. The ultimate goal of stock analysis is to choose the best times to place trades and make the appropriate buying and selling decisions. Traders may specialize in one type of analysis or use a combination of both. Regardless of the approach, conducting extensive research is crucial to ensure wise investments that produce profit and avoid wasting hard-earned money.

Oindrila Ajha, Souryadipta Das, Tiyasha Dutta, Soham Das, Sushruta Mishra, Laith Abualigah
Mispronunciation Detection Using Feature Learning

This research describes a study of the use of feature learning approaches for mispronunciation detection. Mispronunciation detection is essential in speech recognition and language learning applications. It has traditionally depended on handcrafted features and rule-based algorithms, which frequently have poor generalisation abilities and demand a lot of manual work, but recent advances in deep learning and feature learning have demonstrated promising results in enhancing the accuracy and robustness of mispronunciation detection systems. The study proposes an innovative method using Mel-frequency cepstral coefficients (MFCC) and an SVM. Comparing TF-IDF and MFCC feature extraction, labelled audio recordings are categorised as accurate or incorrect pronunciation. Audio data is preprocessed, and MFCC features are extracted using the librosa library. The SVM learns patterns between features and labels during training. Performance is evaluated on a Common Voice dataset, achieving 71% accuracy with MFCC and 70% with TF-IDF. Additional preprocessing and hyperparameter tuning result in 71% accuracy for TF-IDF. Overall, this research shows that MFCC feature extraction and an SVM classifier are effective tools for mispronunciation identification. By utilising the strength of feature learning on the Common Voice dataset, this research advances mispronunciation detection systems, ultimately enhancing language learning processes and facilitating the creation of more precise speech processing applications.

Priyanka Chhabra, Shailja Chhillar, Riya Tanwar, Muskan Verma, Gaurav Indra
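
A minimal sketch of the MFCC-plus-SVM pipeline the abstract describes, using librosa for feature extraction; the file names, labels, and the use of mean-pooled MFCCs are assumptions.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path):
    """Mean MFCC vector for one recording (13 coefficients as a simple summary)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical file names; labels 1 = correct pronunciation, 0 = mispronounced.
files = ["clip_001.wav", "clip_002.wav", "clip_003.wav", "clip_004.wav"]
labels = [1, 0, 1, 0]

X = np.vstack([mfcc_features(f) for f in files])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:1]))
```
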
Annotating Social Data with Speaker/User Engagement. Illustration on Online Hate Characterization in French

This paper presents an annotation framework relying on the linguistic notion of speaker/user engagement. This notion is suitable for a finer characterization of online hateful discourse as it allows addressing the following question: does the speaker engage himself to the truthfulness of hate content? Two resources were built to support the annotation framework: a taxonomy of speaker/user engagement degrees and a rich semantic resource of pictograms/emoticons. The paper describes the resources used and the annotation process. Preliminary experiments and results on sexism characterization in French are also presented.

Delphine Battistelli, Valentina Dragos, Jade Mekki
Heuristic Learning Model-Based Stochastic Regularization Technique for Reducing the Overfit of Training Data

Researchers are generally familiar with how to build and develop feedforward neural networks. Once we develop a network, there is the question of training and testing the dataset with the machine learning algorithm. We have to test our algorithm not only on the training set but also on the testing set with respect to its fit. When an algorithm performs well on the training set but poorly on the testing set, the algorithm is said to be overfitted on the training data. In short, there is a need to focus on reducing overfitting. To deal with this problem, we have to make our stochastic model generalize over the training data using the emphasized regularization technique. Here, the term stochastic refers to an outcome derived from a random event or random process, and regularization means reducing the error on the testing set, often in proportion to an increase in the error rate on the training set. Many such regularization techniques have been invented by research practitioners, but minimal evidence has been observed across types of learning rules, so a more effective regularization technique is needed. To achieve this, extra constraints, parameter-specific values, changes in the learning rate, and changes in the activation function need to be introduced into the machine learning model. If these are computed and chosen correctly, they help to reduce the testing error. Finally, in order to develop such a machine learning model, researchers have to look into the strengths and weaknesses of the model, new or derived theory, the synthesis of existing theory with the support of other derived theories, and the gaps, in order to come up with new models.

P. S. Metkewar, Rajesh Kumar Dhanaraj
Convolution Neural Network Versus Transfer Learning in Image Classification

The objective of this research paper is a comprehensive comparative analysis between two prominent approaches, namely Convolutional Neural Networks (CNN) and MobileNetV2-based transfer learning, for the task of image classification. Specifically, the focus is on determining the effectiveness of these approaches in accurately classifying images (in our case, cat vs. dog) (Szyc, K.: Comparison of different deep-learning methods for image classification. In: 2018 IEEE 22nd International Conference on Intelligent Engineering Systems (INES). IEEE (2018)). Through meticulous evaluation and comparison of results obtained from a benchmark dataset, this study aims to discern the strengths and limitations of each method. By shedding light on their respective merits, this research contributes to the advancement of image classification techniques and paves the way for further investigations in this domain.

O. Rama Devi, U. Surya Venkata Sekhar, S. Siva Rama Krishna, T. S. Rajarajeswari
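
A minimal Keras sketch of the MobileNetV2 transfer-learning baseline (frozen ImageNet backbone with a small binary head); image size and head layers are assumptions, not the paper's exact setup.

```python
import tensorflow as tf

# Transfer learning: reuse MobileNetV2 features pretrained on ImageNet and train
# only a small binary head for the cat-vs-dog task.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep pretrained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```
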
Unveiling Consumer Segmentation: Harnessing K-means Clustering Using Elbow and Silhouette for Precise Targeting

Consumer segmentation is essential for accurate targeting and successful marketing efforts in today’s competitive business environment. Modern marketing groups individuals by interests and attributes. Segmentation drives targeting, personalization, and ROI. This article segments customers using K-means clustering. The Elbow method and the Silhouette score determine the appropriate number of clusters and increase segmentation accuracy. The study also examines the possibility of precision targeting and customized marketing techniques across sectors. Businesses may optimize marketing, improve customer satisfaction, and increase profits by using K-means clustering. This research helps marketers enhance targeting to compete in today’s market. Elbow- and Silhouette-guided K-means clustering may enhance client segmentation, engagement, loyalty, and economic success.

Shweta Saraswat, Vaibhav Agrohi, Mahesh Kumar, Monica Lamba, Raminder Kaur
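
A minimal scikit-learn sketch of choosing k with the Elbow criterion (inertia) and the Silhouette score on synthetic customer features; the feature choice is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic customer features (e.g., scaled annual income and spending score --
# the feature choice is an assumption).
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in ((0, 0), (4, 4), (0, 5))])

for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=7).fit(X)
    print(k,
          "inertia (elbow curve):", round(km.inertia_, 1),
          "silhouette:", round(silhouette_score(X, km.labels_), 3))
```
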
Multimodal Sentiment Analysis and Multimodal Emotion Analysis: A Review

Sentiment analysis, or opinion mining, is defined as the process of identifying the overall outlook of a person with respect to any event/entity. Emotion analysis, or affect analysis, is defined as the process of identifying the underlying feeling of a person towards an event/entity. A sentiment can be categorized into 3 categories; an emotion, however, can have multiple categories. Also, an emotion can have intensity, which a sentiment does not have. The following paper presents a review of the various multimodal sentiment analysis (MSA) and multimodal emotion analysis (MEA) techniques. Challenges and future scope are also discussed at the end of the paper.

Soumya Sharma, Srishti Sharma, Deepak Gupta
A Deep Dive into Brain-Computer Interface

Brain-Computer Interfaces (BCI) are a new-age technology that works by acquiring signals directly from the brain. Though the technology is new, it is promising and provides a great sense of hope for the future of computing. The contribution of BCIs spans from the field of computing to the field of medicine, and the need for BCIs is growing at a steady pace. The following study is a comprehensive summary of Brain-Computer Interfaces. This work introduces the technology to the audience and evaluates the merits and demerits that the technology comes with. Finally, the work gives a detailed description of how BCIs function. The study intends to describe each and every stage in the working of a BCI in a compendious way.

Jafar A. Alzubi, Snahil Subhra, Sushruta Mishra
Climatic Variable Assessment in a Smart Sensory Enabled Setting

This research paper focuses on the use of smart and intelligent environment technology to monitor climatic conditions and their impact on the environment. It explores the various applications of smart environmental monitoring systems in weather forecasting, air quality monitoring, agriculture, and disaster response. It emphasizes the advantages of using different sensors, clouds, communication devices, and other components to collect real-time data on temperature, humidity, air pressure, wind speed and direction, pollution, and harmful gases. This research paper reminds us of how important it is to use technology for environmental monitoring and decision-making toward sustainable living; with the help of smart systems, we can better take proactive measures to protect our planet. Overall, this research paper discusses how using smart technology to monitor our environment can help us understand and improve its condition. It provides information about air quality, natural disasters, agriculture, and weather patterns. The implementation shows that the XGBoost model delivers the best performance, with a 94.7% accuracy rate. This knowledge supports understanding and monitoring through smart systems for a sustainable future.

Waleed Hadi Madhloom Kurdi, Parnani Panda, Ankit Garg, Shrishti Swaraj, Sushruta Mishra, Ahmed Alkhayyat
Toward Improved Clustering for Textual Data

This study explores the possibilities of combining manifold learning with contextual embedding from Transformer models for textual cluster analysis. We leverage contextual embeddings to provide a more accurate text representation for text clustering analysis and pass the embedding through a manifold learning algorithm. The results of the experiment show that manifold learning can accentuate the contextual embedding which improves the performance of the clustering algorithms in the characterization and modeling of text data. We used the resulting clusters to distinguish between relevant texts in social media campaigns and showed that the resulting embedding provides a better representation for clustering analysis.

Ridwan Amure, Abiola Akinnubi, Oyindamola Koleoso
Self-adaptive Learning Algorithm as a Tool for the Development and Strengthening of the Dyslexic Student's Skills in the Study of Musical Composition

In this period, the topic of Artificial Intelligence (AI) is at the center of various discussions that mainly concern the teaching/learning system and the contribution that AI can make in the classroom. From a naturally inclusive perspective, technologies (based on AI) from the early stages of design and development must take into consideration the needs of all students, without neglecting dyslexic students. In that sense, the main problem is represented by the ability of these tools to support the dyslexic student in studying and not in carrying out the assigned tasks for him/her. This research paper presents an algorithm (which combines the Viterbi algorithm with the Markov process) designed and developed to enhance the learning process of a dyslexic student in the area of Music Composition, in particular, in the harmonization of a melodic line. The harmonization of a melodic line involves many cognitive processes and systems and a learning disability may lead to over-fatigue and to a psychological distress of the student, with the subsequent loss of confidence and personal motivation. The algorithm looks like a tutor able to support the student through suggestions during the harmonization of a musical melody (and not by providing the solution). The experimental results show that the algorithm, given a melodic line, is able to correctly generate a bass line as well as harmonize each sound of the melodic line. It is also demonstrated (through a case study) that the algorithm can have a positive impact on dyslexic students’ motivation and engagement in learning. Finally, educational implications and research suggestions are provided based on the research results.

Michele Della Ventura
Digital-Game-Based Language Learning: An Exploration of Attitudes Among Teacher Aspirants in a Non-metropolitan Area

The growing acceptance of video games and their usage in language instruction has only served to highlight the importance of good preparation for aspirant teachers in educational institutions. As digital learning games can satisfy students’ desire for enjoyable educational experiences, several learners held the opinion that engaging in video games would improve their capacity for more advanced cognitive processes. The primary factor influencing the standard of digital game-based learning is the sizeable amount of time learners devote to playing games (Kunter et al. in Learn Struct 18(5):468–482, 2008 [43]; Papastergiou in Comput Educ 52:1–12, 2009 [44]). Due to this, offering educational gaming to young learners offers a conceivable advantage in the curriculum. Thus, 90 primary school teacher aspirants were chosen as participants for the current study, which employed a quantitative descriptive approach. This study discloses that learners have a ‘positive attitude’ toward the utilization of video games in the classroom. These results are in agreement with other similar investigations (Kazu and Kuvvetli in Educ Inf Technol 1–26, 2023 [20]; Ray and Coulter in J Digit Learn Teach Educ 26:92–100, 2010 [21]) that examined how learners view the value of games in terms of how they can alter after playing them. They noticed that many players or students in a certain game displayed a favorable attitude following their involvement in class. They believed that playing video games could help learners learn more effectively. Upon playing the online games, attitudes among learners toward VG appeared to have changed, according to Ray and Coulter in J Digit Learn Teach Educ 26:92–100, 2010 [21], Kenny and McDaniel in Br J Edu Technol 42:197–213, 2011 [22]. The results suggest that gaming is advantageous to experiences that may support learners’ understanding of the importance of VGs. Furthermore, in terms of the association between the respondents’ attitudes and gender, it was found that the latter is a factor that influences the attitude of the teacher aspirants. Additionally, this paper also found that in terms of the usefulness of video games in the classroom, gender is not a factor that influences the attitude of the respondents toward DGBLL. The results were further elaborated in the study. Analyzing respondents’ opinions on language learning through digital games can undoubtedly help address the Philippines’ lack of ICT literacy as a future elementary school teacher. The results of this study will establish benchmarks or guidelines for enhancing the nation's educational system.

Princess Krizzle M. Casiano, Bernadeth A. Encarnacion, Shania H. Jaafar, Ericson O. Alieto
A Unified Approach for Identification and Analysis of the Sources of Uncertainty in Machine Learning Techniques

Over the last decade, there has been an intensive effort to introduce technologies that increase system smartness by incorporating artificial intelligence and machine learning, resulting in a better way of life. With this evolution, it has become particularly important to understand how trustworthy the decisions made by autonomous systems driven by artificial intelligence, coupled with machine learning and deep learning capabilities, really are. Uncertainty quantification (UQ) has thus become a topic of interest over the last few years. In this paper, we observe the behavior of the epistemic (model), aleatoric (data), and distribution uncertainties and deduce a mathematical relationship between them. Next, we show that the epistemic probability function is inversely proportional to the distribution probability function and directly proportional to the aleatoric probability function. Moreover, we demonstrate the impact of identifying in-domain and out-of-domain distributions on model and data uncertainties. Finally, we perform an array of experiments using lung cancer data to demonstrate that improved accuracy may be obtained by striking a correct balance between the three forms of uncertainty. According to the experimental findings, methodical data selection aids in making informed decisions, strengthening the system's dependability and enabling it to achieve close to 99% accuracy even with basic machine learning models.

Sourojit Pal, Sandip Roy, Avishek Banerjee, Kaushik Majumdar, Umesh Gupta, Saurabh Rana, Sachin Shetty
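
A hedged rendering, in LaTeX, of the proportionality relation stated in the abstract (epistemic uncertainty varying directly with aleatoric and inversely with distribution uncertainty); the constant k is unspecified by the abstract.

```latex
% Relation stated in the abstract, with k an unspecified proportionality constant:
P_{\mathrm{epistemic}} \;\propto\; \frac{P_{\mathrm{aleatoric}}}{P_{\mathrm{distribution}}},
\qquad\text{i.e.}\qquad
P_{\mathrm{epistemic}} \;=\; k\,\frac{P_{\mathrm{aleatoric}}}{P_{\mathrm{distribution}}}.
```
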
Sentiment Analysis of Tweets Associated with Turkey-Syria Earthquakes 2023

The recent world consistency is based on perceptions of people rather than people themselves. These days, the search for reliable information about natural disasters and events has become prevalent on social media sites. Hence, platforms like Twitter act as an essential tool to disseminate information and mobilize support during crises. This research study aims to investigate the sentiments and emotions of tweets in the context of the Turkey-Syria Earthquakes of 2023. The researchers collected a dataset of tweets using relevant hashtags and keywords and labelled the dataset using a stacking classifier with TextBlob, the Valence Aware Dictionary and sEntiment Reasoner (VADER), and Flair, using Random Forest, Logistic Regression, Decision Tree, XGBoost, and Naïve Bayes as the base estimators. The data was analyzed using various machine learning models and deep learning architectures. Comparing all the models, the Convolutional Neural Network achieved the highest validation accuracy (98.1%), followed by the Support Vector Classifier (97.7%), with Naive Bayes achieving the lowest (84.7%). The study also found that the most common emotions in the tweets are ‘unemotional’ and ‘disgust’, using stacking classifiers with TextBlob, the Valence Aware Dictionary, and WordNet. The studies reviewed in this literature review demonstrate the effectiveness of machine learning algorithms and features for sentiment analysis on current affairs and highlight the importance of considering the linguistic and cultural context of the text. However, there are still challenges to be addressed, such as dealing with noisy and biased data, adapting to different languages and domains, and handling context-dependent sentiment expressions. In addition, further research is required to improve the accuracy of sentiment analysis and to explore the use of other factors such as context, sarcasm, and irony.

Harkiran Kaur, Pritika Sharma, Sahil Kadiyan
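
A minimal sketch of the lexicon-based labelling step using two of the tools named in the abstract (TextBlob and VADER); Flair and the stacked meta-classifier are omitted, and the simple agreement rule is a stand-in, not the paper's method.

```python
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Weak-labelling sketch for a single (illustrative) tweet.
tweet = "Praying for everyone affected by the earthquake, the rescue teams are heroes."

blob_polarity = TextBlob(tweet).sentiment.polarity                      # in [-1, 1]
vader_compound = SentimentIntensityAnalyzer().polarity_scores(tweet)["compound"]

# Naive agreement rule as a stand-in for the paper's stacking step.
label = "positive" if (blob_polarity + vader_compound) / 2 > 0 else "negative/neutral"
print(blob_polarity, vader_compound, label)
```
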
Ensemble Classification with Lazy Predict on Three Diabetes Datasets: A Comparative Study with Resampling Techniques

Millions of people throughout the world suffer from the chronic illness diabetes mellitus. Effective diabetes care and complication avoidance depend on early diabetes prediction and diagnosis. Using three distinct datasets—the PIMA India dataset, the NHANES dataset, and Mendeley's diabetes dataset—we give a thorough analysis of diabetes prediction in this study. Lazy Predict enables us to efficiently evaluate a wide range of classifiers on each dataset, providing valuable insights into model performance. The top-performing model on each dataset is selected as the best individual model. Furthermore, ensembles are created by combining the predictions of the top ten models, both without any resampling and with resampling techniques. Random forest achieved the highest accuracy of 79% on the PIMA dataset, XGB achieved the highest accuracy of 99% on Mendeley's dataset, and the dummy classifier attained the highest accuracy of 88% for the NHANES dataset. The ensembles without oversampling consistently outperformed their counterparts with resampling; surprisingly, the ensemble without oversampling exhibited the highest accuracy overall, followed by the ensemble with oversampling, challenging the common notion that resampling always leads to improved performance.

Afshan Hashmi, Md Tabrez Nafis, Sameena Naaz, Imran Hussain
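
A minimal sketch of the Lazy Predict screening step, assuming the lazypredict package's LazyClassifier interface and synthetic stand-in data in place of the three named datasets.

```python
from lazypredict.Supervised import LazyClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Stand-in for the PIMA-style features (8 clinical attributes, binary outcome);
# the real study loads the three named datasets instead of this synthetic data.
X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LazyClassifier(verbose=0, ignore_warnings=True, custom_metric=None)
models, predictions = clf.fit(X_train, X_test, y_train, y_test)
print(models.head(10))  # leaderboard from which the top-ten ensemble would be built
```
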
Employee Attrition Prediction using Ensemble Methods

Employee attrition, or employees quitting the firm willingly, is a persistent problem for contemporary businesses. This is a serious issue for businesses, especially when critical personnel such as qualified technicians depart for more advantageous positions. Financial losses are incurred to replace the skilled workforce as a result. When employees quit their jobs, they typically carry valuable, unspoken knowledge with them that gives the organization a competitive advantage. A corporation should prioritize reducing personnel attrition with the goal of maintaining a persistent competitive advantage over its competitors. This study focuses on employee attrition prediction analysis, which helps businesses estimate staff turnover and promote economic growth, addressing the pressing issue of voluntary turnover in contemporary businesses. In this work, four ensemble techniques were used, with the MLP, Random Forest, and KNN ensemble achieving the highest accuracy of 88.73%. This study emphasizes the value of proactive retention methods for fostering a healthy workplace environment and guaranteeing organizational stability.

Chayti Saha, Partha Chakraborty, Prince Chandra Talukder, Md. Tofazzal Hosen, Md. Mohi Uddin, Mohammad Abu Yousuf
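
A minimal scikit-learn sketch of the MLP/Random Forest/KNN ensemble mentioned in the abstract, built as a soft-voting classifier on synthetic stand-in data (the real feature set and the exact ensembling scheme are assumptions).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for an HR attrition table (class imbalance is typical, so the
# minority "leaver" class is made rarer; the features here are assumed).
X, y = make_classification(n_samples=1500, n_features=20, weights=[0.8, 0.2], random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=3)

ensemble = VotingClassifier(
    estimators=[("mlp", MLPClassifier(max_iter=500, random_state=3)),
                ("rf", RandomForestClassifier(random_state=3)),
                ("knn", KNeighborsClassifier())],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```
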
Measuring Bias in Generated Text Using Language Models—GPT-2 and BERT

In Natural Language Processing (NLP), a language model is a probabilistic statistical model that estimates the likelihood that a particular sequence of words will appear in a sentence based on the words that came before it. In our experiment, texts for evaluating bias were generated from prompts using the BERT and GPT-2 language models. Sentiment, toxicity, and gender polarity are the bias measures that we included in order to measure biases from numerous perspectives. While fine-tuning the BERT model, we achieved 91.48% accuracy on multilabel toxic comment classification. Later, this fine-tuned pretrained model is used for generating text from BOLD dataset prompts. Our work shows that a greater percentage of the texts produced by GPT-2 were labeled as toxic than of those produced by BERT. As in the religious ideology domain, BERT's communism prompt resulted in a toxic text. Compared to BERT, GPT-2 produced writings that were more polarized in terms of sentiment, toxicity, and regard.

Fozilatunnesa Masuma, Partha Chakraborty, Al-Amin-Ul Islam, Prince Chandra Talukder, Proshanta Roy, Mohammad Abu Yousuf
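
A minimal sketch of the prompt-continuation step with GPT-2 via the Hugging Face transformers pipeline, using an off-the-shelf sentiment pipeline as a stand-in for the paper's toxicity, sentiment, and gender-polarity measures; the prompt is illustrative.

```python
from transformers import pipeline

# Generate completions with GPT-2 and score each with a generic sentiment model
# (a stand-in for the toxicity / regard classifiers used in the study).
generator = pipeline("text-generation", model="gpt2")
scorer = pipeline("sentiment-analysis")

prompt = "The people who follow communism are"
completions = generator(prompt, max_new_tokens=30, num_return_sequences=3, do_sample=True)

for c in completions:
    text = c["generated_text"]
    print(scorer(text)[0], "<-", text[:80])
```
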
A Certain Investigation on Undersea Water Image Object Detection and Classification Using Artificial Intelligence Algorithms

The increase in the usage of marine resources in recent years has drawn attention toward the underwater image processing research field; however, underwater images face challenges such as severe absorption and scattering in water, low contrast, and monotonous color, which complicate processing. In this paper, we make a comprehensive study of underwater image processing in recent years. The study includes both machine learning and deep learning approaches used for the detection of underwater objects. It also presents the current scenario along with future trends and provides better insight into research directions in underwater image processing. Recently, artificial intelligence (AI) techniques have been widely used for the detection and classification of undersea objects and have demonstrated good outcomes. This study categorizes AI into machine learning and deep learning techniques. We analyzed 31 papers related to underwater image object detection from various publishers, such as IEEE, Springer, Elsevier, and Taylor and Francis. Most of the survey analysis is based on the study of deep learning models. Hence, this study presents the advantages and disadvantages of each model along with the open research challenges of undersea object detection and classification.

Kaipa Sandhya, Jayachandran Arumugam
DDoS Cyber-Attacks Detection-Based Hybrid CNN-LSTM

Protecting software-defined networking (SDN) against cyber-attacks has become crucial in an expanding digital threat environment. Distributed Denial-of-Service (DDoS) attacks are especially risky since they may seriously interrupt operations. To mitigate these risks, this study introduces an anomaly detection method that utilizes a hybrid convolutional neural network and long short-term memory (CNN-LSTM) deep neural network. This model merges the CNN's ability to automatically extract spatial features with the LSTM's proficiency in sequence modeling, thereby enhancing the detection of anomalies in network traffic metadata. The model also integrates an autoencoder structure to facilitate representation learning and reduce dimensionality. The model's effectiveness was tested using publicly accessible SDN datasets, and the results were remarkable. The model identified DDoS attacks with an accuracy rate of over 99%, surpassing the performance of previous shallow learning models. Moreover, the model proved highly adaptable, successfully detecting attacks across various data samples. This deep learning-based detection system is a significant advancement, providing precise and efficient analytics that bolster real-time cybersecurity monitoring. However, it is crucial to continue research into deployment, interpretability, and the potential of combining this approach with other advanced technologies; only by exploring these areas can the full potential of artificial intelligence for adequate cyber protection be harnessed.
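The hybrid architecture described above can be approximated in Keras as a 1D CNN feature extractor feeding an LSTM; the window length, feature count, and layer sizes below are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: a 1D CNN + LSTM binary classifier over windows of flow features,
# loosely following the hybrid described above (shapes and sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES = 10, 20          # e.g. 10 consecutive flow records, 20 features each

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),  # local spatial patterns
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                                      # temporal dependencies
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                                # DDoS vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```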

Thura Jabbar Khaleel, Nadia Adnan Shiltagh
Proposed Multilevel Secret Images-Sharing Scheme

Visual secret-sharing (VSS) applications are essential for guaranteeing the secure and private sharing of sensitive information. This paper introduces the Multilevel Secret Image-Sharing Scheme (MLSIS) as an effective method for securely sharing secret images among participants with various access levels, with the aim of providing thorough conclusions and comparisons while presenting the data accurately. For pixel-level access control and secrecy, the multilevel secret image-sharing model presented in this work employs a bit-level technique and the RC4 encryption algorithm to share an image securely among multilevel participants. The model involves three levels, each with a fixed number of participants. The primary objective is to distribute shares in a way that permits each level to reconstruct the secret image by combining at least the minimum required number of participants according to its own (k, n) threshold. The scheme described in this study therefore provides a reliable and effective solution for users with varying levels of access to share confidential images. The bit-level approach and strong encryption algorithms work together to provide secrecy and limit access to the shared images, and practical implementations demonstrate the model's successful use in real-world circumstances.
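RC4 itself is a standard stream cipher, so one ingredient of such a scheme can be sketched directly: a plain-Python RC4 keystream used to XOR-mask image bytes before shares are distributed. This illustrates only the encryption step, not the authors' multilevel (k, n) sharing construction; the key and file name are placeholders.

```python
# Hedged sketch: standard RC4 keystream generation, usable to XOR-mask image
# bytes prior to share distribution (the multilevel sharing logic is omitted).
def rc4_keystream(key: bytes, length: int) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                          # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(length):                       # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

image_bytes = open("secret.png", "rb").read()     # hypothetical secret image
keystream = rc4_keystream(b"level-1-key", len(image_bytes))
masked = bytes(b ^ k for b, k in zip(image_bytes, keystream))
```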

Nahidah T. Darweesh, Ali Makki Sagheer
Image Captioning System for Movie Subtitling Using Neural Networks and LSTM

With the advent of the Internet, the multimedia business has experienced explosive growth and is now available to consumers worldwide. The dominance of over-the-top (OTT) platforms during the COVID-19 pandemic has been particularly important to this growth, as has the adoption of state-of-the-art Machine Learning (ML) methods. These algorithms can generate captions from video frames automatically, increasing a platform's accessibility. However, it can be difficult to meet the needs of users speaking different languages, as many films are streamed in languages that may be inaccessible to consumers in other parts of the world, and individuals who are not fluent in English, the most widely spoken language in the world, may have trouble finding videos in their native language. By automatically creating English subtitles for any movie, regardless of the language spoken on the original audio track, machine learning plays a significant role in lowering this barrier. The key models employed here are Neural Networks, used for processing each frame of the video, and LSTM models, used for caption synthesis. Once trained, the models can be incorporated into a user interface (UI) using a programming language such as Python, where the generated caption is shown alongside the uploaded image.

K. Vijay, Eashaan Manohar, B. Saiganesh, S. Sanjai, S. R. Deepak
Semantic Application Based on the Bhagavad Gita: A Deep Learning Approach

Recent evolutions in Natural Language Processing involve techniques to convert text into meaningful high-dimensional numerical vector representations that can be used for applications such as sentiment analysis. However, these vector representations can be computationally heavy and challenging to interpret. This paper proposes using autoencoders with cosine similarity as their loss function to generate encoded vectors that intentionally discard some semantic information. Several such autoencoder models were trained on the entirety of the Hindu holy book, the Bhagavad Gita, to generate advice and answer questions. Comparing them with the same algorithm operating on the original vectors, we conclude subjectively that the quality of the output improved while the time taken to generate an output was significantly reduced. The findings suggest that intentional data loss through autoencoders is a promising technique for dimensionality reduction on word embeddings. The neural network was trained on the entire dataset using cosine similarity as the loss function. Future work could explore the potential of this approach in other areas of natural language processing and develop more objective measures for evaluating its effectiveness.
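A minimal Keras sketch of the central idea above: an autoencoder over sentence embeddings trained with cosine similarity as the loss. The embedding dimension, bottleneck size, and the random placeholder vectors are assumptions, not the authors' setup.

```python
# Hedged sketch: compress embedding vectors with an autoencoder whose loss is
# cosine similarity, as described above (dimensions and data are assumptions).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

EMB_DIM, CODE_DIM = 768, 64                     # e.g. BERT-sized vectors -> 64-d codes

inputs = layers.Input(shape=(EMB_DIM,))
code = layers.Dense(256, activation="relu")(inputs)
code = layers.Dense(CODE_DIM, activation="relu", name="encoder_out")(code)
decoded = layers.Dense(256, activation="relu")(code)
decoded = layers.Dense(EMB_DIM)(decoded)

autoencoder = models.Model(inputs, decoded)
# Keras' CosineSimilarity loss lies in [-1, 1] (-1 = identical direction);
# minimising it aligns reconstructions with inputs while ignoring magnitude.
autoencoder.compile(optimizer="adam", loss=tf.keras.losses.CosineSimilarity(axis=-1))

embeddings = np.random.randn(1000, EMB_DIM).astype("float32")   # placeholder vectors
autoencoder.fit(embeddings, embeddings, epochs=5, batch_size=32)

encoder = models.Model(inputs, autoencoder.get_layer("encoder_out").output)
compact_vectors = encoder.predict(embeddings)    # reduced vectors for downstream retrieval
```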

Anand Chauhan, Vasu Jain, Mohd. Mohsin, Manish Raj, Umesh Gupta, Sudhanshu Gupta
RMODCNN: A Novel Plant Disease Prediction Framework

The detection of plant diseases is essential for avoiding reductions in agricultural production. Plant disease research focuses mainly on visually observable patterns on the plant, and monitoring plant health and spotting diseases is crucial for sustainable agriculture. Doing so manually is very difficult: it requires considerable labor, expertise in plant diseases, and prolonged processing. In the presented research, a ratal mellifera-based DCNN model is developed for plant disease prediction. First, data is acquired from a plant leaf input dataset and pre-processed to remove noise. Next, a Region-of-Interest (ROI) extraction strategy is used to separate the relevant regions from inessential pixels for further processing. The extracted ROIs are then subjected to data augmentation, where a GAN-based approach is used to enhance the data and improve prediction accuracy. Features are subsequently extracted using ResNet-101, LTP, and LOOP, together with statistical features. Ratal mellifera optimization, derived from two optimization strategies, ratal optimization and mellifera bee optimization, is used to optimize the channel-boosted deep CNN. Considering both the TP and k-fold settings, the key performance indicators, accuracy (acc) and sensitivity (sen), for tea leaf and apple leaf prediction demonstrate better outcomes than recent state-of-the-art methods.

Vineeta Singh, Vandana Dixit Kaushik, Alok Kumar, Deepak Kumar Verma
Refined Human–Computer Interaction: Enhancing Efficiency and Collaboration

Optimized human and computer interaction aims to enhance communication and collaboration between humans and machines by creating interfaces that streamline interaction and minimize user cognitive load and physical effort. This paper explores the feasibility of utilizing hand gestures as a means to interact with computers and Internet of Things (IoT) devices, employing a monocular camera system and machine learning algorithms. The study consists of two parts: software interaction and interaction with IoT devices within a private network. A customized architecture has been developed, leveraging Google's Mediapipe Hands library to detect hand gestures, coupled with a machine learning model for gesture classification. The classification output is translated into real values for controlling various components, encompassing ranges, categories, positions, or continuous values. This technology enables users to regulate parameters such as brightness, volume, and cursor movement, as well as perform tasks like software manipulation and control of IoT devices like relays and robot clippers. The model demonstrated satisfactory accuracy in controlling components within games and virtual reality environments.
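A minimal sketch of the detection stage of such a pipeline, using MediaPipe Hands on a webcam feed; the downstream gesture classifier is represented here by a placeholder function rather than the trained model described above.

```python
# Hedged sketch: detect hand landmarks with MediaPipe Hands and pass them to a
# (placeholder) gesture classifier, similar to the pipeline described above.
import cv2
import mediapipe as mp

def classify_gesture(landmarks):              # placeholder for the trained ML model
    return "unknown"

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)                     # monocular webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark          # 21 (x, y, z) points
        gesture = classify_gesture([(p.x, p.y, p.z) for p in lm])
        print("gesture:", gesture)            # map to brightness, volume, cursor, etc.
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
```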

Shubham Singh, Harsh Pal, Ayan Ambesh, Akshat Singh, Deepali Kamthania, Alpna Sharma
The Impact of Data Valuation on Feature Importance in Classification Models

This paper investigates the impact of data valuation metrics (variability and coefficient of variation) on the feature importance in classification models. Data valuation is an emerging topic in the fields of data science, accounting, data quality, and information economics concerned with methods to calculate the value of data. Feature importance or ranking is important in explaining how black-box machine learning models make predictions as well as selecting the most predictive features while training these models. Existing feature importance algorithms are either computationally expensive (e.g. SHAP values) or biased (e.g. Gini importance in Tree-based models). No previous investigation of the impact of data valuation metrics on feature importance has been conducted. Five popular machine learning models (eXtreme Gradient Boosting (XGB), Random Forest (RF), Logistic Regression (LR), Multi-Layer Perceptron (MLP), and Naive Bayes (NB)) have been used as well as six widely implemented feature ranking techniques (Information Gain, Gini importance, Frequency Importance, Cover Importance, Permutation Importance, and SHAP values) to investigate the relationship between feature importance and data valuation metrics for a clinical use case. XGB outperforms the other models with a weighted F1-score of 79.72%. The findings suggest that features with variability greater than 0.4 or a coefficient of variation greater than 23.4 have little to no value; therefore, these features can be filtered out during feature selection. This result, if generalisable, could simplify feature selection and data preparation.
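The filtering rule suggested by these findings can be expressed in a few lines of pandas. The thresholds (0.4 and 23.4) come from the abstract; the dataset path is a placeholder, and the sketch assumes that "variability" refers to the standard deviation and that the coefficient of variation is expressed as a percentage.

```python
# Hedged sketch: drop features whose variability or coefficient of variation
# exceeds the thresholds reported above (dataset path and metrics are assumptions).
import pandas as pd

df = pd.read_csv("clinical_features.csv")           # hypothetical feature table
numeric = df.select_dtypes("number")

std = numeric.std()                                  # assumed "variability" measure
coef_of_variation = (std / numeric.mean().abs()) * 100   # as a percentage

keep = numeric.columns[(std <= 0.4) & (coef_of_variation <= 23.4)]
filtered = df[keep]
print(f"Kept {len(keep)} of {numeric.shape[1]} numeric features")
```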

Malick Ebiele, Malika Bendechache, Marie Ward, Una Geary, Declan Byrne, Donnacha Creagh, Rob Brennan
Evaluating Machine Learning Algorithms for New Indian Parliament Building Sentiment Analysis

The Indian Parliament is a great example of the nation's democratic ideals. The parliament, with its upper (Rajya Sabha) and lower (Lok Sabha) houses, forms the highest law-making body of the nation. The old parliament building, constructed during the British era, has structural issues and is insufficient to accommodate the increasing number of members of parliament. The ruling government of India therefore decided to construct a new building with superior infrastructure and architecture, incorporating Indian sculptures, which can accommodate the increasing number of members of parliament. The construction of the new building led to controversies among various stakeholders in the country and in neighbouring countries. The mixed opinions about the construction generated chaos, leading environmentalists and conservationists to file a case at the Supreme Court of India to pause the construction. It is therefore essential to study the various sentiments of Indian citizens and develop a model with machine learning techniques that can classify those sentiments as positive, negative, or neutral. To fill this gap, the authors generated two datasets, the New Indian Parliament Dataset-URL (NIPD-U) and the New Indian Parliament Dataset-Paragraph (NIPD-P), from authentic web sources related to India's new parliament building, and developed machine learning models with five classifiers to classify the text into positive and negative sentiments. For URL-based classification, the Naïve Bayes (NB) classifier outperformed the other classifiers with 91% accuracy, while paragraph-wise results showed that Random Forest (RF) achieved the highest accuracy of 92.34%.
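A minimal sketch of the kind of classical comparison described above: TF-IDF features fed to a Naïve Bayes and a Random Forest classifier. The dataset file and column names are placeholders, not the NIPD-U/NIPD-P files themselves.

```python
# Hedged sketch: TF-IDF + Naive Bayes / Random Forest sentiment classification,
# in the spirit of the comparison above (dataset and columns are placeholders).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

df = pd.read_csv("parliament_opinions.csv")          # hypothetical text/sentiment table
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["sentiment"], test_size=0.2, stratify=df["sentiment"], random_state=42)

for name, clf in [("NB", MultinomialNB()), ("RF", RandomForestClassifier(n_estimators=200))]:
    model = make_pipeline(TfidfVectorizer(max_features=5000, ngram_range=(1, 2)), clf)
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```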

Jatinderkumar R. Saini, Shraddha Vaidya, Shailesh Kasande
Adoption of Artificial Intelligence in Business Operations of Technology Firms

Artificial Intelligence (AI) has brought about significant transformation in the landscape of business operations in large technology firms globally. Addressing the growing need to investigate the factors that influence the adoption of AI in business processes in technology organizations, this study collects data from 252 middle-level employees about their perception of and attitude towards AI in business operations. Statistical analysis of the data shows that while performance expectancy has a significant positive effect on employee attitude and intention to use AI in business operations, effort expectancy does not have any significant effect. It is also found that employee attitude mediates the relationship of performance expectancy and effort expectancy with intention to use AI. The study provides interesting insights for practitioners about the nuances of implementing AI in business operations in large technology organizations.

Spardha Bisht, Santoshi Sengupta, Manish Kumar Bisht
An Artificial Intelligence Approach to Quantifying Exercise Form for Optimal Performance and Injury Prevention

Human body posture estimation remains one of the most challenging problems despite substantial research and improvements in the fields of artificial intelligence and computer vision. Human pose detection has several uses, including assisted living, video surveillance, biometrics, public security, and at-home health monitoring. Nowadays, young people between the ages of sixteen and twenty-five are encouraged to engage in rigorous training because they wish to maintain a certain level of physical fitness. If they do not maintain good form or posture, this demanding training can lead to muscle injuries or other minor or serious problems. To keep up such training safely, they should engage a personal trainer or someone who can help them follow the right methods and prevent injuries. However, this only applies to those who perform these intense exercises in a gym or other public space; at-home exercisers may not be able to afford personal trainers and instructors. An AI-trained model can instead be used to maintain good body posture during intense workouts. The study analyzes and evaluates participants' exercise movements using a combination of motion sensors, cameras, and machine learning algorithms. The AI-powered system examines each participant's form, offers real-time feedback, and corrects mistakes made while training with various workouts. This AI-trained model serves as a personalized trainer that can be tailored to the user's specifications: it assigns a preference category to each workout and notifies the user if it detects a posture issue.
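One common building block of such form-checking systems is computing joint angles from detected keypoints; the sketch below shows the calculation with NumPy. The example keypoints and the acceptable-angle range are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: compute the angle at a joint from three 2D keypoints and flag
# poor form when it leaves an allowed range (keypoints and range are illustrative).
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by the segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    ba, bc = a - b, c - b
    cosang = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# e.g. hip, knee, ankle keypoints from a pose estimator (normalised image coordinates)
hip, knee, ankle = (0.52, 0.40), (0.50, 0.60), (0.49, 0.82)
angle = joint_angle(hip, knee, ankle)
if not 70 <= angle <= 100:                    # illustrative squat-depth range
    print(f"Check your form: knee angle {angle:.0f} degrees is outside the target range")
```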

K. R. Sowmia, T. Jayaganeshan, F. Mohammed Abraar Khan, S. Madhesh, S. Kabilesh
Predicting Content Popularity on Social Media: An Analytical Approach Using Regression Modeling

The significant emergence of the "popularity" phenomenon has been fueled by the quick rise of influential social media platforms such as Facebook and YouTube, as well as the pervasive integration of electronic gadgets into daily life. Popularity essentially entails the rapid accrual of substantial views, frequently reaching into the thousands or millions, across videos, posts, and other content types, serving as a tangible reflection of user inclinations. Predicting content popularity is a formidable task because it relies on an array of factors, encompassing visual and social attributes such as views, likes, and comments, as well as variables like publication time, publisher identity, duration, and content specifics. This manuscript presents a comprehensive exploration of this subject, reviewing recent applications of machine learning techniques for predicting content popularity. It underscores the significance of judiciously selecting predictive attributes and appropriately configuring data models to attain accurate predictions. The work covers an array of regression models used in machine learning, including decision trees, random forests, support vector machines, ridge regression, and both linear and non-linear regression. Diverse classes of attributes employed for popularity prediction are delineated, encompassing text-based features, visual characteristics, metadata with a social dimension, and the fusion of multiple attributes. The paper further outlines the prevalent assessment metrics employed for evaluating regression models, namely mean absolute error, mean squared error, and root mean squared error, and includes a table summarizing the references, models, content types, features, and results of various studies related to popularity prediction.
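The assessment metrics named above are straightforward to compute with scikit-learn; the view-count arrays below are placeholders used only to illustrate the calculation.

```python
# Hedged sketch: the common regression metrics named above, computed on
# placeholder view-count predictions.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([1200, 45000, 980, 300000])      # actual view counts (illustrative)
y_pred = np.array([1500, 40000, 1200, 250000])     # model predictions (illustrative)

mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
print(f"MAE={mae:.1f}  MSE={mse:.1f}  RMSE={rmse:.1f}")
```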

Heba Al-Mamouri, Wadhah R. Baiee
Hybrid Encryption Algorithm: AES and PRESENT Lightweight with a Chaotic System (PHEPA-AES)

Cloud computing has gained great popularity recently and has become a focus of attention for companies and institutions, and even for individuals, because of the advantages it offers in saving money and managing resources with minimal effort. Despite all these advantages, a crisis of trust remains between the cloud and its users because of customers' concern about the leakage, theft, and disclosure of their data by unauthorized persons. The best way to secure data on the cloud is therefore to encrypt it before uploading it, ensuring that it cannot be viewed by unauthorized persons or by cloud service providers, and that even in the event of a cloud compromise the data remains unintelligible. In this paper, more than one encryption algorithm is combined into a proposed hybrid encryption algorithm comprising a new hybrid of modified AES and PRESENT (PHEPA-AES). The proposed scheme incorporates the PHEPA-AES hybrid encryption algorithm, which combines the PRESENT ultra-lightweight algorithm, the AES algorithm, and a chaotic map. This combination enhances the efficiency of cryptographic systems while increasing the randomness and security of secret keys. The use of the PRESENT algorithm and the chaotic map contributes to the lightweight nature of PHEPA-AES.
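A minimal illustration of one ingredient of such schemes: deriving pseudo-random key bytes from a logistic chaotic map. This is not the PHEPA-AES construction itself, and the map parameters, seed, and key length are assumptions.

```python
# Hedged sketch: derive key bytes from a logistic chaotic map x_{n+1} = r*x_n*(1-x_n),
# one possible way to add chaos-based randomness to a hybrid cipher's key schedule.
def chaotic_key_bytes(seed: float, r: float, n_bytes: int, burn_in: int = 1000) -> bytes:
    x = seed
    for _ in range(burn_in):                 # discard transient iterations
        x = r * x * (1 - x)
    out = bytearray()
    for _ in range(n_bytes):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)       # quantise the chaotic state to a byte
    return bytes(out)

key_128 = chaotic_key_bytes(seed=0.3141592, r=3.99, n_bytes=16)   # e.g. an AES-128 key
print(key_128.hex())
```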

Ibtihal Ali Khanjar, Haider K. Hoomod, Intisar Abd Yousif
Improving Network Intrusion Detection with Convolutional Neural Networks and Data Balancing Techniques

In order to maintain network security and detect intrusion attacks, Intrusion Detection Systems (IDS) have become an important technology. Due to the growing usage of the Internet, intrusion attacks have become more frequent and can lead to the theft, alteration, or deletion of user data. IDS provide an active approach to network security by differentiating between normal traffic and intrusion traffic, which must be blocked to protect the network, and they have a wide range of applications in academic and business communities. Researchers have employed deep learning methods for network traffic classification, and convolutional neural networks are popular due to their accuracy and versatility in handling various types of data. IoT systems, which are made up of connected devices, are also vulnerable to intrusion attacks, emphasizing the importance of IDS. To detect intrusion patterns, this work uses the NSL-KDD dataset, which is imbalanced, meaning that the distribution of classes is uneven, and employs a CNN. The proposed approach includes a preprocessing step that oversamples the minority classes with the SMOTE technique to improve the performance of the CNN-based IDS. The CNN achieved an excellent accuracy rate of 99.87%, while an accuracy of 90.12% was obtained when incorporating the DLNID model to detect intrusion traffic. This demonstrates the potential of IDS and deep learning techniques for maintaining network security and protecting against intrusion attacks.
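A sketch of the balancing step described above, applying SMOTE from imbalanced-learn before training a small 1D CNN; the feature files, feature count, and network sizes are placeholder assumptions, not the exact NSL-KDD preprocessing.

```python
# Hedged sketch: oversample minority classes with SMOTE, then train a small
# 1D CNN, mirroring the preprocessing described above (shapes are assumptions).
import numpy as np
from imblearn.over_sampling import SMOTE
import tensorflow as tf
from tensorflow.keras import layers, models

X = np.load("nslkdd_features.npy")            # hypothetical (n_samples, 41) feature matrix
y = np.load("nslkdd_labels.npy")              # hypothetical binary labels (0 = normal, 1 = attack)

X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
X_res = X_res.reshape(-1, X_res.shape[1], 1)  # add a channel axis for Conv1D

model = models.Sequential([
    layers.Input(shape=(X_res.shape[1], 1)),
    layers.Conv1D(32, 3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_res, y_res, epochs=10, batch_size=256, validation_split=0.1)
```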

Yaqot Mohsin Hazzaa, Shahla U. Umar
MbAbI: A Benchmark Dataset for Malayalam Text Understanding and Reasoning

This paper proposes a Malayalam dataset containing 40,000 instances, intended to assist researchers working on Question Answering systems. This dataset is the first of its kind. It was created by translating the Facebook bAbI dataset, yielding the Malayalam bAbI (MbAbI) dataset, and comprises 20 different tasks of varying complexity, ranging from simple supporting-fact tasks to complex path-finding tasks. We machine-translated these tasks from English to Malayalam and tested baseline QA models on them, obtaining state-of-the-art results on MbAbI using deep learning and transformer models. In order to benefit the low-resource Malayalam research community, we have made this dataset publicly available.

K. Reji Rahmath, P. C. Reghu Raj, P. C. Rafeeque
A Novel Approach to Image Restoration and Image Enhancement

Image restoration and image enhancement are fundamental tasks in computer vision and image processing. Image restoration aims to recover the original information from a degraded or corrupted image, while image enhancement aims to improve the visual quality and interpretability of an image. This research paper presents a comprehensive theoretical framework that integrates both image restoration and image enhancement techniques. We explore various methods, algorithms, and mathematical models to address the challenges associated with these tasks. The proposed framework leverages both classical and deep learning-based approaches to achieve superior performance in restoring and enhancing images. The experimental results demonstrate the effectiveness and versatility of our proposed approach in a wide range of applications.

Divya Singh, Bhawna Upadhayay, Pradeep Gupta, Sonam Gupta
Smart Helmet with Crash Detection

Riding a motorcycle is an exhilarating experience, but it comes with risks. In 2019, the National Highway Traffic Safety Administration reported that motorcycle riders were 28 times more likely to die in a crash than passenger car occupants. A smart helmet is a wearable device that can enhance the safety of motorcycle riders. It includes features such as crash detection and wear detection, which can prevent the bike from starting unless the rider is wearing the helmet. The helmet is equipped with accelerometers and gyroscopes that help detect crashes, and it is connected to a GSM module that sends a message to emergency contacts and a nearby hospital in case of an emergency.
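One common way to detect a crash from the helmet's inertial sensors is to watch the acceleration magnitude for a sharp spike; the sketch below illustrates the idea, with the sensor read, alert function, and threshold all being placeholder assumptions rather than the paper's implementation.

```python
# Hedged sketch: flag a crash when acceleration magnitude exceeds a threshold,
# then trigger the GSM alert (sensor read and alert functions are placeholders).
import math, time

CRASH_THRESHOLD_G = 6.0          # illustrative threshold, in g

def read_accelerometer():        # placeholder for the real IMU driver
    return 0.1, -0.2, 1.0        # (ax, ay, az) in g

def send_gsm_alert(message):     # placeholder for the GSM module interface
    print("SMS:", message)

while True:
    ax, ay, az = read_accelerometer()
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude > CRASH_THRESHOLD_G:
        send_gsm_alert("Possible crash detected - please check on the rider.")
        break
    time.sleep(0.05)             # roughly 20 Hz sampling
```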

K Antony Kumar, S Ravikumar, Angeline Lydia, Rohit Yadav
Digital Payment Systems and Financial Inclusion: Examine How Digital Payment Systems, Such as Mobile Wallets and Digital Currencies, Can Improve Financial Inclusion by Providing Access to Banking Services for the Unbanked and Underbanked Population

For economic development and poverty alleviation, financial inclusion is essential. This study's overarching goal is to assess the prevalence of financial exclusion across various demographic subsets in both urban and rural settings, and to explore how the widespread use of digital payment systems might influence those figures. Two hundred people were randomly selected to participate in a cross-sectional survey, and the data were collected with a survey framework containing questions about financial exclusion and the use of digital payment systems. Financial inclusion was shown to be higher in urban regions than in rural areas across all age groups, and this discrepancy persisted even after controlling for other factors. Individuals with a high adoption of digital payment systems were shown to have a higher likelihood of being fully banked, while those with a moderate adoption level had a lower likelihood. These results highlight the necessity for tailored initiatives to promote financial inclusion among various demographic groups and shed light on the significance of digital payment systems in closing the financial inclusion gap.

Mukul Gupta, Deepa Gupta, Priti Rai
Enhanced Prediction of Breast Cancer Using Machine Learning Ensemble Models and Techniques

Breast cancer is acknowledged as one of the world's most formidable diseases, and an accurate and timely diagnosis is key to improving patient outcomes and general wellbeing. This research used machine learning methods to investigate several ensemble approaches to breast cancer diagnosis prediction, making extensive use of the Breast Cancer Wisconsin dataset. The aim was to provide a comprehensive evaluation of the predictive power of five ensemble models: Random Forest, Gradient Boosting, AdaBoost, Bagging, and Extra Trees. The approach included several criteria for assessment, including accuracy, precision, recall, and F1-score, and further analysis was carried out using the ROC curve, the precision-recall curve, and other statistical tools. Among all the models tested, AdaBoost performed the best.
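The five ensembles named above are all available in scikit-learn, and the Breast Cancer Wisconsin data ships with it, so the comparison can be sketched directly; the hyperparameters here are library defaults, not the authors' tuned settings.

```python
# Hedged sketch: compare the five ensemble classifiers named above on the
# Breast Cancer Wisconsin dataset bundled with scikit-learn (default settings).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              AdaBoostClassifier, BaggingClassifier, ExtraTreesClassifier)
from sklearn.metrics import accuracy_score, f1_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "AdaBoost": AdaBoostClassifier(random_state=42),
    "Bagging": BaggingClassifier(random_state=42),
    "Extra Trees": ExtraTreesClassifier(random_state=42),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(f"{name:17s} acc={accuracy_score(y_test, pred):.3f} f1={f1_score(y_test, pred):.3f}")
```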

E. Chandralekha, S Ravikumar, K Antony Kumar, M. J. Carmel Mary Belinda
Enhancing Adaptive E-Learning with Generative AI: Expanding the Horizon Beyond Recommendation Systems

In this paper, we introduce a pioneering approach to e-learning recommendation systems by harnessing the capabilities of Generative AI. Traditional recommendation methods have been trained on small-scale special-purpose datasets and they often fall short when deployed in e-learning applications, due to their inability to capture evolving user preferences and content diversity. In this work, we instead turn to Generative AI systems such as large language models (LLMs), which are extremely powerful general-purpose systems trained over Internet-scale data. We design specific prompting strategies that enable our Generative AI-based recommendation system to dynamically adapt to learners’ evolving preferences as they progress through course content, transcending the limitations of state-of-the-art systems, which still rely heavily on collaborative and content-based filtering. Beyond mere content suggestions, our system engages learners in real-time, providing personalized explanations and practice materials, and even generating bespoke learning resources. Experimental validation on real-world data demonstrates remarkable improvements in user engagement and learning outcomes compared to conventional systems. This Generative AI-driven adaptive e-learning approach not only enhances recommendations but also redefines the e-learning experience itself, marking a significant leap forward in creating a more adaptive, engaging, and enriching educational environment.

Venkata Bhanu Prasad Tolety, Venkateswara Prasad Evani
A Fusion Framework of Pre-trained Deep Learning Models for Oral Squamous Cell Carcinoma Classification

Oral cancer is one of the most serious illnesses in the world, and prompt, effective treatment can significantly increase patient survival and cure rates. Manual diagnosis from Histopathologic Images (HI) is time-consuming and requires an expert doctor. To reduce the burden on doctors, a computerized technique is helpful for classifying such medical images. In this work, we propose a deep learning framework for classifying Oral Squamous Cell Carcinoma (OSCC) from HI. Two pre-trained deep learning architectures, MobileNet-V2 and DarkNet-19, were fine-tuned in the proposed framework; both were selected based on their recent performance and small number of parameters. Both models were trained using the transfer learning concept, and features were extracted from their global average pooling layers. A Chaotic Crow Search optimization algorithm was then employed to select the best features, which are finally classified using machine learning classifiers. A publicly available dataset was utilized for the experiments, and the highest accuracy of 92% was obtained. Compared with some state-of-the-art techniques, the proposed framework shows improved accuracy.
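The feature-extraction step described above, reading a backbone out at the global average pooling layer, can be sketched with Keras' MobileNetV2. The image size, placeholder data, and downstream SVM are assumptions, and the DarkNet-19 branch and Chaotic Crow Search selection step are omitted.

```python
# Hedged sketch: extract global-average-pooled features from MobileNetV2 and feed
# them to a classical classifier, as in the pipeline above (selection step omitted).
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", weights="imagenet", input_shape=(224, 224, 3))

def extract_features(images):                 # images: float array (n, 224, 224, 3) in [0, 255]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(images)
    return backbone.predict(x)                # (n, 1280) pooled feature vectors

train_images = np.random.rand(8, 224, 224, 3) * 255      # placeholder histopathology patches
train_labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])         # 0 = normal, 1 = OSCC (illustrative)

clf = SVC(kernel="rbf").fit(extract_features(train_images), train_labels)
```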

Muhammad Attique Khan, Momina Mir, Muhammad Sami Ullah, Ameer Hamza, Kiran Jabeen, Deepak Gupta
Backmatter
Metadata
Title
Proceedings of Third International Conference on Computing and Communication Networks
Editors
Giancarlo Fortino
Akshi Kumar
Abhishek Swaroop
Pancham Shukla
Copyright Year
2024
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-9708-92-5
Print ISBN
978-981-9708-91-8
DOI
https://doi.org/10.1007/978-981-97-0892-5