
2025 | Book

Proceedings of Fourth International Conference on Computing and Communication Networks

ICCCN 2024, Volume 3


About this book

This book includes selected peer-reviewed papers presented at the Fourth International Conference on Computing and Communication Networks (ICCCN 2024), held at Manchester Metropolitan University, UK, during 17–18 October 2024. The book covers network and computing technologies, artificial intelligence and machine learning, security and privacy, communication systems, cyber-physical systems, data analytics, cybersecurity for Industry 4.0, and smart and sustainable environmental systems.

Table of Contents

Frontmatter
Human-Artificial Intelligence, Social Intelligence on Metacognitive and Technological Skill Enhancement in Active Learning Environments

This study focuses on the collaborative impact of human, social, and artificial intelligence on enhancing metacognitive and technological skills in active learning environments. This study’s contribution presents an AI theoretical framework based on established ideas in active learning, reflective practice, technology skills, and metacognition. By expanding our knowledge of the potential of AI as a teaching tool, this article helps pave the way toward more meaningful and rewarding educational encounters. The influence of artificial, social, and human intelligence on improving metacognitive abilities and technological skills is immense when designing collaborative learning environments. Deep learning and reflection cannot occur without critical thinking, problem-solving, and special human skills. To build a welcoming classroom environment, students need to improve their social intelligence, which helps them become more empathetic, effective communicators, and team players. Students can participate in meaningful interactions, improve their metacognitive abilities through self-regulation, and reflect on their learning process because of these characteristics of social and human intelligence. This study, based on survey results, found that AI can tailor learning experiences and feedback to the unique needs of each pre-service teacher. AI-powered platforms and tools can examine pre-service teachers’ performance data, find areas where their knowledge is lacking, and provide individualized interventions to help them fill those gaps. Combining AI with human and social intelligence enhances the responsiveness and dynamic nature of active learning environments. This integration not only helps students build their digital skills but also increases their capacity to track, assess, and modify their own learning methods. 
Together, these intelligences complement one another, creating a comprehensive teaching and learning approach that nurtures and enhances the metacognitive and technological abilities of pre-service teachers.

Ahmad Al Yakin, A. Ramli Rasjid, Ali Said Al Matari, Luis Cardoso, Muthmainnah Muthmainnah, Nitish Pathak
A Robust Technique for Survival Prediction on Pre-operative MRI Images Using SE-CNN and Cox Regression

The survival prospects for patients with malignant tumours are poor, which makes survival prediction crucial for treatment planning. Features extracted from segmented tumour images can be used to predict survival; however, manually estimating survival from these images is imprecise, so an automatic survival prediction method is needed. We use a 2D U-Net to produce segmented images, achieving a Dice coefficient of 88.1 for testing and 93.76 for training. Our proposed survival prediction workflow uses an SE-CNN, which combines convolution, max pooling, and squeeze-and-excitation layers, to extract features that are then integrated with clinical data such as age and survival days. Cox proportional hazards regression analysis is used to choose features with a p-value less than 0.05. Finally, a gradient boosting algorithm is trained on these features to predict survival, yielding an RMSE of 264.42, which improves on state-of-the-art methods.
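As a rough illustration of the pipeline this abstract describes (significance-based feature screening followed by gradient boosting), the sketch below uses synthetic data and a univariate p-value filter standing in for a full Cox proportional hazards fit; every variable name, coefficient, and threshold here is illustrative, not the authors' code.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for SE-CNN image features plus clinical data.
X = rng.normal(size=(200, 10))
survival_days = 300 + 50 * X[:, 0] - 30 * X[:, 1] + rng.normal(scale=20, size=200)

# Keep only features whose univariate association has p < 0.05,
# mirroring the paper's significance screen (Cox regression in the original).
keep = [j for j in range(X.shape[1])
        if stats.pearsonr(X[:, j], survival_days)[1] < 0.05]
X_sel = X[:, keep]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, survival_days, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(len(keep), round(rmse, 2))
```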

Urvashi Dhand, Najme Zehra Naqvi
Teachable Machine Model Combining Machine Learning and Deep Learning for Disease Diagnosis

In the context of Industry 4.0 becoming a development priority in many fields, artificial intelligence (AI) has emerged as a promising trend in healthcare. This chapter builds on previous research and combines experiments on various datasets to propose a simple yet highly effective deep learning model for disease diagnosis via X-ray images. The model is deployed on a website that can be accessed remotely. This approach has the potential to reduce examination and screening time, minimize errors, and enable convenient remote use. Experimental results indicate that the proposed model achieves 95% accuracy, demonstrating the method’s effectiveness. Consequently, it can save time, costs, and resources while mitigating the limitations of traditional methods.

Quoc Hung Nguyen, Xuan Huy Bui
Autism Spectrum Disorder Prediction Using Particle Swarm Optimization and Convolutional Neural Networks

The integration of PSO with a CNN provides a promising approach for classifying ASD using sMRI data. ASD is a behavioral disorder that affects a person’s ability to interact socially throughout life. The variability and intensity of ASD symptoms, together with the fact that they overlap with symptoms of other mental disorders, make early diagnosis difficult. A key limitation of CNNs is selecting the best parameters. To overcome this, we use PSO as an optimization approach within the CNN to choose the most relevant parameters for training the network. In the proposed approach, we initialize a swarm of particles, where each particle represents a unique configuration of CNN hyperparameters, including the number of convolutional layers, learning rate, filter size, and batch size. To evaluate the swarm in PSO, we use a fitness function, such as accuracy, to measure each particle’s performance. The proposed approach outperformed the other optimizers for ASD prediction, with a high convergence rate.
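The swarm loop the abstract describes can be sketched as follows. The fitness function here is a hypothetical surrogate for validation error over two hyperparameters (a real run would train a CNN per particle), and all swarm coefficients are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogate: "validation error" as a function of
# (log10 learning rate, filter size); minimized at (-3, 5).
def fitness(p):
    log_lr, filt = p
    return (log_lr + 3) ** 2 + 0.1 * (filt - 5) ** 2

n, dim, iters = 20, 2, 50
pos = rng.uniform([-5, 1], [-1, 9], size=(n, dim))  # particle positions
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    # Standard PSO velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest.round(2))  # swarm converges near log_lr ≈ -3, filter ≈ 5
```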

Bhagya Lakshmi Polavarapu, V. Dinesh Reddy, Mahesh Kumar Morampudi
Chaotic Harris Hawks Optimization Based Feature Selection with Attention Deep Learning Driven Intrusion Detection in CPS Environment

A cyber-physical system (CPS) combines interconnected physical processes, networking units, and computing resources, and monitors the applications and processes of the computing systems. The interconnection of the cyber and physical worlds raises security problems, particularly with the increased complexity of network communications. Despite efforts to address these problems, analyzing and detecting cyber-physical attacks in complex CPSs remains difficult. Deep learning (DL)-driven techniques have been applied to investigate cyber-physical security systems. Therefore, this study presents a chaotic Harris Hawks optimization-based feature selection with attention-based DL (CHHOFS-ADL) technique for intrusion detection in the CPS platform. The aim of the CHHOFS-ADL technique is to ensure safety in the CPS platform through intrusion detection. First, the high-dimensionality problem is addressed by the CHHOFS method, which selects an optimal subset of features. Next, the attention-driven ConvLSTM Autoencoder (ACLSTM-AE) model is employed for intrusion detection. Finally, the detection rate of the ACLSTM-AE method is improved by the symbiotic organism search (SOS) model. The simulation results of the CHHOFS-ADL method were validated on benchmark databases. A wide range of experiments highlighted the superior performance of the CHHOFS-ADL method over recent models under various measures.

Mohammed Maray
Anti-occlusion Multi-object Tracking Based on YOLOv5

With the rapid growth of computer vision technology, the application of multi-object tracking (MOT) is becoming increasingly widespread. However, in practical applications, target occlusion remains an unresolved issue. This article proposes an anti-occlusion multi-object tracking method based on YOLOv5 (You Only Look Once version 5) and designs a multi-object tracking model to improve accuracy and robustness in complex environments. The article first reviews relevant research on multi-object tracking and analyzes in depth the impact of target occlusion on tracking performance. Subsequently, the basic principles of the YOLOv5 object detection algorithm are discussed in detail. On this basis, a multi-object tracking framework is designed that combines YOLOv5 with a re-detection mechanism. The framework fully utilizes the detection capability of YOLOv5 to accurately identify and locate target objects in video frames. Its innovation lies in the re-detection mechanism, which can promptly re-detect a target and restore tracking even when the target is occluded. The framework also enhances multi-scale detection by adopting a more effective feature pyramid network; this network structure better captures targets of different scales, enabling the algorithm to detect both small and large targets accurately while maintaining high speed. A self-adaptive anchor adjustment mechanism for different target shapes and sizes further improves the flexibility and adaptability of the algorithm. Testing shows that the model maintains a tracking accuracy of over 75%, with accuracy fluctuating around a benchmark value of 85%. These results demonstrate the superior performance of our model in anti-occlusion multi-object tracking.

Qin Zeng, Guocai Zuo, Xiuzhi Su
Analysis of Crop Disease Detection Using Image Data with Deep Learning Techniques

Early and accurate crop disease detection is important for effective crop management. Deep learning provides powerful techniques for automatically detecting crop diseases from crop images. This review examines various deep learning models built for crop disease detection, considering several key components such as transfer learning, GANs, attention mechanisms, image segmentation, hybrid models, and lightweight models. For crop disease detection modeling, we explore the techniques used in each step, including image processing, segmentation, feature extraction, and model architecture. Modifying pre-trained models can enhance disease detection accuracy, while attention mechanisms and hybrid models strengthen a model’s ability to focus on critical features and improve performance. A comparative analysis of different techniques combined with deep learning models reveals the resulting improvements, showing the need to integrate these techniques to achieve optimal performance, and also identifies a gap in research concerning disease detection in plants. Future research should aim to develop a unified model capable of detecting disease across all parts of a plant and all stages of disease progression to automate the diagnosis process.

Ashwini Sudam Patil, Rajesh Kumar Dhanaraj
Spotted Hyena Optimizer with Hybrid Deep Learning Enabled Cybersecurity in IoT Cloud Environment

With the ever-increasing demand for Internet of Things (IoT) and cloud services, robust security measures are needed to defend against the evolving landscape of cyberthreats, safeguarding the flexibility of interconnected systems and the security of sensitive data. An effective Android malware detection (AMD) method is necessary to increase the security of mobile applications and support their long-term sustainability. Recent research integrates machine learning (ML) approaches and feature engineering to recognize malicious code and protect users from potential risks. By enhancing the security of Android applications, developers and users gain confidence in the sustainability and reliability of the mobile app platform. This manuscript presents the design of a Spotted Hyena Optimizer with Hybrid Deep Learning Enabled Android Malware Detection (SHOHDL-AMD) technique for sustainable applications. The purpose of the SHOHDL-AMD approach is to exploit feature selection (FS) with an optimal DL model for the AMD process. To accomplish this, the SHOHDL-AMD technique employs the SHO-based feature subset selection (SHO-FSS) technique to choose an optimal set of features. Next, a convolutional neural network with sparse autoencoder (CNN-SAE) is applied for the AMD process. Moreover, the reptile search algorithm (RSA) is employed to tune the hyperparameter values of the CNN-SAE methodology. The performance of the SHOHDL-AMD methodology was tested on the CICAndMal2017 dataset. The comparative evaluation showed the superior performance of the SHOHDL-AMD methodology over other approaches to the AMD process.

Hamed Alqahtani
Unveiling the Statistical Power of MLE in Visualizing EHR Software Reliability Using Weibull Distribution

Background: The Product Classification Database provides a list of medical device names and their corresponding product classification codes. Its main purpose is to assist manufacturers, healthcare professionals, and the general public in identifying and classifying medical devices according to FDA regulations. This classification impacts the approval process, marketing, and post-market surveillance of the devices. Methods: Various statistics, including the number of sites, sample size, and date of receipt, are fitted with an appropriate distribution to analyze the reliability evaluation factor. The MLE approach for the two-parameter Weibull distribution (WD) is used for evaluation and enables a goodness-of-fit test. The KM survival curve, hazard curve, and CDF curve give the event probability with respect to time. Results: The proposed Weibull distribution serves as an alternative model for analyzing the reliability of EHR software product statistics, plotting frequency against cumulative percentage. Comparing various distribution fits (lognormal, gamma, and beta models) for survival analysis, our results show 95% confidence bounds on time for the WD-CDF. Conclusions: Our experimental results perform well for Beta-Weibull distributed data; the SF decreases while the SHF increases over time.
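A minimal sketch of the two-parameter Weibull MLE step described in the Methods, using scipy on simulated event times; the data and the true parameter values (shape 1.5, scale 100) are hypothetical stand-ins, not the paper's EHR statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical stand-in for EHR failure/recall times (days).
times = stats.weibull_min.rvs(c=1.5, scale=100, size=500, random_state=rng)

# Two-parameter Weibull MLE: location fixed at 0, as in the paper's WD model.
shape, loc, scale = stats.weibull_min.fit(times, floc=0)

# Survival function S(t) = exp(-(t/scale)^shape), evaluated at t = 100 days.
s100 = np.exp(-(100 / scale) ** shape)
print(round(shape, 2), round(scale, 1), round(s100, 3))
```

Goodness-of-fit could then be checked against lognormal or gamma fits with, e.g., a Kolmogorov–Smirnov statistic, mirroring the paper's model comparison.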

Madhuri Sharma, Abhilasha Singh
Usage of Big Data Analytics with Optimized Clustering to Interpret Consumers’ Buying Decision-Making

This paper discusses the use of big data analytics to understand consumers’ buying behavior using an enhanced K-means clustering algorithm. This study aims to develop coherent and precise marketing strategies for consumers’ purchase behavior that improve on existing learning models. It creates a user-value model by learning from and modeling the data of a particular online enterprise. A model for various categories of consumers is built using the enhanced K-means algorithm: the purchase behavior of users is divided into clusters, a customer value matrix is constructed, and the links between data points within the clusters are explored. This gives business managers a reference point from a sales perspective. The e-commerce users are classified into three categories by scoring. It has also been observed that as the sample size increases, feature points and multidimensional matrices can be formed from the sample, yielding better results. The algorithm is also compared with other existing algorithms, including DBSCAN, mean shift clustering, and hierarchical clustering. The final results imply that the enhanced K-means algorithm is stable and efficient, and that analysis of user clustering characteristics helps build more explicit and efficient trade plans.
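A minimal sketch of the three-segment clustering idea, using standard scikit-learn K-means on synthetic purchase features; the feature names and segment values are invented for illustration and this is not the paper's enhanced variant or its enterprise data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Hypothetical (purchase frequency, monetary value) features for three tiers.
low  = rng.normal([2, 20],   [1, 5],  size=(50, 2))
mid  = rng.normal([10, 80],  [2, 10], size=(50, 2))
high = rng.normal([25, 300], [3, 30], size=(50, 2))
X = np.vstack([low, mid, high])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
# Rank clusters by mean spend to label them as customer-value tiers.
order = km.cluster_centers_[:, 1].argsort()
centers = km.cluster_centers_[order]
print([c.round(0).tolist() for c in centers])
```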

Mudit Jain, Saishree Chinthakindi, Rahi Shukla, Aryabrat Mishra, Tiansheng Yang, Bharati Rathore
Digital Media Synthesis System on Basis of Network Intelligence

Digital media synthesis suffers from problems with the quality and generation time of synthesized images. To boost system performance, this paper introduces network intelligence into the system and conducts an in-depth study of digital media synthesis systems based on network intelligence, experimenting with image quality, video frame rate, image generation time, and video generation time for both GANs and autoencoders. The paper comprehensively analyzes the design principles, advantages, and disadvantages of each model, along with potential directions for improvement and optimization. The experimental results show that the evaluation value of the autoencoder model ranges from 0.84 to 0.93, and that of the GAN model ranges from 0.89 to 0.96. In practical tests, GANs outperform the autoencoder model in image quality and video frame rate, generating high-quality, diverse image samples and producing continuous video frames that meet the visual interests and requirements of end users.

Ye Tang, Jianwen Zhang
Advanced NLP Techniques for Detecting Online Fake News Using the Fake News Dataset

The increasing concern about the spread of misinformation and its negative impact on trust and democratic processes has led to the need for methods to identify fake news. Humans can only detect 54% of news accurately, with an additional 4% considered uncertain. This is especially crucial during times such as a pandemic, when inaccurate information can spread quickly. While machine learning algorithms have been considered for identifying fake news, their widespread application is still limited. This study introduces a machine learning approach that compares several techniques, including decision trees, support vector machines, random forest, logistic regression, Naive Bayes, and KNN. The models’ effectiveness is assessed on an existing dataset, aiming for high accuracy without overfitting. The approach involves handling missing information in columns and applying text preprocessing techniques. Key tactics such as K-fold cross-validation, TF-IDF vectorization, and principal component analysis (PCA) are used to maintain the model’s strength and practicality. The proposed decision tree model achieved a 99% accuracy rate on test data and 100% accuracy on the training set without signs of overfitting. This study contributes toward automating fake news detection and offers a tool to combat misinformation in the digital age.
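The core pipeline named above (TF-IDF vectorization, a decision tree classifier, K-fold cross-validation) can be sketched on a tiny made-up corpus; the PCA step and the real Fake News dataset are omitted for brevity, and the example texts are pure invention.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Tiny hypothetical corpus; the paper uses the Fake News dataset.
texts = ["shocking miracle cure doctors hate", "president signs trade bill",
         "aliens endorse candidate secretly", "court upholds state ruling",
         "celebrity reveals hoax diet secret", "senate passes budget measure"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10  # 1 = fake, 0 = real

# TF-IDF features feeding a decision tree, scored with K-fold CV.
clf = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(random_state=0))
scores = cross_val_score(clf, texts, labels, cv=5)
print(round(float(np.mean(scores)), 2))
```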

Roa’a Mohammedqasem, Hayder Mohammedqasim, Bilal A. Ozturk, Isra Ben Brahim, Heba Jadallah, Najwa Alwesabi
Genetic Molecules Sequencing Using Deep Cognitive Networks

DNA sequencing refers to the process of determining the exact order of nucleotides within a DNA molecule, which is crucial for identifying genetic variations and disease associations. It involves collecting and interpreting strands of DNA, letting scientists read the sequence of bases along a strand and revealing the genetic information encoded within it. Modern science relies heavily on DNA sequencing, which drives advances in various disciplines, including genetics, metagenetics, and phylogenetics. This study applies predictive analysis techniques to DNA sequencing, comparing five widely recognized models: decision tree, random forest, Naive Bayes, transfer learning, and CNN.

Sayan Garai, Chandrali Shyam, Aditi Biswas, Sushruta Mishra, Ashit Kumar Dutta
A New Modified Firefly Algorithm for Mining Association Rules

The Firefly Algorithm (FA) is an effective and powerful metaheuristic algorithm originally proposed for solving continuous optimization problems. Because FA has attractive properties such as stability, fast convergence, and easy implementation, several studies have modified it to accommodate discrete or binary domain problems. In this work, a revised firefly technique for mining association rules, named the Modified Firefly Algorithm for Association Rules Mining (MFA-ARM), is proposed, with significant modifications applied to the original FA to properly handle the association rule mining problem. A core modification in the proposed algorithm is a new movement technique, which regulates the travel of fireflies toward the most attractive one. The performance of the MFA-ARM algorithm was tested on three well-known datasets and compared with that of a recently developed metaheuristic, the DCS-ARM algorithm. The experimental results showed that the performance of the proposed MFA-ARM is close to that of DCS-ARM, since both metaheuristics discover high-quality rules with good confidence and support values. The main difference between the two algorithms lies in the size of the extracted rules: MFA-ARM discovers smaller rules than DCS-ARM on all three datasets, and these rules are more understandable and easier to interpret. The results also revealed that MFA-ARM has a somewhat quicker execution time than DCS-ARM on all experimented datasets. However, some steps of the MFA-ARM algorithm are only suitable for discrete data.
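The movement rule the abstract refers to (attraction toward brighter fireflies, decaying with distance, plus a damped random walk) can be sketched on a continuous toy problem; the discrete rule-mining encoding of MFA-ARM itself is not reproduced here, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def brightness(x):                  # light intensity: higher is better
    return -float(np.sum(x ** 2))   # maximizing this minimizes the sphere function

n, dim, iters = 15, 2, 100
beta0, gamma, alpha = 1.0, 0.1, 0.2
pos = rng.uniform(-2, 2, size=(n, dim))
best, best_val = None, -np.inf

for _ in range(iters):
    light = np.array([brightness(p) for p in pos])
    i_top = int(light.argmax())
    if light[i_top] > best_val:     # track the brightest position ever seen
        best, best_val = pos[i_top].copy(), float(light[i_top])
    for i in range(n):
        for j in range(n):
            if light[j] > light[i]:  # dimmer firefly i moves toward brighter j
                r2 = float(np.sum((pos[i] - pos[j]) ** 2))
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(dim) - 0.5)
    alpha *= 0.97                    # damp the random walk over iterations

print(round(best_val, 3))
```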

Rasha A. Mohammed, Mehdi G. Duaimi
Estimating Tongue Movements from Speech Using Deep Learning and Machine Learning Techniques

This study presents an innovative method for determining tongue movement during speech, accomplished using ultrasound tongue imaging (UTI) from the TaL80 dataset. Our methodology uses advanced deep learning and machine learning techniques. First, a feature extractor is built via transfer learning with ResNet50. Next, we use the Extra Trees classifier to select features effectively. Finally, we use support vector machines (SVMs) for classification. Our experiments indicate the effectiveness of the proposed technique in estimating tongue movements, with a high accuracy of 100%; the validity of the results is verified using evaluation measures such as F1-score, recall, and precision, all of which also reached 100%. We examined multiple configurations, including one without feature selection and the SMOTE technique and another with both, which allowed us to select the optimal solution. The proposed technique shows great potential for use in speech therapy, language learning, assistive communication devices, and other related domains, opening up new possibilities for future research.
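A minimal sketch of the described chain, tree-based feature selection feeding an SVM; random vectors stand in for the ResNet50 features extracted from TaL80 images, the label rule is invented, and the SMOTE step is omitted.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Hypothetical 64-dim stand-ins for ResNet50 features; only dimensions
# 0 and 3 carry signal in this toy setup.
X = rng.normal(size=(200, 64))
y = (X[:, 0] + X[:, 3] > 0).astype(int)

# Extra Trees importance selects features; an RBF SVM classifies them.
clf = make_pipeline(
    SelectFromModel(ExtraTreesClassifier(n_estimators=200, random_state=0)),
    SVC(kernel="rbf"))
acc = float(cross_val_score(clf, X, y, cv=5).mean())
print(round(acc, 2))
```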

Safa Emad Sabri, Jamal Mustafa Al-Tuwaijari
From Data to Action Toward Sustainable Marine Conservation: AI-Based Coral Health Assessment

The Coral Reef Health Assessment Study in the Gulf of Aqaba leverages advanced imaging and artificial intelligence (AI) technologies to evaluate coral ecosystems’ health, focusing on stress indicators like bleaching, tissue necrosis, and disease susceptibility. The study integrates Convolutional Neural Networks (CNNs) and decision tree (J48), optimizing the monitoring process with high-resolution underwater imagery and detailed environmental data analysis. Our results indicate a high accuracy in detecting coral health variations, with CNN achieving up to 95.6% accuracy in identifying healthy corals. The integrated approach of machine learning and meticulous data handling offers a promising avenue for enhancing the accuracy and efficiency of environmental monitoring and supporting robust conservation strategies.

Mohammad Wahsha, Heider Wahsheh
Enhancing Error Insight Accessibility for Visually Impaired Programmers

Programmers often favor Integrated Development Environments (IDEs) like Visual Studio (VS) Code, Eclipse, NetBeans, or IntelliJ IDEA over simple text editors because they integrate key features and functionalities. However, visually impaired (VI) programmers often encounter difficulties using these IDEs because of their complex interfaces, which can impede blind professionals from completing their work. This paper focuses on the accessibility challenges VI individuals face in programming, where tools designed for sighted users do not meet the requirements of VI programmers. We present a solution to one specific difficulty VI programmers face when using an IDE: reading error messages from the output terminal window. To address this problem, a custom NVDA extension called “Error Insight Narrator” has been created. This extension improves the comprehension and interpretation of error messages and integrates into VS Code, supporting a wide range of programming languages. The primary goal of this solution is to improve accessibility and inclusivity in programming for VI users, allowing them to participate more efficiently in programming activities. Feedback from VI programmers verifies the effectiveness of the Error Insight Narrator plugin. A comparative evaluation with the NVDA screen reader reveals notable improvements in understanding and navigating error messages for users with visual impairments. This innovation represents a step forward in making programming tools more accessible and user-friendly for all individuals, regardless of visual ability.

K. M. Kajal, Vivekanand Jha
Deep Neural Networks for Multiclass Brain Tumor Detection and Classification

A brain tumor is the abnormal growth and proliferation of cells in the tissues of the brain. When a particular gene in a chromosome of a cell is damaged or not functioning properly, an intracranial tumor or growth occurs, called a brain tumor. It can be either benign (non-cancerous) or malignant (cancerous). Conventionally, diagnosis of a brain tumor begins with radiology performed by a specialist on an outpatient basis: MRI scans, followed by analysis of the report and a conclusion. Applying computer-aided diagnosis (CAD) to detect tumors in MRI images is significantly more efficient and provides accurate results. Computer-assisted brain tumor diagnosis consists of tumor identification followed by classification. Despite much valuable past research using traditional machine learning methods, results reliable and accurate enough for brain tumor diagnosis are still lacking, and the current trend is toward employing deep learning approaches. The proposed model achieved a final accuracy of 96.9% across the four different classes of brain tumor.

Swati Rawat, Himanshu Mittal
An Unknown Risk Analysis and Assessment Method for Computer Networks at the Distribution Edge

As the construction of the Internet of Things (IoT) for power distribution continues, millions of power distribution devices, electrical quantity sensors, and status sensors will be connected to the IoT network. This produces a huge number and variety of heterogeneous distribution data in the IoT, whose security and stability are of increasing concern. However, due to the complexity and dynamics of the computer network environment at the distribution edge, traditional risk assessment methods often struggle to comprehensively and effectively identify and assess potential unknown risks. Therefore, this paper constructs a method for analyzing and assessing the unknown risks of computer networks at the distribution edge, aiming to guarantee the information security of the distribution system. Edge computing technology is introduced to redefine the relationship between cloud, management, and end, and an edge computing platform is deployed on the end side to achieve real-time, efficient, lightweight data processing in situ and to collaborate with a new generation of distribution automation cloud masters, achieving distribution station autonomy in terms of network, data, and business. The method first constructs a multi-dimensional risk characterization indicator system by collecting multi-source data from the distribution edge computer network to comprehensively reflect changing trends in the network state. Then, machine learning and data mining techniques are used to perform correlation analysis and pattern recognition on risk features to discover potential risk factors and their correlations. The communication transmission losses of the edge computing network are all below 1.5%, and the risk analysis accuracy of the algorithm is 88.08%, much higher than that of other methods.

Xiangjun Duan, Yunshuo Li, Yuanyuan Xu
Hybrid Deep Learning Model for Skin Cancer Detection Based on CNN and LSTM

Skin cancer is classified into two types: melanoma and non-melanoma. Melanoma lesions are the most dangerous and aggressive, accounting for the recent increase in mortality and morbidity rates. Skin cancer detection methods typically involve several stages, including preprocessing, segmentation, feature extraction, and classification. While various transfer learning models have been proposed for skin cancer detection in recent years, they have struggled to achieve high accuracy. In this research, a hybrid deep learning model combining a CNN and an LSTM is introduced. The model is implemented in Python, and its performance is evaluated based on accuracy, precision, and recall. Results indicate that the proposed model outperforms existing approaches in terms of accuracy.

Nandini Rajput, Shano Solanki
On Importance of Code-Mixed Embeddings for Hate Speech Identification

Code-mixing is the practice of using two or more languages in a single sentence, which often occurs in multilingual communities such as India where people commonly speak multiple languages. Classic NLP tools, trained on mono-lingual data, face challenges when dealing with code-mixed data. Extracting meaningful information from sentences containing multiple languages becomes difficult, particularly in tasks like hate speech detection, due to linguistic variation, cultural nuances, and data sparsity. To address this, we aim to analyze the significance of code-mixed embeddings and evaluate the performance of BERT and HingBERT models (trained on a Hindi-English corpus) in hate speech detection. Our study demonstrates that HingBERT models, benefiting from training on the extensive Hindi-English dataset L3Cube-HingCorpus, outperform BERT models when tested on hate speech text datasets. We also found that code-mixed Hing-FastText performs better than standard English FastText and vanilla BERT models.

Shruti Jagdale, Mihir Inamdar, Gauri Takalikar, Omkar Khade, Raviraj Joshi
Harvest Commodities Analyzer: Precision in Crop Market Evaluation

This analysis model is an innovative approach to agricultural decision-making that uses the machine learning technique of decision tree regression. Considering a long list of variables, including past market performance, weather patterns, soil quality indicators, and geographical factors, the model provides precise and accurate crop yield forecasts. The model has been developed using the DecisionTreeRegressor class from the scikit-learn library, which gives it flexibility and scalability. It helps stakeholders devise flexible cultivation strategies and cautiously reduce risks, delivering actionable insight that can be used to raise productivity while improving profitability through sustainable agricultural practices.
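Since the abstract names scikit-learn's DecisionTreeRegressor, a minimal yield-forecasting sketch on made-up inputs might look like this; the feature names, coefficients, and error threshold are invented for illustration, not the authors' model.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
# Hypothetical inputs: (rainfall mm, soil quality index, past yield t/ha).
X = rng.uniform([200, 0, 1], [800, 10, 6], size=(300, 3))
# Invented linear yield relationship plus noise.
y = 0.004 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.2, 300)

# Fit on the first 250 samples; evaluate on the held-out 50.
model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X[:250], y[:250])
mae = float(np.abs(model.predict(X[250:]) - y[250:]).mean())
print(round(mae, 2))
```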

C. Shyamala Kumari, T. V. Mohan Rao, R. Navi Sekhar Reddy, N. Nitheesh Reddy, V. Ashok Kumar
A Smart Regression Test Suite Prognosis Using Bagging Model with Intelligent Test Case Prioritization

Regression test suite growth in software development is a problem that this study attempts to solve. Test case prioritization, regression test suite analysis, and bagging are combined using machine learning. Samples are also collected through sensor nodes for validation purposes. The method assesses the regression test suite using criteria such as dependency analysis, code coverage, and historical failure statistics. Using machine learning methods such as Random Forest, the chosen test cases are ranked according to how likely they are to fail. By combining the predictions of several machine learning models, the bagging method improves the prioritization process. Experiments conducted on real-world software projects show that prioritizing test cases with higher failure rates reduces the size of the suite and improves regression testing effectiveness.
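The prioritization idea can be sketched as follows: train a bagged ensemble on per-test-case features and rank the test cases by predicted failure probability. The features and labels below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(1)
n_tests = 200
coverage = rng.uniform(0, 1, n_tests)      # code coverage (assumed feature)
hist_fail = rng.uniform(0, 1, n_tests)     # historical failure rate (assumed)
deps = rng.integers(0, 20, n_tests)        # dependency count (assumed)
X = np.column_stack([coverage, hist_fail, deps])
# Synthetic label: tests that failed often historically tend to fail again.
y = (hist_fail + rng.normal(0, 0.2, n_tests) > 0.7).astype(int)

# Bagging over decision trees; averaging the members' votes yields a
# failure probability that serves as the priority score.
bag = BaggingClassifier(n_estimators=25, random_state=1).fit(X, y)
fail_prob = bag.predict_proba(X)[:, 1]
priority = np.argsort(fail_prob)[::-1]     # riskiest test cases first
```

Running only the top of the `priority` ordering is what shrinks the effective suite while keeping most of its fault-detection power.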

Shruti Dutta, Parija Raul, Patatri Goswami, Sushruta Mishra, Saif O. Husain
Enhancing Heart Disease Prediction Based on Stacking Classifiers and Hyperparameter Optimization

Machine learning has recently drawn the attention of medical fields, with several cutting-edge machine learning frameworks developed for heart disease prediction using diversified datasets. However, machine learning models have not yet fully achieved high accuracy in predicting heart disease while remaining resilient to further performance improvement. This paper introduces a novel way to predict heart disease based on machine learning techniques and investigates the effectiveness of stacking classifiers. The research analyzes data from popular heart disease datasets to enhance prediction using a combination of feature selection, outlier management, and hyperparameter tuning. The performance of key machine learning algorithms, such as Random Forest, XGBoost, and Gradient Boosting, is evaluated, with model performance optimized using grid search and cross-validation: the RF and Gradient Boosting models each obtained an accuracy of 97%, while XGBoost reached 96%, compared to other models. Our findings suggest that the stacking classifier is superior, achieving a very high accuracy of nearly 98% in comparison with traditional ML models. This work accentuates the potential of advanced ensemble methods in earlier detection and screening of heart disease and provides insights for further methodological advances and clinical applications. The proposed approach demonstrates robust predictive capability, paving the way for future research into more diverse and comprehensive datasets to validate and enhance the model's efficacy.
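A hedged sketch of the stacking-plus-tuning pipeline described above, using scikit-learn's StackingClassifier with Random Forest and Gradient Boosting base learners and a small grid search. The stand-in dataset (sklearn's breast cancer data) replaces the paper's heart disease data, and the tiny grid is illustrative only.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Stack two ensembles; a logistic regression meta-learner combines their
# cross-validated predictions.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=3,
)
# Cross-validated search over a deliberately small hyperparameter grid;
# nested parameter names address the named base estimator.
grid = GridSearchCV(stack, {"rf__n_estimators": [50, 100]}, cv=3)
grid.fit(X_tr, y_tr)
acc = accuracy_score(y_te, grid.predict(X_te))
```

A real study would widen the grid (learning rates, depths) and add the feature selection and outlier handling steps the abstract mentions.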

Abdulrahman Ahmed Jasim, Hayder Mohammedqasim, Roa’a Mohammedqasem, Bilal A. Ozturk
Improving the Prediction of Obstacles in a Living Room and Naming It to Blind People in Audio Using Particle Swarm Optimization Classifier and AlexNet

This study offers a novel technology strategy to help those who are blind or visually impaired. The objective is to identify and locate objects within a living room setting and then convey this data via auditory cues. The approach used AlexNet for precise object detection in addition to a novel technique known as Novel Particle Swarm Optimization. Thirty algorithm iterations and a sample size of forty were employed in the investigation. With 80% power, the significance threshold was established at α = 0.05. An independent samples t-test revealed a statistically significant difference (p < 0.05) between the two algorithms' accuracy and loss. Experimental investigation carried out using a Python compiler demonstrated that the Novel Particle Swarm Optimization classifier achieved an accuracy of 95.48%, surpassing the accuracy of AlexNet, which was 92.75%. This technology offers a high level of confidence in object detection, and pretrained models are shown to be more efficient in terms of computing time and cost. The system allows for the simultaneous detection of multiple objects, with their identities displayed as text before being spoken. This has the potential to greatly benefit visually impaired individuals by providing additional auditory assistance.
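The particle swarm optimization component alone can be sketched in a few lines; the paper's coupling of PSO to AlexNet features is not public, so the code below minimizes a stand-in quadratic objective, matching only the stated swarm size (40) and iteration count (30).

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in loss with its minimum at (1, 2); a real system would plug in
    # a classifier's validation loss here.
    return (x[:, 0] - 1) ** 2 + (x[:, 1] - 2) ** 2

n_particles, n_iters, dim = 40, 30, 2
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients (assumed)
for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Velocity blends momentum, pull toward each particle's best, and pull
    # toward the swarm's best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

After 30 iterations the global best `gbest` sits close to the optimum, showing how few evaluations PSO needs on a smooth objective.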

M. Premalatha, R. Bhavani, R. Mahaveerakannan, K. Sudhakar
Improving the Accuracy of Glaucoma Detection in Retinal Image Analysis by Utilizing a Convolutional Neural Network (CNN) in Comparison to the Support Vector Machine (SVM) Algorithm

Glaucoma has been identified as a significant factor in permanent vision loss worldwide. Early and accurate diagnosis is therefore of the utmost importance to help patients receive appropriate treatment. In recent years, deep learning, and more specifically convolutional neural networks (CNNs), has shown promising results in medical image analysis, particularly in the diagnosis of glaucoma. The research presented here demonstrates an approach that uses CNNs to improve the detection of glaucoma and evaluates its effectiveness against the conventional support vector machine (SVM) strategy. The proposed method trains a CNN model on a dataset of specifically annotated retinal images. Among the capabilities of CNNs is the power to independently extract distinctive attributes from raw image input, which eliminates the requirement to manually design features using algorithms. The SVM methodology is applied in this work for comparison with the approaches used previously.
The results indicate that the CNN-based technique is superior to the SVM algorithm in terms of accuracy, sensitivity, and specificity for glaucoma detection. This demonstrates that the CNN model can function well as an automated screening tool, since the retinal images in the test set include a variety of examples unrelated to glaucoma. Additionally, the research strives to identify the underlying factors that contribute to the CNN's enhanced performance.

Thukkaram Umashree, S. Narendran, R. Mahaveerakannan, K. Sudhakar
Artificial Cognition for Applications in Smart Agriculture

Imagine agriculture not as fields of sweat, but as fertile ground for technological revolution. This hidden giant, contributing a staggering 6.4% to the global economy and driving economies in some nations, faces unprecedented challenges: climate chaos, population surges, and the specter of food insecurity. Here’s where agriculture intelligence (AI) blooms, transforming the landscape from toil to tech-driven triumph. This abstract delves into AI's burgeoning applications, each a seed sown for a more sustainable future. Precision farming blossoms with pinpoint accuracy, delivering water and nutrients exactly where needed, minimizing waste and maximizing yields. Disease detection, through the watchful eye of AI-powered cameras, identifies threats early, safeguarding crops and reducing harmful pesticide use. Crop phenotyping, like an intelligent dialogue between plant and technology, unlocks insights into individual plant responses to their environment, paving the way for resilient and optimized breeding. This isn’t just about robots and sensors; it's about harnessing the power of data to illuminate smarter decisions. Imagine: reduced expenses, improved soil health, and ultimately, an abundance of food for a world teeming with life. Agriculture intelligence, the future of farming, is not just about growing crops; it's about cultivating a brighter future for us all.

Prantik Kumar Mahata, Soham Basuri, Sushruta Mishra, Ammar Hameed Shnain
AI Server Selection Mode of Internet Companies Based on Digital Intelligence RL Algorithm

With the rapid development of artificial intelligence technology in recent years, Internet companies face increasing demands for the allocation and optimization of server resources. This article proposes an intelligent algorithm based on reinforcement learning (RL) to improve the selection and configuration process of artificial intelligence servers. By analyzing and comparing existing server selection modes, the proposed RL algorithm model can dynamically optimize the allocation of server resources in order to improve computational efficiency and reduce energy consumption. The experimental results show that the RL algorithm model is superior to the traditional method in many key performance indicators, providing a more efficient server configuration strategy for Internet companies. In the benchmark performance comparison experiment, the average response time of the RL algorithm was 200 ms, and the processing capacity reached 400 requests per second. In the dynamic load adaptability experiment, the RL algorithm stabilized at 400 requests per second under load fluctuation. In the fault recovery experiment, in the case of a sudden server shutdown, the RL algorithm took only 20 min to recover to a fully functional state. In the final long-term operational efficiency experiment, the RL method showed a monthly improvement in performance and a decrease in operating costs over three months of operation. These experimental results show that the RL-based server configuration method outperforms traditional methods across performance indicators, demonstrating its broad potential and advantages in practical applications.
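The selection policy can be illustrated with a toy tabular Q-learning loop: the paper's RL model is not specified in detail, so this sketch assumes three servers with synthetic mean latencies and shows the agent learning to route requests to the fastest one.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_latency = np.array([0.9, 0.4, 0.7])   # assumed per-server latencies (s)
n_servers = 3
q = np.zeros(n_servers)                    # single-state Q table
alpha, eps = 0.1, 0.1                      # learning rate, exploration rate

for step in range(5000):
    # Epsilon-greedy action selection over the servers.
    a = rng.integers(n_servers) if rng.random() < eps else int(np.argmax(q))
    # Reward is negative observed latency: lower latency, higher reward.
    reward = -rng.normal(mean_latency[a], 0.05)
    q[a] += alpha * (reward - q[a])        # incremental Q update

best_server = int(np.argmax(q))            # should converge to index 1
```

A production system would extend the state to load, queue depth, and energy readings, but the update rule stays the same.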

Zhufeng Ye, Lei Chen, Lihui Wang
Improving Plant Disease Detection Accuracy Using Optimized Convolutional Neural Networks (CNN) Compared to Residual Networks (ResNet)

Maintaining food security and increasing the production of crops and agricultural products depends greatly on the accurate identification and categorization of plant diseases. This research examines how convolutional neural network (CNN) algorithms compare with ResNet approaches in plant disease identification systems. CNN and ResNet, two modern deep learning approaches, are used to diagnose pictures of diseased plant leaves and classify them according to their disease type. Preprocessing is applied to enhance image quality, and feature extraction is used to capture disease patterns most efficiently. In our experimental analysis, both methods are compared in terms of accuracy, precision, recall, and F1-score on benchmark datasets. According to the findings, CNN algorithms outperform ResNet models on most occasions, increasing the reliability of plant disease identification: CNNs provide very good results because they are capable of learning hierarchical features from raw photos, which leads to better classification of diseases. Finally, it is necessary to acknowledge the role of ResNet50 and CNN-based approaches in general with regard to plant disease diagnosis and management, which may lead to the improvement of agricultural practices and food production.

Saquib Nawaz Khan, S. Narendran, R. Mahaveerakannan, K. Sudhakar
Efficient Prediction of Heart Disease Using Heartbeat-Based Audio Signals with ResNet-50: A Comparison with an Ensemble Decision Tree Algorithm

Cardiovascular disease continues to be a prominent cause of death globally, highlighting the need for precise and effective prediction techniques to identify and address it at an early stage. This paper introduces a new method for predicting cardiac disease by producing audio signals based on heartbeats and using ResNet-50, a deep learning framework, for classification. The performance of this strategy is compared to an ensemble decision tree algorithm. Given that more and more wearables can potentially record sounds, including heart sounds, audio signals can be viewed as a source of information for model building. The method involves preprocessing unprocessed heart sound data to determine important attributes and feeding them to a ResNet-50 model for appropriate diagnosis of various instances of heart disease. Furthermore, we use the ensemble decision tree method to compare the results. Through experiments with actual samples of heart sounds, we demonstrate the efficiency of the proposed technique in predicting heart illness. We also evaluate how well ResNet-50 performs compared with the ensemble decision tree approach in terms of accuracy, sensitivity, specificity, and computing efficiency. In light of the present study, it may be concluded that the proposed ResNet-50-based approach outperforms the ensemble decision tree algorithm in terms of accuracy, showing that our approach can be a reliable technique for estimating heart disease. To sum up, the present work contributes to the development of heart disease prediction using up-to-date technology and methodologies and optimizes the model for efficient and accurate prediction.

Muthaiah Murali, K. Sashi Rekha, R. Mahaveerakannan, K. Sudhakar
Enhancing Lacquer Image Efficiency Using a Comparative Study of Linear Regression and Centroid Algorithms

This study aimed to assess and contrast the predictive power of two machine learning methods, novel linear regression and the centroid algorithm, in estimating the efficiency of an image lacquering procedure. The goal was to determine which algorithm forecasts the procedure's effectiveness more accurately. The sample size was ten per group, regardless of the method used, with a G power of 0.8 considered necessary for adequate statistical power. The meta-analysis showed a p-value substantially lower than 0.05, indicating that the two algorithms were significantly different from one another: the observed differences were definite and not a product of variance, and both algorithms offered a notable degree of predictive power. The specific weights and configurations used by the centroid approach could be employed to forecast the efficiency of the lacquering process. However, the data analysis established that, for the more nuanced task of accurately estimating the effectiveness of the lacquering process from photographs, the novel linear regression strategy is significantly superior to the centroid approach.
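The paper's "centroid" method is not fully specified, so the sketch below contrasts ordinary linear regression with a simple centroid-style predictor: assign each sample to the nearest of k feature centroids and predict that centroid's mean efficiency. The synthetic image features and efficiency target are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (300, 4))                 # assumed image features
eff = X @ np.array([0.5, 0.2, 0.2, 0.1]) + rng.normal(0, 0.02, 300)

# Linear regression fits the (here, genuinely linear) relationship directly.
lin = LinearRegression().fit(X, eff)
r2_lin = r2_score(eff, lin.predict(X))

# Centroid-style predictor: cluster the features, then predict each
# cluster's mean efficiency for all of its members.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
cluster_mean = np.array([eff[km.labels_ == c].mean() for c in range(5)])
r2_cent = r2_score(eff, cluster_mean[km.labels_])
```

On this smooth target the regression fit dominates, mirroring the study's conclusion; a centroid scheme would only compete when efficiency clusters into discrete regimes.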

V. Sirinischal, A. Shri Vindhya, R. Mahaveerakannan, R. Yuvarani
An Optimized and Robust Mobile Application Enabled Deep Learning Framework for Laryngeal Cancer Detection Using Voice and Image Data Samples

Laryngeal cancer has been identified as a very critical disease in the modern era. Identifying laryngeal cancer at the beginning stage is one of the most challenging tasks for clinicians, which may hinder the patient's diagnosis in the initial stage. In recent years, multifarious computer-aided diagnosis (CAD) screening approaches have been suggested by investigators for the timely prognosis of laryngeal cancer patients. Nevertheless, these developed screening instruments still have multiple limits, namely low accuracy in laryngeal cancer analysis, high computing cost, intricacy in generalization, time complexity, and more. In this research, an enhanced mobile application-based deep learning (DL) hybrid framework is proposed for laryngeal cancer analysis using voice and image data. The key aim of the mobile application-based DL framework is to determine the existence of laryngeal cancer in the beginning phase with high accuracy for effective diagnosis of laryngeal cancer patients. Our proposed DL framework is validated on multiple image and voice benchmark datasets, namely ImageNet, a CT imaging dataset, the Massachusetts Eye and Ear Infirmary (MEEI) dataset, and the Saarbrucken Voice Dataset (SVD). The observed performance of our mobile application-based deep learning framework is optimal and improved for laryngeal cancer patient screening. The evaluation metrics, namely accuracy, precision, recall, and F1-score, for the suggested laryngeal cancer detection model are observed at 99.29%, 98.76%, 98.39%, and 98.26%, respectively. There is promising scope for additional investigations in the future to explore novel DL-based methodologies for effective laryngeal cancer identification, using extensive voice and image datasets.

Pravat Kumar Sahoo, Sushruta Mishra, V. Sanjay
Improvement of PSNR for Digital Image Watermarking Using Discrete Wavelet Transform in Comparison with Discrete Cosine Transform Algorithm

Robustness and imperceptibility are essential in digital image watermarking to guarantee that the underlying data is not corrupted while also achieving a high Peak Signal-to-Noise Ratio (PSNR). The purpose of the present study is therefore to discuss the impact of two broadly used transformation algorithms, the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), on PSNR improvement for digital photo watermarking. The research entails a comparative analysis of the two algorithms through systematic experiments assessing a variety of characteristics, including embedding capacity, resistance to standard attacks, and the perceptual quality of the watermarked images. According to the findings, DWT-based watermarking techniques achieved greater PSNR improvement on the provided image collection than DCT-based methods. Furthermore, the proposed DWT technique's capability for multi-resolution analysis, operating simultaneously at several resolutions, helps identify the watermark signal in the frequency domain, reduce perceptual distortion, and augment the PSNR.
In addition, the DWT provides better resistance to the typical attacks applied in image processing, including compression, noise injection, and cropping, so the embedded watermark remains detectable long after it has been added. The article also covers the trade-offs between PSNR and other metrics, as well as the practical implications of choosing between DWT and DCT techniques for digital photo watermarking. Taken as a whole, the findings of this study provide substantial information that can be exploited to obtain the highest possible PSNR in digital photo watermarking, with major consequences for enhancing perceptual quality and ensuring the integrity and safety of digital content across a wide range of multimedia applications.
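A numpy-only sketch of the DWT embedding idea: a one-level Haar transform, watermark bits added to the approximation band with a small strength, inverse transform, then the PSNR of the watermarked image. The paper's exact wavelet and embedding rule are not specified; Haar and the additive rule here are assumptions for illustration.

```python
import numpy as np

def haar2d(img):
    # One-level 2D Haar transform (averaging variant), rows then columns.
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0   # detail bands kept for inversion
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(0)
host = rng.uniform(0, 255, (64, 64))      # stand-in host image
wm = rng.integers(0, 2, (32, 32))         # binary watermark
ll, lh, hl, hh = haar2d(host)
marked = ihaar2d(ll + 2.0 * wm, lh, hl, hh)   # small embedding strength

mse = np.mean((host - marked) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)    # high PSNR means imperceptible
```

Smaller embedding strength raises PSNR but weakens robustness, which is exactly the trade-off the study examines.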

B. Harshavardhan Sai, D. Sheela, R. Mahaveerakannan, K. Sudhakar
Action Recognition Method Based on Multi-branch Attention and Spatiotemporal Graph Convolutional Network

In light of the challenges associated with effectively leveraging interactive information across channels and spatial dimensions derived from skeleton key point data, a continuous action recognition method based on multi-branch attention and spatiotemporal graph convolutional network was proposed. Firstly, the interactive information between channel attention and spatial dimension is extracted by embedding multi-branch attention. The features obtained by spatial graph convolution are weighted by multi-branch attention, and then the subsequent temporal graph convolution operation is executed to enhance the representation ability of the model for spatial relations. Finally, the ability of a spatiotemporal graph convolutional network to capture spatiotemporal relationships was used to realize the recognition of continuous actions. The findings from the experiments indicate that the proposed model outperforms current methods in terms of recognition accuracy on the NTU-RGBD and Kinetics datasets.

Niankuan Chen, Bengan Su, Liang Li, Xiaolei Meng, Changren Hou, Yanmin Shi, Yang Liu, Zhen Zhao
Predicting Car Prices Using Machine Learning Algorithms

The COVID-19 pandemic has disrupted the functioning of the car market in Turkey. Consequently, various innovative car-buying and selling practices, including online inspections and remote appraising, have emerged. Machine learning is particularly beneficial for price prediction because such models are straightforward to implement. The present paper therefore examines the use of the Random Forest Regressor, Linear Regression, Decision Tree, and XGBRegressor algorithms in determining car prices, using a dataset of the 2020 Turkish used-car market. The research objectives of this project are the following: conducting a literature review on machine learning and used car price prediction and preprocessing the predictor data; comparing techniques such as Linear Regression, Decision Tree Regressor, Random Forest Regressor, and XGBRegressor on the formatted target variable; and deciding which model is most beneficial based on evaluation metrics. Hyperparameter tuning optimized the performance of the four models. XGBRegressor shows the highest R² score (93%) and the lowest MSE, MAE, and RMSE among the models, while Linear Regression, Decision Tree Regressor, and Random Forest Regressor achieved R² scores of 69%, 83%, and 91%, respectively. The results show that machine learning models can provide accurate car price predictions, helping customers make good decisions in the automobile industry.
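The comparison loop in the study can be sketched as follows: fit several regressors on a tabular dataset and rank them by R². The stand-in dataset (sklearn's diabetes data) replaces the Turkish car data, and GradientBoostingRegressor stands in for XGBRegressor to keep the sketch dependency-free.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "linear": LinearRegression(),
    "tree": DecisionTreeRegressor(random_state=0),
    "forest": RandomForestRegressor(random_state=0),
    "boosting": GradientBoostingRegressor(random_state=0),  # XGB stand-in
}
scores = {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    pred = m.predict(X_te)
    scores[name] = (r2_score(y_te, pred), mean_absolute_error(y_te, pred))

best = max(scores, key=lambda k: scores[k][0])   # highest held-out R^2
```

Swapping in `xgboost.XGBRegressor` and a `GridSearchCV` wrapper per model reproduces the paper's tuned comparison.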

Hayder Mohammedqasim, Roa’a Mohammedqasem, Bilal A. Ozturk, Habsa Omar Ibrahim
Development of a Robust Convolutional Neural Network for Automated Lung Cancer Detection

Lung cancer continues to be one of the prominent types of cancer leading to mortalities globally, and early and accurate diagnosis is crucial. This paper discusses the creation of a robust CNN that can be used for the diagnosis of lung cancer and the presence of tumors. Trained on a large number of labeled lung images, the CNN structure is designed to be highly accurate and fast. The model receives comprehensive training and cross-validation, employing deep learning technologies such as data augmentation, transfer learning, and regularization to improve its performance. Results for the proposed CNN show high accuracy, sensitivity, and specificity, outperforming popular methods and other proposed models. Visualization of the network also helps in analyzing the findings and making decisions based on clinical considerations. Lastly, there is a focus on how deep learning presents a great opportunity for the screening and diagnosis of lung cancer, especially at its early stages.

A. Anto Sagaya Priscilla, R. Balamanigandan
A Scalable and Fault-Tolerant Smart Home Multi-user Model for Urbanized Space

This article explores the development of an advanced smart home system that integrates IoT-enabled technologies and smart materials, enhancing user interaction and environmental adaptability. The focus is on creating a living environment in which light, temperature, and space configuration can be adjusted dynamically in response to external conditions and occupants' preferences. The system utilizes sensors, electrochromic windows, phase change materials (PCMs), and motorized furniture to optimize space utilization, comfort, and energy efficiency for the people living in the smart home. Sensors placed throughout the home, both indoors and outdoors, monitor sunlight intensity and temperature; their readings control the PCMs embedded in the walls and the electrochromic windows, which automatically adjust their properties for optimal thermal and lighting conditions. Motorized furniture reconfigures living spaces according to the user's demands. A central hub supporting Wi-Fi, ZigBee, or Z-Wave protocols coordinates all devices, including the sensors, actuators, and smart materials, and processes real-time sensor data to make intelligent adjustments.
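The hub's decision loop can be sketched as a simple mapping from sensor readings to actuator commands; the radio protocols (ZigBee, Z-Wave) are abstracted away, and the thresholds and comfort band below are illustrative assumptions, not values from the article.

```python
def hub_decide(sunlight_lux: float, indoor_temp_c: float) -> dict:
    """Map raw sensor readings to window and thermal actions."""
    actions = {}
    # Darken electrochromic windows under strong direct sunlight.
    actions["window_tint"] = "dark" if sunlight_lux > 50_000 else "clear"
    # Steer PCM/HVAC behaviour around an assumed 21-25 degC comfort band.
    if indoor_temp_c > 25:
        actions["thermal"] = "absorb_heat"    # let PCM soak up excess heat
    elif indoor_temp_c < 21:
        actions["thermal"] = "release_heat"   # discharge stored PCM heat
    else:
        actions["thermal"] = "hold"
    return actions

decision = hub_decide(60_000, 27)   # bright, warm afternoon
```

A real hub would layer occupant preferences and furniture scheduling on top of this rule core, or replace the fixed thresholds with a learned policy.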

Aloukik Pati, Ayushi Praharaj, Sushruta Mishra, Mohammed Al-Farouni
Comparison of Novel Convolutional Neural Network and Recurrent Neural Network Algorithms for Skin Nevus Detection with Improved Accuracy

Aim: The research focuses on detecting cutaneous nevi using an innovative convolutional neural network and compares its accuracy with recurrent neural network techniques. Methods and Resources: We employ novel Convolutional and Recurrent Neural Network techniques to predict the performance of skin nevus detectors, with a total iteration value of N = 10 for skin nevus detection and its types (malignant, benign). The iteration value was obtained with a G power of 0.8 at a 95% confidence interval. The training dataset consists of 1438 benign and 1199 malignant samples; the test dataset consists of two directories, with 320 malignant and 280 benign images. Iterations were measured as 20 per group. Result: With a total of 20 iterations per algorithm, an independent samples t-test showed a statistically significant difference of 0.023 (with p < 0.05 considered statistically significant) between the two algorithms in terms of accuracy and loss. Real-time skin nevus detection was then implemented: the Recurrent Neural Network attained an accuracy of 73.22%, whereas the Convolutional Neural Network reached 85.80%. In summary, the research shows that, for skin nevus detection, the novel Recurrent Neural Networks are not as effective as Convolutional Neural Networks.

R. Sri Sathya, R. Bhavani, R. Balamanigandan, R. Mahaveerakannan
Geographical Region Classification: A Performance Analysis Using Machine Learning Metrics

The classification of geographical areas is important in many fields, including urban planning, allocation of resources, and analysis of socio-economic factors. This paper evaluates how well a model for classifying regions performs on specific metrics, applying it to our dataset covering West Bengal, Assam, Bihar, Orissa, and Uttar Pradesh, with metrics such as certainty rate, recall, F1-score, instance count (IC), instance ratio (IR), and accuracy. Our study shows that classification is successful: high F1-scores were achieved for all categories, along with high certainty rates that indicate good performance in terms of correctness. Most notably, states such as Bihar and Orissa showed comparably better recall than the other states, which suggests that our method worked better than those used before. This study helps clarify how well such classification works and could thus be useful for future progress in geographical region classification.
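The per-class evaluation step described above can be sketched with scikit-learn's metric utilities; the region labels and predictions below are invented for illustration, not the paper's data.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

regions = ["WB", "Assam", "Bihar", "Orissa", "UP"]   # illustrative labels
y_true = ["WB", "Assam", "Bihar", "Orissa", "UP", "Bihar", "WB", "Orissa"]
y_pred = ["WB", "Assam", "Bihar", "Orissa", "UP", "Bihar", "Assam", "Orissa"]

# Per-class precision, recall, F1, and support, in a fixed label order.
prec, rec, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=regions, zero_division=0
)
acc = accuracy_score(y_true, y_pred)   # 7 of 8 correct -> 0.875
```

On real data, high per-class recall for particular states (as the paper reports for Bihar and Orissa) would show up directly in the `rec` array.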

Madan Mohan Tito Ayyalasomayajula, Pritha Singha Roy, Sathishkumar Chintala, Vinay Kukreja
Barnacles Mating Optimizer with Machine-Learning-Assisted Tweets Classification for Sustainable Urban Living

Currently, numerous social media platforms, namely Facebook, Twitter, Instagram, blogs, review sites, and news websites, permit people to share their opinions and reviews. Twitter is one of the vital sources of information for obtaining people's attitudes, feelings, opinions, and feedback. Within this context, Twitter sentiment analysis models are proposed in order to determine whether textual tweets convey a positive or negative opinion. The use of machine learning (ML) in tweet classification updates the procedure of data retrieval on social media platforms and simplifies insights into public trends, opinions, and emerging subjects. This manuscript offers a new Barnacles Mating Optimizer with Machine-Learning-Assisted Tweets Classification (BMOML-TC) technique for sustainable urban living. The BMOML-TC technique is intended to boost sustainable urban living through effective investigation of social networking data. It leverages preprocessing and Bidirectional Encoder Representations from Transformers (BERT) to extract complex features relevant to sustainable urban living. Moreover, a beta Variational Autoencoder (VAE) is utilized for classification of the tweets. Finally, the BMO algorithm is applied for automated and accurate parameter tuning. The performance of the BMOML-TC model was validated on a benchmark dataset, and the experimental outcomes confirm its significance in recognizing and identifying tweets on social media platforms.

Saad Alahmari
Gray Wolf Optimizer with Machine Learning Enabled Diabetes Mellitus Recognition Model

Diabetes mellitus (DM) is a persistent metabolic disorder typically diagnosed through various methods such as blood glucose tests, oral glucose tolerance tests, and HbA1c measurements. Early identification is pivotal for efficient management and the prevention of complications like kidney issues, heart diseases, and nerve damage. Machine learning (ML) and feature selection (FS) techniques play a crucial role in DM detection, employing data-driven approaches to identify pertinent features from datasets comprising medical histories, clinical measurements, and patient demographics. This research introduces the gray wolf optimizer with machine learning enabled diabetes mellitus recognition (GWOML-DMR) model. The GWOML-DMR model is developed to identify DM by merging FS and parameter tuning methods. Initially, variable stability scaling (VSS) is implemented to standardize the data uniformly. The model utilizes the gray wolf optimizer (GWO) for efficient feature selection, while a feedforward neural network (FNN) is employed for DM detection. Moreover, the hyperparameter tuning method, particle swarm optimization (PSO), enhances DM recognition within the FNN. GWOML-DMR augments detection rates, simplifies recognition processes, and contributes to risk assessment, ultimately leading to enhanced patient outcomes. Validation utilizing the PIMA Indians Diabetes dataset substantiates its superior performance in DM detection.
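The gray wolf optimizer component alone can be sketched in a few lines; the paper wraps GWO around feature selection for an FNN, which is not reproduced here, so the code below runs the standard continuous GWO update on a stand-in sphere objective. Swarm size, dimension, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(pop):
    # Stand-in objective (sphere function), minimum at the origin; the paper
    # would instead score feature subsets by classifier performance.
    return np.sum(pop ** 2, axis=1)

n_wolves, dim, n_iters = 20, 5, 100
wolves = rng.uniform(-10, 10, (n_wolves, dim))

for t in range(n_iters):
    order = np.argsort(fitness(wolves))
    leaders = wolves[order[:3]]            # alpha, beta, delta wolves
    a = 2 - 2 * t / n_iters                # coefficient decreasing 2 -> 0
    candidate = np.zeros_like(wolves)
    for leader in leaders:
        r1 = rng.random(wolves.shape)
        r2 = rng.random(wolves.shape)
        A = 2 * a * r1 - a                 # exploration/exploitation factor
        C = 2 * r2
        D = np.abs(C * leader - wolves)    # distance to the leader
        candidate += leader - A * D
    wolves = candidate / 3.0               # average the three leader pulls

best = wolves[np.argmin(fitness(wolves))]
```

A binary variant (thresholding each coordinate to include or exclude a feature) turns this same loop into the feature selector the abstract describes.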

Indresh Kumar Gupta, Awanish Kumar Mishra, Shruti Patil, Rahul Mishra, Joel J. P. C. Rodrigues
Study the Effect of Ibuprofen Loaded with Green Nanocomposite on Blastocyst Implantation in Female Albino Rats

The objective of this study was to prepare and characterize a green nanocomposite from Moringa oleifera and improve its biological activity so that it reaches the target cells. The Moringa nanocomposite was characterized by atomic force microscopy (AFM), which confirmed the success of the loading process: the average particle size of the M. oleifera extract was 165.4 nm, while that of the Moringa nanocomposite was 11.10 nm. The average particle sizes of the M. oleifera leaf extract in which ibuprofen was dissolved and of the nanocomposite loaded with ibuprofen were 166.6 nm and 155.8 nm, respectively. The experiment was conducted in the laboratories of the College of Education for Pure Sciences, Department of Life Sciences, using 40 rats: 10 males used for fertilization only and 30 females. The pregnant rats were divided into six groups of five animals each, and the experiment ran from October to February; the animals were dosed from the first to the seventh day of pregnancy to determine the implantation sites. The study concluded that concentrations of free ibuprofen that are otherwise non-influential have a clear effect on implantation in pregnant female rats once loaded onto the green nanocomposite. Both the control and treatment groups showed similar outcomes on the seventh day of pregnancy, with implantation and formation of decidual tissue in the endometrium beginning in the anti-mesometrial region, a sign of successful implantation at the time the decidual tissue is at its highest level of development.

Rajwan Hasan Al-Karawi, Alaa Hussein Al-Safy, Kiaser Abdulsajjad M. Hussain
Short Video Intelligent Recommendation Algorithm Technology Based on Artificial Intelligence in the Context of Digitalization

With the rapid growth of the user base, the volume of short videos is also exploding. Faced with a large amount of video content, users often feel overwhelmed and find it difficult to locate the content they want. User behaviors on short video platforms, such as browsing, liking, commenting, and sharing, were collected in real time using data collection tools such as site logs or application interfaces. Visual features were extracted from each video frame using convolutional neural networks, and a personalized recommendation list was built from user interests and the characteristics of the video content. On this basis, deep learning-based multi-task learning methods and attention mechanism models were used to jointly model user interests and video content, improving recommendation accuracy and the level of personalization. When the user sample size was 1000, the actual number of clicks was 850, the recommended content numbered 1200 items, and the users’ actual interest rate was 70.83%. This article presents an artificial intelligence (AI)-based intelligent short video recommendation algorithm in the context of digitalization; through intelligent recommendation, it helps users find video content that matches their preferences.
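The matching step between user interests and video features can be sketched as a content-based ranking by cosine similarity. This is a simplification of the paper’s multi-task/attention approach; the feature vectors and video names below are hypothetical stand-ins for pooled CNN embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user_vec, videos, k=2):
    """Rank candidate videos by similarity to the user-interest vector."""
    ranked = sorted(videos, key=lambda item: cosine(user_vec, item[1]), reverse=True)
    return [vid for vid, _ in ranked[:k]]

# Hypothetical feature vectors (e.g. pooled CNN frame embeddings)
user = [0.9, 0.1, 0.4]
catalog = [("cooking_demo", [0.8, 0.2, 0.5]),
           ("news_clip",    [0.1, 0.9, 0.2]),
           ("travel_vlog",  [0.7, 0.0, 0.6])]
print(recommend(user, catalog))   # most similar videos first
```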

Ruiqiang Li
Effect of Volatile Oil Extract of Two Plants (Lavandula angustifolia, Cupressus sempervirens) on the Vitality of Visceral Leishmaniasis

Visceral leishmaniasis, caused by the parasite Leishmania donovani, is one of the deadliest human diseases and causes serious infections all over the world. The parasite needs two hosts to complete its life cycle: an invertebrate host, the female sandfly, and a vertebrate host, humans or other mammals. Chemotherapy treatments for leishmaniasis have side effects and require a long course of treatment. Therefore, many studies have turned to medicinal plants as an alternative because of their therapeutic properties. Medicinal plants have been widely used to treat diseases such as leishmaniasis, malaria, and disorders of the immune system, and have proven highly effective because of the biologically active compounds they contain. The aim of this study is to determine the inhibitory effect on the L. donovani parasite, using the MTT assay, of the volatile oils of Lavandula angustifolia and Cupressus sempervirens and of the synergistic combination of L. angustifolia and C. sempervirens oils, at concentrations of 1000, 500, 250, 125, 62.5, and 31.25. The results showed a clear difference between the treatments tested, with the two highest lavender concentrations, 1000 and 500, giving the strongest effect.

Mariam Salah Abdul Hassan Al-Tae, Yarub Modhar Al-Qazwini, Hanan Zwair Mikhlif
Green Synthesis of Tin Oxide Nanoparticles for the Removal of Mefenamic Acid from Aqueous Solutions

This study focuses on producing SnO2 nanoparticles by an environmentally friendly (green) method, with Portulaca oleracea (P.O) extract acting as a reducing and capping agent instead of chemical reagents. Field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), and X-ray diffraction were then used to determine the grain size and crystal structure. A band gap energy of 4.2 eV was determined from the UV–Vis spectrum; this blueshift relative to bulk SnO2 (3.6 eV) may be due to the quantum confinement effect. Mefenamic acid (MFA) in aqueous solution was used to examine the adsorption behavior of the SnO2 nanocrystals. To identify the optimal adsorption conditions, several variables were examined, including pH, MFA concentration, temperature, contact time, and adsorbent dose. When the MFA concentration and SnO2/P.O dosage were 30 mg/L and 0.6 g, respectively, the maximum removal efficiency of 97% was reached within 30 min. The MFA adsorption mechanism is consistent with the Freundlich isotherm model.
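The Freundlich model takes the form q_e = K_F · C_e^(1/n), which linearizes to log q_e = log K_F + (1/n) log C_e, so its parameters can be recovered by a least-squares fit on log–log data. The equilibrium data below are synthetic, generated from known parameters to show the fit; they are not the paper’s measurements.

```python
import math

def fit_freundlich(Ce, qe):
    """Fit q_e = K_F * C_e**(1/n) via least squares on log-log data."""
    xs = [math.log10(c) for c in Ce]
    ys = [math.log10(q) for q in qe]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return 10 ** intercept, slope          # K_F and 1/n

# Synthetic equilibrium data generated from K_F = 2.0, 1/n = 0.5
Ce = [1.0, 4.0, 9.0, 16.0]
qe = [2.0 * c ** 0.5 for c in Ce]
KF, inv_n = fit_freundlich(Ce, qe)
print(round(KF, 3), round(inv_n, 3))       # -> 2.0 0.5
```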

Hiba Ali Hamzah, Aula M. Al Hindawi, Fouad Fadhil Al-Qaim
Removal of Sulfate Pollutants from Water Using Zinc/Aluminum Layers

The nanostructures were prepared from zinc chloride and aluminum chloride in a 3:1 ratio by the co-precipitation method and used to adsorb sulfate ions (SO42−) from contaminated water. The layers were characterized by FTIR, XRD, and AFM. The results showed that the amount of adsorption increases with temperature, with the highest uptake at 45 °C. Adsorption also increased with acidity: of the pH values studied (pH = 2, 4, 7, 9, 12), the highest uptake was at pH = 2. The adsorption isotherms were of the S2 type according to the Giles classification. The sulfate removal percentage was also calculated as a function of contact time, with the highest removal reached at 250 min. The thermodynamic functions (∆H, ∆G, ∆S) were calculated for the adsorption of sulfate ions, and the results showed that the reaction is endothermic and non-spontaneous.
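The thermodynamic conclusion follows from ∆G = ∆H − T∆S: an endothermic process has ∆H > 0, and a non-spontaneous one has ∆G > 0. The sketch below computes ∆G across the studied temperature range; the ∆H and ∆S values are hypothetical placeholders, not the paper’s results.

```python
def gibbs(dH_kJ, dS_J, T_K):
    """Gibbs free energy change (kJ/mol) from deltaG = deltaH - T*deltaS."""
    return dH_kJ - T_K * dS_J / 1000.0     # deltaS given in J/(mol*K)

# Hypothetical values for an endothermic, non-spontaneous adsorption
dH, dS = 25.0, -40.0                       # kJ/mol, J/(mol*K)
for T in (298.15, 308.15, 318.15):         # 25, 35, 45 degrees C
    dG = gibbs(dH, dS, T)
    print(f"T = {T:.2f} K  dG = {dG:+.2f} kJ/mol")
# dH > 0 (endothermic) and dG > 0 at every temperature (non-spontaneous)
```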

Hamida Idan Salman, Nahlah Jaber Hussein
Water Purification from Phosphate Ions Using Zinc Aluminum Nanoparticle

Nanostructures were prepared from zinc nitrate and aluminum nitrate in a 1:2 ratio by the co-precipitation method and used to adsorb phosphate ions from polluted water. The layers were characterized by AFM, FTIR, and XRD. The effect of temperature was studied, and the results showed that the amount of adsorption increased with increasing temperature. The effect of pH on phosphate adsorption was also studied, and adsorption was found to increase with increasing acidity. The adsorption isotherm was of the L1 type according to the Giles classification, and the thermodynamic functions of the reaction were calculated.

Hawraa Salman Kadhim, Hamida Idan Salman
Combining Particle Swarm Optimization with Visual Odometry for Autonomous Vehicle Path Planning

This research proposes an integrated methodology for vehicle navigation and motion analysis using computer vision and path planning techniques, automating the mechanical and labor-intensive task of manual weed management. To reduce agricultural dependency on herbicides, improving sustainability and reducing environmental impact, the key components of the proposed pipeline are feature detection, visual odometry, Inertial Measurement Unit (IMU) integration, Differential Global Positioning System (DGPS) fusion, 3D mapping, and Particle Swarm Optimization (PSO)-based path planning. A mobile robot is designed to provide the best path solution for Uttarakhand’s hilly terrain by combining robotic spraying technology with enhanced manual functions. To improve vehicle localization accuracy, optimize navigation courses, and ease obstacle avoidance in dynamic situations, information is taken from previous and future frames. By mapping positions within the camera’s coordinate system and employing PSO-based path planning, the system ensures accurate decision-making to find the optimal route for weed spraying.
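The PSO path planning step can be sketched minimally: particles optimize a single intermediate waypoint between a start and a goal, with a penalty when the waypoint falls inside an obstacle disc. Only the waypoint (not the path segments) is checked against the obstacle, a deliberate simplification; coordinates, swarm parameters, and the penalty value are all hypothetical.

```python
import math, random

def path_cost(wp, start, goal, obstacle, radius):
    """Length of start->wp->goal plus a penalty if wp enters the obstacle disc."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    penalty = 1000.0 if dist(wp, obstacle) < radius else 0.0
    return dist(start, wp) + dist(wp, goal) + penalty

def pso(cost, dim=2, particles=30, iters=200, seed=7):
    """Canonical PSO: inertia + cognitive + social velocity update."""
    rng = random.Random(seed)
    X = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    P = [x[:] for x in X]                      # personal bests
    g = min(P, key=cost)[:]                    # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * r1 * (P[i][d] - X[i][d])
                           + 1.5 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if cost(X[i]) < cost(P[i]):
                P[i] = X[i][:]
                if cost(P[i]) < cost(g):
                    g = P[i][:]
    return g

start, goal = (0.0, 0.0), (10.0, 0.0)
obstacle, radius = (5.0, 0.0), 2.0             # disc blocking the straight line
wp = pso(lambda w: path_cost(w, start, goal, obstacle, radius))
print(path_cost(wp, start, goal, obstacle, radius))   # close to the 10-unit minimum
```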

Ritika Mehra, Govind Panwar, Vikas Thapa
Early Diagnosis of Parkinson Disease Approaches: Methods and Challenges

The insidious onset and multifaceted symptomatology of Parkinson’s disease, a crippling neurological disorder, pose a significant challenge for early detection and management. This research addresses these gaps by employing machine learning techniques to analyze clinical data. In this study, we design a user-friendly and effective model for the early detection of Parkinson’s disease using voice data as the primary diagnostic source. The methodology includes careful data preprocessing, followed by feature extraction, model training, and rigorous evaluation, ensuring that the resulting diagnostic tool is robust and accurate. By bringing together the knowledge from prior studies and modern technology, our work supports proactive detection of Parkinson’s. We present the methodology and a vision for a future in which timely intervention and correct diagnosis improve the lives of those affected by this disorder. Such interdisciplinary efforts will pave the way to a brighter, healthier tomorrow for individuals battling Parkinson’s disease.
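The preprocess-then-classify workflow described above can be sketched with z-score standardization followed by a simple nearest-centroid classifier. The (jitter, shimmer) voice measurements and labels below are hypothetical toy data, and the paper’s actual model is not specified here.

```python
import math

def zscore_fit(rows):
    """Per-feature mean and standard deviation for z-score scaling."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return means, stds

def zscore_apply(row, means, stds):
    return [(x - m) / s for x, m, s in zip(row, means, stds)]

def nearest_centroid(train, labels, sample):
    """Classify by distance to the per-class mean of standardized features."""
    means, stds = zscore_fit(train)
    scaled = [zscore_apply(r, means, stds) for r in train]
    centroids = {}
    for lab in set(labels):
        rows = [r for r, l in zip(scaled, labels) if l == lab]
        centroids[lab] = [sum(c) / len(c) for c in zip(*rows)]
    s = zscore_apply(sample, means, stds)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(s, centroids[lab])))

# Hypothetical (jitter %, shimmer dB) measurements
X = [[0.3, 0.2], [0.4, 0.3], [1.2, 0.9], [1.4, 1.1]]
y = ["healthy", "healthy", "pd", "pd"]
print(nearest_centroid(X, y, [1.3, 1.0]))   # -> pd
```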

Rosevir Singh, Abhishek Kumar, Paurav Goel
Predictive Modeling for Early Detection of Lung Cancer: A Machine Intelligence Approach

Early detection of lung cancer can markedly improve survival and therapeutic outcomes. However, the disease is complex and is mostly detected when already at an advanced stage, which causes high mortality rates. Radiotherapy is commonly used to treat lung cancer, a highly prevalent kind of cancer; however, dose-limiting toxicity, including radiation pneumonitis (RP), remains a problem. Conventional analytical methods for RP prediction that focus on individual indicators are unsuitable given the interdependent nature of cancer and treatment characteristics. This study determines which machine learning algorithm predicts lung cancer probability most accurately. The following classifiers were used to carry out the prediction: Naive Bayes, decision trees, bag of trees, logistic regression, and artificial neural network multilayer perceptron. The results show good potential, with each algorithm reaching around 97% accuracy; the most accurate proved to be the multilayer perceptron, with 98.4% accuracy. In summary, this analysis emphasizes the significance of machine learning for early diagnosis and forecasting of lung cancer. Using accurate computational techniques such as the multilayer perceptron, clinicians can identify the disease at an early stage, improving patient outcomes and potentially reducing fatalities.
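The model-comparison step amounts to scoring each classifier’s predictions against held-out labels and selecting the best. The predictions and labels below are hypothetical toy data, not the paper’s results; the accuracies above came from the study’s own dataset.

```python
def accuracy(pred, truth):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

# Hypothetical held-out labels and predictions from three models
truth = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
model_preds = {
    "naive_bayes":   [1, 0, 1, 0, 0, 1, 0, 1, 0, 1],
    "decision_tree": [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
    "mlp":           [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
}
scores = {name: accuracy(p, truth) for name, p in model_preds.items()}
best = max(scores, key=scores.get)
print(best, scores[best])   # -> mlp 1.0
```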

A. Anto Sagaya Priscilla, R. Balamanigandan
Leveraging a Smart AI-Controlled gRNA in Genome Editing for Identification and Replacement of Genetic Mutations

The field of genetic engineering has witnessed a paradigm shift with the advent of Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) technology. CRISPR empowers researchers with the remarkable ability to precisely modify genomes, offering immense potential for treating genetic diseases. However, achieving high levels of accuracy and specificity in gene editing remains a critical challenge. This paper explores the burgeoning field of AI-controlled guide RNA (gRNA) design, a revolutionary approach that harnesses the power of artificial intelligence (AI) to optimize gRNA selection and enhance CRISPR genome editing outcomes. By integrating AI algorithms for gRNA design, researchers aim to revolutionize personalized medicine and targeted therapies through a more precise and efficient approach to identifying and replacing detrimental genetic mutations.
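An AI gRNA-design model learns to score candidate guides; as a stand-in for such a model, the toy heuristic below enumerates 20-nt protospacers adjacent to an NGG PAM (the SpCas9 convention) and scores them by GC content and poly-T avoidance. This is not the paper’s model; the scoring rules, weights, and DNA sequence are all illustrative assumptions.

```python
def gc_content(seq):
    """Fraction of G and C bases in a sequence."""
    return sum(seq.count(b) for b in "GC") / len(seq)

def score_grna(guide):
    """Toy heuristic: favor 40-60% GC; penalize poly-T (Pol III terminator)."""
    score = 1.0
    if not 0.40 <= gc_content(guide) <= 0.60:
        score -= 0.5
    if "TTTT" in guide:
        score -= 0.5
    return score

def candidates_with_pam(dna, length=20):
    """Enumerate 20-nt protospacers immediately 5' of an NGG PAM."""
    out = []
    for i in range(length, len(dna) - 2):
        if dna[i + 1:i + 3] == "GG":          # NGG PAM at positions i..i+2
            out.append(dna[i - length:i])
    return out

# Hypothetical target region
dna = "ATGCGTACGTTAGCCGTACGATCGGATCCGTACGTAGCTAGCGG"
guides = candidates_with_pam(dna)
best = max(guides, key=score_grna) if guides else None
print(best)
```

A learned model would replace `score_grna` with a predictor trained on measured on-target efficiency and off-target data.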

Debanksh Guha, Divya Kumar Avtaran, Rahul Lenka, Tiansheng Yang, Lu Wang, Rajkumar Singh Rathore
Leveraging GANs for Adaptive Network Intrusion Detection

This study tackles the urgent issue of network safety in today’s ever-changing cybersecurity landscape. Conventional intrusion detection systems (IDS) often cannot keep pace with rapidly evolving cyberthreats. To strengthen IDS against such attacks, this research introduces a new approach that adds generative adversarial networks (GANs) to the detection pipeline. Using the KDD Cup 1999 dataset, a common benchmark in this field, the authors perform thorough data preparation and exploration. They build a GAN architecture with a generator and discriminator, letting the system learn to distinguish real from synthetic patterns in network traffic. The GAN-based IDS is evaluated on the dataset using key measures such as F1-score, precision, accuracy, and recall at different decision thresholds. The study contributes to the field by demonstrating how GANs can be applied to network security and stresses the need for unconventional approaches to strengthen intrusion detection systems. The findings pave the way for better cybersecurity, highlighting the need for defense systems that can adapt to novel attack tactics.
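The threshold-sweep evaluation described above reduces to computing precision, recall, and F1 from detector scores at each cutoff. The discriminator scores and labels below are hypothetical; the function shows the standard metric definitions, not the paper’s GAN itself.

```python
def prf1(scores, labels, threshold):
    """Precision, recall, and F1 for 'attack' predictions at a score threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical discriminator scores (label 1 = attack, 0 = normal traffic)
scores = [0.95, 0.80, 0.65, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0  ]
for t in (0.5, 0.7):
    p, r, f = prf1(scores, labels, t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

Raising the threshold trades recall for precision, which is exactly the decision-point comparison the study reports.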

Ruchi Bhatt, Gaurav Indra
Streamlining Task Management: Building a Modern To-Do List Application with MERN Stack

This paper presents the development of a to-do list web application using modern web technologies, including the React JS, Express JS, and Node JS frameworks and MongoDB Atlas. The to-do list app is a digital tool for effective task management that provides users with a unified experience across platforms and devices. Using the capabilities of React JS, the application provides a responsive and intuitive UI, enabling users to create, edit, prioritize, and organize tasks easily. On the backend, the Express JS and Node JS frameworks support a robust server-side architecture, ensuring smooth communication between the client-side and server-side components. To store and manage task data, MongoDB Atlas, a fully managed cloud database service, is used; it offers the flexibility and scalability to meet growing user demands (What a to-do: studies of task management toward the design of a personal task list manager (2004) [1]). Through the integration of these technologies, the to-do list web application aims to streamline workflows and improve productivity for users across diverse domains.

N. Manasa, P. Bhavya Sri, S. Suhasini
Shielding the Vaults: Cyber Monitoring Strategies for Mitigating Banking Threats, Frauds, and UPI Hacking Risks

As banking systems move rapidly into the digital age, the need for cybersecurity measures has become critical. This paper highlights the need for a comprehensive online monitoring strategy to combat multiple threats, including fraud, hacking, and the risks associated with the Unified Payments Interface (UPI). By examining current trends, legal frameworks, and emerging technologies, it aims to provide insight into effective monitoring mechanisms and to highlight the importance of enforcement efforts that empower financial institutions to withstand cyber threats.

Sayandeep Dey, Srinath Neogi, Tiansheng Yang, Rajkumar Singh Rathore, Soumayan Ghosh, Nilamadhab Mishra
Design and Implementation of an Intelligent Training System Based on Computer Vision and Machine Learning Algorithms

Intelligent training systems have shown great potential in improving the efficiency and quality of battlefield rescue. This article designs and implements an intelligent training system based on these advanced technologies, tailored to the specific needs of battlefield rescue. The system design follows the principles of modularity, user interaction, and scalability, ensuring efficiency, usability, and future development potential. Computer vision technology is adopted for high-precision simulation of the battlefield environment, combined with a support vector machine (SVM) algorithm to provide personalized training scenarios and decision support. The performance evaluation shows that both the accuracy of the computer vision component and the efficiency of the SVM algorithm achieved good results; the SVM algorithm performed particularly well in training and prediction. The experimental results show that the proposed system effectively improves trainees’ battlefield adaptability and decision-making ability: the task completion rate of trainees increased from 75% to 93.9%, and user satisfaction is generally high, with average satisfaction exceeding 7.5 points. The construction and application of the system not only improves the skills and reaction ability of rescuers but also strengthens their decision-making under the varied, complex conditions of the battlefield.
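The SVM core of such a system can be sketched as a linear SVM trained by sub-gradient descent on the hinge loss. The 2D toy data, learning rate, and regularization strength below are hypothetical; the paper’s actual features come from its computer vision pipeline.

```python
import random

def train_linear_svm(X, y, eta=0.1, lam=0.01, epochs=200, seed=0):
    """Hinge-loss sub-gradient descent for a linear SVM; labels y must be -1 or +1."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        order = list(range(len(X)))
        rng.shuffle(order)
        for i in order:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1 - eta * lam) * wj for wj in w]       # regularization shrinkage
            if margin < 1:                               # point violates the margin
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy linearly separable data: class +1 clusters high, class -1 low
X = [[2.0, 3.0], [3.0, 3.5], [2.5, 4.0], [-2.0, -3.0], [-3.0, -2.5], [-2.5, -4.0]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])   # all training points classified correctly
```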

Daning Wang, Wen Liu, Qiang Zhou, Shufang Yuan, Bowen Cui, Yongsheng Qin
Metadata
Title
Proceedings of Fourth International Conference on Computing and Communication Networks
Editors
Akshi Kumar
Abhishek Swaroop
Pancham Shukla
Copyright Year
2025
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-9632-44-2
Print ISBN
978-981-9632-43-5
DOI
https://doi.org/10.1007/978-981-96-3244-2