
2024 | Book

High Performance Computing, Smart Devices and Networks

Select Proceedings of CHSN 2022

Edited by: Ruchika Malhotra, L. Sumalatha, S. M. Warusia Yassin, Ripon Patgiri, Naresh Babu Muppalaneni

Publisher: Springer Nature Singapore

Book series: Lecture Notes in Electrical Engineering


About this Book

This book comprises the proceedings of the 3rd International Conference on Computer Vision, High-Performance Computing, Smart Devices, and Networks (CHSN 2022). It highlights high-quality research articles in machine learning, computer vision, and networks, and gives the reader an up-to-date picture of the state-of-the-art connections between computational intelligence, machine learning, and IoT. The papers in this volume were peer-reviewed by experts in the related areas. The book will serve as a valuable reference resource for academics and researchers across the globe.

Table of Contents

Frontmatter
Analysis of Prevalence of Flat Foot in Primary School Children

Flat foot is a common health condition among both children and adults. This paper analyses the relationship of age and gender with the prevalence of flat feet among primary school children. The results are based on 424 primary school children (254 males and 170 females) between the ages of 6 and 10 years in Delhi, India. A foot imprinter plate, together with physical and photographic assessment, was used to diagnose the presence of flat feet. 118 children were diagnosed with completely flat feet and 176 with partially flat feet; out of every five children, three were either completely or partially flat-footed (69.3%). The results showed a significant association between gender and flat foot. It was concluded that flat-foot assessment should be made available to children and parents at an early age, to prevent the condition from developing into a serious health problem and a hindrance in sports activities.
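The reported gender association can be checked with a chi-square test of independence. The sketch below uses the abstract's marginal totals (254 boys, 170 girls, 294 flat-foot cases) but hypothetical cell counts, since the gender-wise breakdown is not given here.

```python
# Chi-square test of independence for a 2x2 gender-by-flat-foot table.
# The cell counts below are hypothetical; only the marginal totals
# (254 boys, 170 girls, 294 flat-foot cases) come from the abstract.

def chi_square_2x2(table):
    """table = [[a, b], [c, d]] of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = male/female, cols = flat foot / normal arch.
observed = [[190, 64], [104, 66]]
chi2 = chi_square_2x2(observed)
# With 1 degree of freedom, chi2 > 3.84 indicates significance at p < 0.05.
print(round(chi2, 2))
```

With these illustrative counts the statistic exceeds the 3.84 critical value, which is the kind of evidence behind the "significant association" claim.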

Subodh Mor, Shikha N. Khera, G. C. Maheshwari
Evolutionary, Protein–Protein Interaction (PPI), and Domain–Domain Analyses in Huntington’s Disease

Mutations play a vital role in human neurodegenerative diseases such as Huntington's, Alzheimer's, and Parkinson's. A DNA section known as a CAG trinucleotide repeat is involved in the Huntingtin (HTT) mutation that leads to Huntington's disease (HD). Evolutionary, protein–protein interaction (PPI), and domain–domain interaction analyses provide new insights into neurodegenerative disorders and enable the prediction of new disease-causing proteins and their functionality. The main objective of this study is to identify the top proteins associated with HTT. The study is divided into three phases. In the first phase, evolutionary analysis was implemented to identify similarities at the cellular and molecular levels among Huntington's disease-causing proteins. In the second phase, PPI analysis was applied to build an HTT network, which is used to identify existing associations and predict new ones for HTT. Finally, domain–domain analysis was used to analyze the domains of similar proteins together with the newly identified HTT-associated proteins. This study reveals that brain-derived neurotrophic factor (BDNF), α-Adaptin, and butyrylcholinesterase (BChE) are new proteins directly or indirectly associated with Huntington's disease. This integrated analysis may ultimately support personalized and precision medicine for other chronic diseases as well.

Sai Gopala Swamy Gadde, Kudipudi Pravallika, Kudipudi Srinivas
A Novel Res + LSTM Classifier-Based Tomato Plant Leaf Disease Detection Model with Artificial Bee Colony Algorithm

The world economy depends heavily on agriculture. In recent years, many plants have been affected by various diseases, which affects countries' economic growth; early detection of plant diseases can help increase farmers' profits. Tomatoes are among the most widely consumed vegetables in the world. Plant diseases have a catastrophic influence on food production safety and decrease both the quantity and quality of agricultural products. Tomato plants are affected by different leaf spot diseases such as grey spot, mosaic, leaf spot, brown spot, and Alternaria, as well as rust and pests, which can affect growth at the various growth stages. Hence, identifying the leaf spot diseases that occur in tomato plants is important for increasing the plants' survival rate. Manual inspection of tomato leaf diseases is time-consuming and depends heavily on expert knowledge. To address this problem, a novel hybrid deep learning technique is developed for detecting leaf spots on tomato plants. Tomato leaf images of normal and affected leaves are collected from real-world and traditional benchmark datasets. After the leaves are segmented using the U-net model, the segmented images are passed to the classification phase. Leaf spots are then classified using a residual network combined with long short-term memory (ResNet-LSTM), whose parameters are tuned with the artificial bee colony (ABC) algorithm to achieve an effective classification rate. Finally, the severity level is computed. The experimental results demonstrate the effectiveness of the proposed model in detecting leaf spot in tomato plants.

Alampally Sreedevi, Manike Chiranjeevi
The Development of Advanced Deep Learning-Based EoR Signal Separation Techniques

The weak EoR signal is submerged in strong foreground radiation interference. Classic foreground removal methods assume that the foreground spectrum is smooth, but complex instrumental effects alter the spectral structure, so the signal cannot be detected accurately. In this paper, a deep learning network, a CNN-LSTM model, is proposed to separate the EoR signal and to improve the utilization of computing resources for massive observational data. Based on simulation data for SKA1-LOW, a CNN-LSTM fusion model is constructed to reconstruct the EoR signal. Experimental results show that, compared with traditional methods including polynomial fitting and the continuous wavelet transform, the EoR signals detected by the proposed deep learning model achieve better quantitative evaluation indexes in terms of SNR and the Pearson correlation coefficient. This provides a new way to explore the research field of the EoR.
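The polynomial-fitting baseline that the paper compares against can be sketched on synthetic data: fit a low-order polynomial to the smooth power-law foreground in log-log space and subtract it, leaving the fluctuating EoR-like residual. The frequency band, amplitudes, and signal shape below are illustrative assumptions, not the paper's simulation setup or its CNN-LSTM model.

```python
import numpy as np

# Classical baseline sketch: model the smooth foreground as a low-order
# polynomial in log-frequency, subtract it, and keep the residual as the
# recovered signal. All quantities here are synthetic stand-ins.
rng = np.random.default_rng(0)
freq = np.linspace(50.0, 200.0, 200)            # MHz, SKA1-LOW-like band
foreground = 1e3 * (freq / 100.0) ** -2.7       # smooth power-law foreground
eor = 1e-2 * np.sin(2 * np.pi * freq / 7.0)     # weak oscillatory stand-in signal
noise = 1e-3 * rng.standard_normal(freq.size)
spectrum = foreground + eor + noise

# Fit a 3rd-order polynomial in log-log space and subtract the smooth part.
coeffs = np.polyfit(np.log10(freq), np.log10(spectrum), deg=3)
smooth = 10 ** np.polyval(coeffs, np.log10(freq))
residual = spectrum - smooth

corr = np.corrcoef(residual, eor)[0, 1]         # Pearson r against the truth
print(round(corr, 3))
```

The Pearson correlation between the residual and the injected signal is the same figure of merit the paper uses to compare methods.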

S. Pradeep, C. V. P. R. Prasad, Ch Ruchitha
Pediatric Pneumonia Diagnosis Using Cost-Sensitive Attention Models

Pediatric pneumonia is a medical condition in which the air sacs of the lungs fill with fluid. In recent years, chest X-rays have proved a better alternative to traditional diagnosis methods. Medical experts examine chest X-ray images to detect the presence of pneumonia; however, the low radiation levels used for X-rays in children make identification more challenging and prone to human error. The increasing use of computer-aided diagnosis in the medical field, especially deep learning architectures such as Convolutional Neural Networks (CNNs) for images, has helped tackle this issue. Our work proposes a Convolutional Block Attention Module (CBAM) attached to the end of pretrained ResNet152V2 and VGG19 networks with cost-sensitive learning. The weighted average ensemble uses weights calculated as a function of the precision, recall, F1-score, and AUC of each model: these values are concatenated as a vector and passed through a Tanh activation function, and the sum of the elements of this vector forms the model's weight. Used in the weighted average classifier, these weights result in an accuracy of 96.79%, precision of 96.48%, recall of 98.46%, F1-score of 97.46%, and an AUC of 96.24% on the pediatric pneumonia dataset. The proposed architecture outperforms existing deep CNN models trained with and without cost-sensitive learning for the task at hand. We expect our proposed architecture to assist in real-time pediatric pneumonia diagnosis.
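The weighting scheme described above (metrics concatenated, passed through tanh, then summed) can be sketched directly. The metric values and model names below are illustrative placeholders, not the paper's reported numbers.

```python
import math

# Sketch of the described weighting scheme: each model's weight is the sum
# of tanh-transformed validation metrics (precision, recall, F1, AUC).
# All metric values here are illustrative, not the paper's results.

def model_weight(precision, recall, f1, auc):
    return sum(math.tanh(m) for m in (precision, recall, f1, auc))

def weighted_average_ensemble(probs_per_model, weights):
    total = sum(weights)
    return [
        sum(w * p[i] for w, p in zip(weights, probs_per_model)) / total
        for i in range(len(probs_per_model[0]))
    ]

# Two hypothetical models and their predicted pneumonia probabilities
# for two test images.
w_resnet = model_weight(0.96, 0.98, 0.97, 0.96)
w_vgg = model_weight(0.94, 0.95, 0.94, 0.93)
ensemble = weighted_average_ensemble([[0.9, 0.2], [0.8, 0.4]], [w_resnet, w_vgg])
print([round(p, 3) for p in ensemble])
```

Because tanh is monotone, the better-scoring model receives the larger weight, so its probabilities dominate the blend.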

J. Arun Prakash, C. R. Asswin, K. S. Dharshan Kumar, Avinash Dora, V. Sowmya, Vinayakumar Ravi
An Integrated Deep Learning Deepfakes Detection Method (IDL-DDM)

Deepfakes have attracted enormous attention in recent times owing to the threat posed by video manipulation. Such manipulation via intelligent algorithms creates increasingly critical circumstances, as the integrity of electronic media has become a challenging concern. Furthermore, inauthentic content is being composed and spread across social media platforms, and detecting deepfake videos is becoming harder. Various detection methods for deepfakes have been proposed, yet the accuracy of such detection models remains an open issue, particularly for the research community. We propose an integrated deep learning deepfakes detection model, IDL-DDM, to overcome the ongoing criticism, i.e., the difficulty of identifying fake videos accurately. The proposed IDL-DDM comprises side-by-side deep learning algorithms, namely a Multilayer Perceptron and a Convolutional Neural Network (CNN). In addition, a Long Short-Term Memory (LSTM) stage is applied after the CNN in order to allow sequential processing of the data and to capture learning dependencies. Using this learning algorithm, several facial region characteristics such as the eyes, nose, and mouth are extracted and transformed into numerical form in order to identify video frames more precisely. The experiments were performed on different datasets, the Deepfakes Detection Challenge Dataset (DFDC) and unseen (YouTube Live) videos, which comprise a wealth of original and fake videos. The experimental results show higher achievement for the IDL-DDM in contrast to previous similar works.

Warusia Yassin, Azwan Johan, Zuraida Abal Abas, Mohd Rizuan Baharon, Wan Bejuri, Anuar Ismail
Melanoma Detection Using Convolutional Neural Networks

The prevalence of skin cancer is a major social issue. Melanoma, also known as malignant melanoma, is the most dangerous type of skin cancer and spreads most aggressively, accounting for roughly 75% of skin cancer deaths. Detecting melanoma as early as possible and receiving therapy with minimal surgery are the best ways to beat it. The proposed model quickly categorizes melanoma by utilizing efficient higher-resolution convolutional neural networks. Using the efficient MobileNetV2 architecture, an automated melanoma detection model can be developed to identify skin lesion images. The MobileNetV2 architecture is extremely lightweight and can be utilized to extract rich features. The HAM10000 dataset has been used for evaluation. The network uses a global average pooling layer connected to fully connected layers. The proposed system can be used to detect whether a lesion is melanoma or not. The model has an accuracy rate of 85%.

Venkata Sai Geethika Avanigadda, Ravi Kishan Surapaneni, Devika Moturi
Reinforcement Learning Based Spectrum Sensing and Resource Allocation in WSN-IoT Smart Applications

There is tremendous demand for resources from the various emerging IoT applications, and this demand must be fulfilled. The large-scale, resource-constrained IoT ecosystem needs novel and robust techniques for managing resources. Optimal resource provisioning in the IoT ecosystem must handle the mapping of requests to resources, which is hard to achieve because IoT requests and resources are dynamic and heterogeneous. Artificial intelligence (AI) therefore has a vital role in the IoT network. Because the changes are dynamic, global optimization cannot be achieved owing to the classification of wireless signals and high interference. In this research, spectrum selection is integrated with spectrum access through reinforcement learning with stochastic reward measures for efficient resource allocation in WSN-IoT applications. A state–action–reward–state–action approach with the Gittins index, named SARSA-GI, is employed: the state–action–reward–state–action model is developed with an energy-efficient approach for optimizing the channel, and the Gittins index is designed to reduce delay and enhance the accuracy of spectrum access. The simulation results are compared with two state-of-the-art methods, achieving 91.4% reliability, 88.4% transmission probability, 93.4% throughput, 66.8% collision performance, and 57.2% signal-to-interference-plus-noise ratio.

J. V. N. Raghava Deepthi, Ajoy Kumar Khan, Tapodhir Acharjee
Planning and Construction of a Quantum Model of the SHA-1 Algorithm Using IBM’s Qiskit

The hash produced by a cryptographic hash function is inherently impossible to reverse-engineer or duplicate, which offers more effective protection against hacking attempts. In a quantum environment, attacks on information systems would look significantly different, calling for a complete overhaul of current cryptographic practices. For the purpose of constructing a quantum-based Secure Hash Algorithm (SHA-1) from the ground up, a complete guide is provided to the design of quantum addition, along with the associated circuits for XOR, AND, NOT, OR, MOD, CARRY, and SUM. To boost the efficiency of the standard SHA-1 algorithm, q-SHA-1 is created in a Qiskit-based quantum environment. Experimental results from a wide range of conditions are compared to evaluate the relative efficacy of the classical and quantum SHA-1 algorithms. The execution time, however, turns out to be highly dependent on the particular CPU being used; a more powerful processor should cut down the overall execution time.
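The CARRY and SUM blocks mentioned above have classical Boolean counterparts that make the construction easy to follow: SUM is a XOR cascade (implemented with CNOTs on qubits) and CARRY is a majority function (implemented with Toffoli gates). The sketch below is plain Python over bits, offered as an assumed classical analogue rather than Qiskit code.

```python
# Classical counterparts of the SUM and CARRY building blocks used in
# quantum ripple-carry adders, written over plain bits rather than qubits.

def SUM(c, a, b):
    # Sum bit: c XOR a XOR b, matching the quantum SUM block's CNOT cascade.
    return c ^ a ^ b

def CARRY(c, a, b):
    # Carry-out: majority(c, a, b), matching the Toffoli-based CARRY block.
    return (a & b) | (c & (a ^ b))

def ripple_add(x, y, n_bits):
    """Add two n-bit integers bit by bit, as a ripple-carry adder would."""
    carry, result = 0, 0
    for i in range(n_bits):
        a, b = (x >> i) & 1, (y >> i) & 1
        result |= SUM(carry, a, b) << i
        carry = CARRY(carry, a, b)
    return result | (carry << n_bits)

print(ripple_add(11, 6, 4))   # 11 + 6 → 17
```

In the quantum circuit the same truth tables must be realized reversibly, which is why Toffoli and CNOT gates take the place of the AND/XOR combinations above.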

Sandip Kanoo, Sumit Biswas, Prodipto Das
Deep Learning-Based Automatic Speaker Recognition Using Self-Organized Feature Mapping

Automatic speaker recognition (ASR) plays a major role in many applications, including forensics, dictionary learning, voice verification, and biometric systems. The performance of these applications depends on the efficiency of the ASR system. However, conventional ASR systems were developed using standard machine learning algorithms, which resulted in low recognition performance. This work therefore focuses on the development of a deep learning-based ASR system. Initially, voice features are extracted using Mel-frequency cepstral coefficients (MFCC), which analyze the spectral properties of the voice samples. Then, a self-organized feature map (SOFM) is applied to reduce the number of features, selecting the best ones using Euclidean similarity between features. Further, a deep learning convolutional neural network (DLCNN) model is used to train on the features and form the feature database. Finally, a test voice sample is applied to the trained DLCNN model, which recognizes the speaker. Simulations carried out on Anaconda (TensorFlow) showed that the proposed ASR-Net system achieves superior recognition performance compared to conventional systems.

K. Preethi, C. V. P. R. Prasad
Machine Learning-Based Path Loss Estimation Model for a 2.4 GHz ZigBee Network

Wireless sensor networks (WSNs) and the Internet of Things (IoT) have received remarkable attention over the past few years in various applications. Modeling path loss (PL) for the deployment of a WSN system is a crucial but time-consuming and delicate task. Radio-frequency (RF) engineers typically adopt either deterministic or stochastic empirical models to estimate the PL. In general, empirical models use predefined influence parameters, including the path loss (dB), the path loss exponent (Υ), and other significant parameters. However, empirical models can differ significantly from actual measurements owing to differences in terrain. In this study, an endeavor has been made to develop a machine learning-based model to estimate the path loss of a standard ZigBee communication network operating on a 2.4 GHz carrier frequency deployed in an urban area. An experimental setup was designed and tested in line-of-sight (LOS) and non-line-of-sight (NLOS) conditions to collect influence parameters such as the received signal strength indicator (RSSI), frequency, distance, and transmitter antenna gain; environmental parameters such as temperature and humidity were also included. A three-layer, feed-forward back-propagation multilayer perceptron neural network (BPNN) and a Log-Distance empirical model were employed to estimate the PL. The results reveal that the BPNN model noticeably enhances the coefficient of determination (R²) and reduces the root mean square error (RMSE) compared with the empirical model: R² and RMSE were 0.97220 and 0.03630 in the NLOS scenario, and 0.99820 and 0.00773 in the LOS environment, respectively.
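The Log-Distance empirical baseline used for comparison has a simple closed form, PL(d) = PL(d₀) + 10·n·log₁₀(d/d₀) + X_σ. The reference loss, exponent, and transmit power below are illustrative values, not the measured parameters from the paper's ZigBee campaign.

```python
import math

# Log-distance empirical path loss model:
#   PL(d) = PL(d0) + 10 * n * log10(d / d0) + X_sigma
# The reference loss and exponent here are illustrative placeholders.

def log_distance_path_loss(d, d0=1.0, pl0_db=40.0, exponent=3.0, shadowing_db=0.0):
    return pl0_db + 10.0 * exponent * math.log10(d / d0) + shadowing_db

# RSSI estimate for a +4 dBm transmitter and a node 10 m away
# (exponent 3.0 is a typical NLOS-ish urban assumption).
tx_power_dbm = 4.0
rssi = tx_power_dbm - log_distance_path_loss(10.0)
print(round(rssi, 1))   # → -66.0 dBm
```

A learned model such as the paper's BPNN effectively replaces this fixed formula with a function fitted to the measured RSSI, distance, gain, temperature, and humidity inputs.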

Prashanth Ragam, Guntha Karthik, B. N. Jagadesh, Sankati Jyothi
Comparative Analysis of CNN Models with Vision Transformer on Lung Infection Classification

Lung infections are among the most frequently observed medical conditions affecting the respiratory system. These infections can have mild effects such as a cough but, if ignored, can lead to death. Hence, a classification model that helps in the early detection of lung infections can help avoid further complications. This paper compares techniques for classifying lung infections based on different convolutional neural network architectures against advanced deep learning techniques such as the Vision Transformer, trained on a chest X-ray dataset. The chest X-ray is one of the most powerful imaging techniques for diagnosing lung infections. Our model is therefore built on chest X-rays collected from 30,805 individuals, classified into 15 labels (including "no finding"). The dataset consists of 112,120 image samples, used to develop models for predicting and analyzing lung infections with InceptionV3, ResNet50, VGG16, InceptionResNet, and the Vision Transformer. Each model is evaluated on the basis of mean absolute error (MAE), binary accuracy, and loss. Among all the models, InceptionResNet obtained the best accuracy of 93.33%. This study suggests that Vision Transformers still need further development in order to catch up with CNN models.

G. S. S. V. Badrish, K. G. N. Prabhanjali, A. Raghuvira Pratap
Classification of Alzheimer’s Disease Using Stacking-Based Ensemble and Transfer Learning

Dementia is a collection of traits linked with reduced memory, thinking abilities, or other cognition-related skills. Dementia exists in many different forms and can be caused by many different conditions; however, its most common cause is Alzheimer's, which can be described as one specific disease (Beer et al. in The Merck Manual of Diagnosis and Therapy. Merck Research Laboratories, 1999 [1]). In recent years, many deep learning approaches for classifying brain images with respect to Alzheimer's disease have been proposed, and a tremendous amount of research is being done in this area. However, these approaches are still not used actively in the medical field, owing to apprehensions about their accuracy and a general lack of appropriate medical data. This paper introduces an approach to classify Alzheimer's MRI image data into four different stages. The approach produces efficient and accurate results and is designed to encourage the use of deep learning in day-to-day medicine without the need for much human involvement. In the proposed method, we use transfer learning to employ three pre-trained deep learning models for the classification task and combine them through a stacked ensemble, which can then be used for prediction. To carry out this approach, the Alzheimer's dataset (four classes of images) from Kaggle, containing MRI brain images, has been used, and the proposed methodology has produced an accuracy of 97.8%. The results are visualized and presented in this paper.

T. Madhumitha, M. Nikitha, P. Chinmayi Supraja, K. Sitakumari
Heart Device for Expectation of Coronary Illness Utilizing Internet of Things

In recent years there has been rapid growth in healthcare services that provide remote communication between doctor and patient through wearable technologies, referred to as "telemedicine." The purpose is to offer continuous monitoring of chronic conditions such as heart failure, asthma, hypotension, and hypertension for patients located far from medical facilities, such as in rural areas, or for individuals temporarily without access to medical care. Owing to lifestyle changes affecting all age groups, heart disease has become the primary cause of mortality in all such cases. The literature reports that nearly 2.8 billion people worldwide die of heart disease related to being overweight or obese, which ultimately influences blood pressure swings, cholesterol levels, and, in particular, the effect of stress hormones on underlying heart disease. Many wearable technologies monitor typical heart-function indicators such as blood pressure, glucose level, blood oxygen saturation, and ECG. The proposed framework can gather the essential data while excluding noise disturbances by simultaneously monitoring several parameters integrated into wearable devices on a single chip, and it attempts to maintain precision by limiting human interaction. A real-time diagnostic device employs biomedical sensors to measure various parameters in remotely located heart-vulnerable patients. Physicians can store and view the accumulated observations later for a precise diagnosis. The proposed system will thus establish a support system that allows the doctor to monitor patients' health parameters electronically while away; the physical examination of a specialist cardiologist would, of course, remain pivotal for validating the diagnostic results.

P. Kumar, S. Vinod Kumar, L. Priya
Parallel Programming in the Hybrid Model on the HPC Clusters

Support for an ever-expanding range of distributed and parallel strategies, and for performance analysis, is constantly evolving. The message passing interface (MPI) and open multi-processing (OpenMP) standards provide software development tools covering a wide range of techniques not strictly limited to shared memory. The theoretical part of this work gives a general overview of the current capabilities of MPI and OpenMP as application programming interfaces. The authors studied a high-performance computing (HPC) cluster environment, and this work provides results and representative examples. The study also shows that, to obtain favorable processing efficiency and write correct code, regardless of the programming paradigm used, it is necessary to know the basic concepts of the given distributed and parallel architecture. Additionally, we investigated the need to adapt the tools used to the specific problem in order to improve processing efficiency. Finally, we show that failure to apply a critical section may produce unpredictable results.
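The critical-section point can be illustrated in any shared-memory setting; the sketch below uses Python's `threading` module as a stand-in for an OpenMP `critical` region (an assumed analogue, not the paper's C/Fortran code): concurrent increments of a shared counter are only guaranteed correct when the update is protected by a lock.

```python
import threading

# Shared-memory analogue of an OpenMP critical section: four threads
# increment a shared counter, and the lock around the update plays the
# role of "#pragma omp critical". Without it, lost updates are possible.
counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:                 # the critical section
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # deterministically 40000 only because of the lock
```

Removing the lock turns `counter += 1` into an unprotected read-modify-write, which is exactly the kind of race whose "unpredictable results" the study demonstrates.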

Tomasz Rak
An Extensive Study of Frequent Mining Algorithms for Colossal Patterns

During the last decade of research, much focus has been placed on frequent pattern mining (FPM). Numerous FPM algorithms were developed for transactional data sets with a large number of transactions and only a few items per transaction. With the rise of bioinformatics, a new sort of data set, the high-dimensional data set, has emerged, with fewer transactions but a greater number of items in each. The execution time of classical algorithms grows with transaction length, and high-dimensional data sets cannot be processed by existing algorithms. Moreover, when applied to large data sets with many dimensions, mining algorithms generate a huge number of patterns, much of which is useless to scientists because it consists of small and medium-sized patterns. Colossal pattern mining is discussed as a way to lessen the number of output patterns; since small and mid-sized patterns are not mined, mining algorithms for colossal patterns run faster. In this work, an extensive study of colossal patterns and of existing mining algorithms, with their drawbacks, is presented. The definitions of FPM and high-utility mining, and the relation of colossal patterns to other pattern types, are also explained. Pattern-Fusion, the first algorithm developed for colossal patterns, is described briefly.
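The classical level-wise approach whose output explosion motivates colossal-pattern mining can be sketched in a few lines: count candidate itemsets, keep the frequent ones, and join them into larger candidates until nothing survives. The transactions below are a toy example; this is the Apriori-style baseline, not Pattern-Fusion.

```python
from itertools import combinations

# Level-wise (Apriori-style) frequent itemset miner. On high-dimensional
# data this enumerates a flood of small and mid-sized patterns, which is
# precisely the cost that colossal-pattern mining avoids.

def frequent_itemsets(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    level = [frozenset([i]) for i in items]
    result = {}
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        # Join frequent k-itemsets sharing k-1 items into (k+1)-candidates.
        keys = list(frequent)
        level = {a | b for a, b in combinations(keys, 2)
                 if len(a | b) == len(a) + 1}
    return result

data = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c", "d"]]
freq = frequent_itemsets(data, min_support=3)
print(sorted("".join(sorted(s)) for s in freq))
```

Even this 5-transaction example yields six frequent patterns; with hundreds of items per transaction the count grows combinatorially, which is the scaling problem the survey describes.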

T. Sreenivasula Reddy, R. Sathya
A Review Paper on Progressive Approach to Reduce Context Switching in Round Robin Scheduling Algorithm

Processes/tasks are scheduled in order to finish on time. CPU scheduling is a technique that permits one process to utilize the CPU while another is delayed (on standby) due to a lack of resources such as I/O, allowing the CPU to be fully utilized. The goal of CPU scheduling is to improve the system's efficiency, speed, and fairness. When the CPU becomes idle, the operating system chooses one of the processes in the ready queue to run next; this selection is performed by the short-term CPU scheduler, which picks one of the memory-resident processes that are ready to run and assigns the CPU to it. Every operating system must perform scheduling, and practically all processes are scheduled before execution. The primary objective of all presently available CPU scheduling techniques is to improve CPU efficiency, CPU utilization, delay, and CPU cycles. Various algorithms tackle this, for example FCFS, SJN, and priority scheduling, but in this paper we work with the Round Robin (RR) scheduling algorithm. The behavior of RR is significantly influenced by the length of the time slice: time slices should be large compared with the context switch time, as this enhances performance by lowering the load on the CPU. In this study, we review an existing technique that reduces context switching by moving away from a fixed time quantum: the range of the time quantum is optimized within the RR scheduling algorithm. This paper focuses mainly on context switching and RR scheduling: when one process's time slice is completed, the processor is allocated to the next process, and the state of the preempted process must be saved so that it can later resume from the point where it was halted. The review compares context switching across different scheduling algorithms and summarizes the significance of past work.
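The relationship between quantum size and context switching is easy to demonstrate with a small simulation. The burst times below are arbitrary examples, and for simplicity every dispatch after the first is counted as a switch and all processes arrive at time 0.

```python
from collections import deque

# Round Robin simulation: for a given time quantum, count context switches
# and per-process waiting time. All processes are assumed to arrive at t=0,
# and every dispatch after the first is counted as a context switch.

def round_robin(burst_times, quantum):
    queue = deque(enumerate(burst_times))
    remaining = list(burst_times)
    switches = -1              # the first dispatch is not a switch
    waiting = [0] * len(burst_times)
    clock = 0
    while queue:
        pid, _ = queue.popleft()
        switches += 1
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            queue.append((pid, remaining[pid]))
        else:
            # waiting = completion time minus CPU time (arrival is t=0)
            waiting[pid] = clock - burst_times[pid]
    return switches, waiting

for q in (2, 4, 8):
    print(q, round_robin([5, 8, 3], q))
```

For bursts [5, 8, 3], the switch count falls from 8 at quantum 2 to 2 at quantum 8, while waiting times shift, which is the trade-off the reviewed dynamic-quantum techniques try to optimize.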

Kuldeep Vayandade, Ritesh Pokarne, Mahalakshmi Phaldesai, Tanushri Bhuruk, Prachi Kumar, Tanmay Patil
Syn Flood DDoS Attack Detection with Different Multilayer Perceptron Optimization Techniques Using Uncorrelated Feature Subsets Selected by Different Correlation Methods

Cyber attackers widely use Distributed Denial of Service (DDoS) attacks to saturate servers with network traffic, preventing authorized clients from accessing network resources and causing massive losses across all aspects of the affected organizations. Using the ADAM, SGD, and LBFGS optimization techniques, this paper evaluates a Multilayer Perceptron (MLP) classification algorithm for Syn flood DDoS attack detection with various uncorrelated feature subsets chosen by the Pearson, Spearman, and Kendall correlation methods. The Syn flood DDoS attack data were taken from the CIC-DDoS2019 dataset. The experiments conclude that, among the optimization techniques, ADAM gives the best results, and among the uncorrelated feature sets, the Pearson uncorrelated feature subset produces the best results. The Multilayer Perceptron thus produces its best classification results with ADAM optimization and the Pearson uncorrelated subset on the Syn flood DDoS attack.
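Selecting an "uncorrelated feature subset" with Pearson correlation can be sketched as a greedy filter: keep a feature only if its absolute correlation with every already-kept feature stays below a threshold. The feature names, values, and threshold below are toy assumptions, not columns from CIC-DDoS2019.

```python
import math

# Greedy Pearson-based uncorrelated feature selection. Feature columns and
# the 0.9 threshold are illustrative, not the paper's actual setup.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def uncorrelated_subset(features, threshold=0.9):
    kept = []
    for name, col in features.items():
        if all(abs(pearson(col, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# Hypothetical flow features: byte_count nearly duplicates pkt_count.
features = {
    "pkt_count": [1, 2, 3, 4, 5],
    "byte_count": [10, 20, 31, 39, 50],
    "syn_ratio": [0.9, 0.1, 0.8, 0.2, 0.7],
}
print(uncorrelated_subset(features))
```

The redundant `byte_count` column is dropped (|r| ≈ 0.999 against `pkt_count`), leaving a subset of mutually low-correlation features for the MLP. Spearman and Kendall variants would substitute rank-based correlation functions in the same loop.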

Nagaraju Devarakonda, Kishorebabu Dasari
Ensemble Model Detection of COVID-19 from Chest X-Ray Images

The rapid advancement of deep learning techniques for treating several medical issues, and the search for innovative methods of predicting COVID-19, are the motivation for this work. A feature-level ensemble model is proposed to distinguish COVID-19 cases from other similar lung infections. Three different pre-trained CNN models, namely VGG16, DenseNet201, and EfficientNetB7, are tested and finally combined to form the proposed ensemble model. The ensemble approach synergizes the features extracted by the deep CNN models to deliver accurate predictions and further improve classification. This approach not only enhances model performance but also reduces generalization error compared to a single model. To show its efficacy, the proposed model has been compared with the existing pre-trained models and tested on 3-class, 4-class, and 5-class publicly available datasets. The proposed model's performance is estimated in terms of accuracy, precision, recall, and F1-score, achieving better results for the detection task. Hence, the proposed model is a promising diagnostic tool for accurate screening of COVID-19 disease.

Lavanya Bagadi, B. Srinivas, D. Raja Ramesh, P. Suryaprasad
Transfer Learning-Based Effective Facial Emotion Recognition Using Contrast Limited Adaptive Histogram Equalization (CLAHE)

Recognition of facial emotions is a recent research area in computer vision and human-computer interaction. Deep learning models are widely employed to improve the rate of recognizing facial expressions, but they may perform poorly on images with noise and poor visibility. The field of emotion recognition also faces challenges such as facial decorations, uneven lighting, and varying poses, and feature extraction and classification are the key drawbacks of emotion detection using conventional methods. To address these issues, a new method, Transfer Learning-based Effective Facial Emotion Recognition Using Contrast Limited Adaptive Histogram Equalization (CLAHE), has been developed. The dataset is first passed through a combined trilateral filter to remove noise; to improve visibility, the filtered images are then treated with CLAHE. Deep learning techniques are required for the classification task, and transfer learning is employed here to address emotion recognition: the pre-trained ResNet50 and InceptionV3 networks are used. The method removes the fully connected layers from the pre-trained ConvNets and replaces them with fully connected layers appropriate for the number of classes in the proposed task. The technique was carried out using the CK+ database, and it successfully recognizes emotions.
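The equalization idea at the heart of CLAHE can be sketched in its plain (global) form: build a histogram, take the cumulative distribution, and remap intensities to spread over the full range. CLAHE additionally clips the histogram and equalizes per tile; the pixel values below are a toy low-contrast patch, and this sketch is the basic technique, not OpenCV's CLAHE.

```python
# Plain histogram equalization sketch. CLAHE extends this idea with a clip
# limit on the histogram and per-tile (local) equalization. The input is a
# toy 8-bit low-contrast patch.

def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution of intensities.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 1
    # Remap each pixel so the intensities spread over the full range.
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

# A patch clustered in [100, 104] stretches across [0, 255].
patch = [100, 100, 101, 102, 102, 103, 104, 104]
print(equalize(patch))
```

The contrast-limiting step in CLAHE caps each histogram bin before computing the CDF, which prevents this remapping from over-amplifying noise in near-uniform regions.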

D. Anjani Suputri Devi, D. Sasi Rekha, Mudugu Kishore Kumar, P. Rama Mohana Rao, G. Naga Vallika
Transformer Model for Human Activity Recognition Using IoT Wearables

Human activity recognition (HAR) plays a vital role in modern life through its immense applications in medical care, sports, detection and prediction of unexpected events, and biometrics. HAR is the field of study concerned with recognizing human activities from time-series data collected from video, images, or multimodal sensors in smartphones and IoT wearables. A wide body of HAR research, from machine learning to deep learning, has been presented so far: deep learning methods exploit the temporal view of the data, while shallow methods use handcrafted features and a statistical view. Most deep learning methods have used CNN and LSTM architectures; it is now time to apply the power of transformers in this area. For sequence analysis tasks, transformers have been shown to outperform earlier deep learning methods: they excel at finding relevant time steps and can model long-term dependencies. This work proposes a novel method that employs a transformer-based model to classify human activities from data collected with inertial measurement units (IMUs). The approach leverages the self-attention mechanism of transformers to discover complex patterns in time-series data. The transformer-based approach is evaluated for HAR on two data sets, USC-HAD and WISDM, and the results show improvements in the performance metrics and efficiency of the transformer-based model.
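The self-attention mechanism the model relies on can be sketched on a toy window of IMU-like feature vectors: project to queries, keys, and values, score every time step against every other, and softmax-weight the values. Dimensions and random projections below are illustrative; this is generic scaled dot-product attention, not the paper's full model.

```python
import numpy as np

# Scaled dot-product self-attention over a toy window of sensor features.
rng = np.random.default_rng(0)
T, d = 8, 4                       # time steps, feature dimension
X = rng.standard_normal((T, d))   # stand-in for accelerometer/gyro features

# Random projection matrices playing the role of learned weights.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)                       # pairwise time-step affinities
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)       # softmax over time steps
context = weights @ V                               # attended representation

print(weights.shape, context.shape)                 # (8, 8) and (8, 4)
```

Each row of `weights` sums to 1 and says how much every time step attends to every other, which is how the model finds the "relevant time steps" and long-range dependencies mentioned above.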

S. Sowmiya, D. Menaka
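The self-attention operation the abstract leans on can be shown directly. The sketch below is a simplification (identity query/key/value projections, a single head, no positional encoding, made-up channel count), not the proposed model: it shows how every time step in an IMU window attends to every other step.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over one window of IMU samples.
    x has shape (T, d): T time steps, d sensor channels. Identity
    query/key/value projections are used for brevity; a real transformer
    learns separate projection matrices and stacks multiple heads."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                       # (T, T) pairwise relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # softmax over time steps
    return w @ x                                        # each step attends to all others

# One 128-sample window of 6 channels (e.g. 3-axis accelerometer + gyroscope)
window = np.random.default_rng(1).normal(size=(128, 6))
out = self_attention(window)
```

Because every output row is a weighted mixture of all time steps, the mechanism can relate distant samples in one hop, which is the long-term-dependency advantage the abstract refers to.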
Design and Development of a Chatbot for Personalized Learning in Higher Education

Technology now plays a more significant role in our everyday lives, impacting how students learn and gain knowledge. Artificial intelligence allows educators to provide students with a personalized learning environment. A system has been created to systematically detect whether or not students are grasping the material; based on its feedback, we can also determine each student's starting point. Conversational agents, or chatbots, could supplement teacher instruction and reduce the costs of informal education. The development and training of an intelligent conversational agent have long been a goal of humanity and a significant challenge for scientists and programmers, and until recently it remained just a dream. Cutting-edge technologies such as deep learning and neural networks, however, give language teachers new hope by providing intelligent chatbots that can learn through communication just as humans do. This research aims to create an emotionally realistic chatbot system using artificial intelligence to improve the chatbot's believability, and to build a chatbot that can be integrated into a personalized teaching-learning process. The goal of the study is to determine the efficacy of the developed chatbot in personalized learning; the developed system enables it to provide a suitable personalized learning solution.

Hayder Kareem Algabri, Rajanish K. Kamat, Kabir G. Kharade, Naresh Babu Muppalaneni
Performance Evaluation of Concentric Hexagonal Array for Smart Antenna Applications

Many of the antenna array geometries considered so far are linear and circular arrays. In this paper, a different geometry, the Concentric Hexagonal Array (CHA), is reported for beamforming in smart antenna applications. The advantage of this array over other geometrical configurations is that it gives a lower side lobe level (SLL) and achieves pattern symmetry for non-uniform amplitude excitation. Since the radiation pattern becomes symmetric and the element amplitudes carry even symmetry about the array's center, the computing time and the number of attenuators are halved for amplitude-only synthesis, which in turn reduces the cost of the system. Using the 12-element hexagonal array, 24-element and 36-element concentric hexagonal arrays are constructed and their performances evaluated. The 24-element CHA achieves a lower side lobe level than the 36-element array, and its performance is also compared with circular and hexagonal arrays of the same number of elements. The Firefly algorithm is used for beamforming and to obtain the excitation coefficients.

Sridevi Kadiyam, A. Jhansi Rani
A Comprehensive Study on Bridge Detection and Extraction Techniques

Identification of bridges plays a major role in reporting the status of such constructions. Satellite images generally include information about geographical features, including bridges, and these features are very valuable to both military and civilian users. Identifying bridges in major infrastructure works is critical for providing data about the status of those structures and supporting decision-making processes. Typically, this identification is performed by human agents who must detect the bridges in large-scale datasets, analyzing them image by image, a time-consuming task. Bridge scrutiny is vital to reducing the safety concerns caused by the aging and deterioration of bridges. Traditional inspection and identification methods use IoT sensors and lasers, but these can identify an object only if it is within a medium range of distance, and in a limited time they can detect bridges only in succession. The identification can instead be done using convolutional neural networks and deep learning techniques. The Geographic Information System (GIS) also helps to capture, gather, manage, and analyze geographical features; GIS is used to control and combine disparate sources of spatial and attribute records for tracking bridge health.

P. Rishitha, U. Venkata Sai, S. Dyutik Chaudhary, G. Anuradha
Vegetation Change Detection of Multispectral Satellite Images Using Remote Sensing

Change detection compares the spatial representation of a scene at two points in time, with variations in the variables of interest registering as changes. Remotely sensed Landsat 8 satellite imagery can be used to detect changes in multispectral images. Vegetation change detection plays a prominent role in tracking the alteration of vegetation in selected areas, and analyzing the vegetation change of a specific area over a given time period yields information that can be used to predict the impact of change over the years. Multispectral images for vegetation change can be obtained from Landsat 8 imagery of a specific location by stacking selected bands together. Over the years, due to urbanization, the vegetative index of cities has dropped drastically; to take the necessary measures, this impact must be analyzed, which motivates vegetation change detection analysis. Image stacking is performed using QGIS (Quantum Geographic Information System) software, which provides various image enhancement options. The location Vijayawada is tracked for change over a period of eight years using the NDTS (Normalized Difference Between Time Series) algorithm, followed by comparison with the PCA and K-means algorithms. This paper gives a detailed visualization of the results acquired using metrics such as RMSE and PSNR.

G. Sai Geethika, V. Sai Sreeja, T. Tharuni, V. Radhesyam
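The paper's NDTS pipeline is not reproduced here; as a hedged illustration of the underlying band arithmetic, the snippet below differences NDVI (computed from Landsat 8 band 5, NIR, and band 4, red) between two dates, a common basis for vegetation change maps. The reflectance values are toy numbers, not Vijayawada data.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), from Landsat 8 band 5 (NIR)
    and band 4 (red); values near +1 indicate dense vegetation."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # guard against 0/0

# Toy reflectance patches for two dates; a negative difference flags vegetation loss.
nir_2014, red_2014 = np.full((4, 4), 0.5), np.full((4, 4), 0.1)
nir_2022, red_2022 = np.full((4, 4), 0.3), np.full((4, 4), 0.2)
change = ndvi(nir_2022, red_2022) - ndvi(nir_2014, red_2014)
```

A per-pixel difference map like `change` is what clustering methods such as K-means then threshold into changed and unchanged regions.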
Performance Evaluation of Neural Networks-Based Virtual Machine Placement Algorithm for Server Consolidation in Cloud Data Centres

Cloud computing, an on-demand model that provides IT services as a utility, has become increasingly popular in recent years. However, the cloud's resources are housed in data centres that have a major environmental impact due to their high energy consumption. Consolidating servers onto fewer physical machines is therefore a significant method for increasing data centre efficiency and reducing energy waste. In this work, we propose a technique for selecting and placing servers and virtual machines using artificial neural networks. It forecasts user demand, evenly distributes work if a server is overloaded, trains a feedforward neural network with backpropagation learning, uses a selection algorithm to determine which virtual machine to use, and finally uses a cross-validation algorithm to ensure that the placement is accurate. Data from cloud providers such as Amazon (EC2), Microsoft Azure, and Google Cloud Storage are used to measure the efficacy of the server consolidation. The algorithm's experimental results are compared with those of other algorithms published by the same authors, covering both exact and heuristic optimization methods, that pursue the same goals.

C. Pandiselvi, S. Sivakumar
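The demand-forecasting component, a feedforward network trained with backpropagation, can be sketched without any framework. Everything below is illustrative: the toy target (the mean of a utilisation window), layer sizes, and learning rate are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy training pairs: a 4-step CPU-utilisation window -> next-step demand.
X = rng.uniform(0, 1, size=(64, 4))
y = X.mean(axis=1, keepdims=True)               # stand-in target for illustration

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)   # one hidden layer of 8 units
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(X)
loss_before = ((pred - y) ** 2).mean()
for _ in range(500):                            # plain batch gradient descent
    h, pred = forward(X)
    g = 2 * (pred - y) / len(X)                 # dLoss/dpred for mean squared error
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)                # backpropagate through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    for p, gp in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * gp                           # in-place update keeps the references
_, pred = forward(X)
loss_after = ((pred - y) ** 2).mean()
```

In the consolidation setting, the trained predictor's output would feed the VM selection step; here the sketch only shows that the backpropagation loop reduces the forecasting error.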
Low-Resource Indic Languages Translation Using Multilingual Approaches

Machine translation is effective in the presence of a substantial parallel corpus. In a multilingual country like India, with diverse linguistic origins and scripts, the vast majority of languages lack the resources to produce high-quality translation models. Multilingual neural machine translation (MNMT) has the advantage of being scalable across multiple languages and of improving low-resource languages via knowledge transfer. In this work, we investigate MNMT for the low-resource Indic languages Hindi, Bengali, Assamese, Manipuri, and Mizo. Given the recent success of massively multilingual pre-trained models for low-resource languages, we explore the effectiveness of the multilingual pre-trained transformers mBART and mT5 on several Indic languages. We fine-tune the pre-trained models in one-to-many and many-to-one settings and compare their performance with multiway multilingual translation trained from scratch using the same one-to-many and many-to-one approaches.

Candy Lalrempuii, Badal Soni
DCC: A Cascade-Based Approach to Detect Communities in Social Networks

Community detection in social networks is associated with finding and grouping the most similar nodes inherent in the network. These similar nodes are identified by computing tie strength, where stronger ties indicate higher proximity between connected node pairs. This work is motivated by Granovetter's argument that strong ties lie within densely connected nodes and by the theory that community cores in real-world networks are densely connected. In this paper, we introduce a novel method called Disjoint Community detection using Cascades (DCC), which demonstrates the effectiveness of a new local density-based tie strength measure for detecting communities. Here, tie strength is utilized to decide the paths followed for propagating information: the idea is to crawl through the tuple information of cascades toward the community core, guided by increasing tie strength. For the cascade generation step, a novel Preferential Membership method has been developed to assign community labels to unassigned nodes. The efficacy of DCC has been analyzed in terms of quality and accuracy on several real-world datasets and against baseline community detection algorithms.

Soumita Das, Anupam Biswas, Akrati Saxena
Fault Classification and Its Identification in Overhead Transmission Lines Using Artificial Neural Networks

In modern power systems, fault classification and location are critical for improving protection schemes and service reliability and for reducing line outages. Fault identification in double-circuit lines, however, faces persistent issues due to the mutual coupling effect. Although several methods have proven accurate in locating double-circuit line faults, they have limitations in certain situations, such as a lack of synchronization between the measuring ends, identification of the fault type, the need for data before and after a fault, three-phase faults, and so on. This study offers an artificial neural network technique that uses a probabilistic neural network (PNN) for fault classification and a generalized regression neural network (GRNN) for fault localization to address some of these challenges. To categorize and locate the fault, the proposed technique leverages the fundamental current phasor magnitudes obtained at both circuit endpoints of the double circuit. To validate the proposed method, a 200 km overhead transmission line is simulated in MATLAB/Simulink for all fault types, varying the fault location, fault resistance, and fault inception angle.

Kathula Kanaka Durga Bhavani, Venkatesh Yepuri
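The GRNN used for fault localization is essentially a kernel-weighted average over stored training patterns, so its prediction step fits in a few lines. The feature values and distances below are invented for illustration, not data from the paper's 200 km line study.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN prediction: a Gaussian-kernel-weighted average of stored
    training targets (a one-pass, Nadaraya-Watson style regressor)."""
    d2 = ((X_train - x) ** 2).sum(axis=1)        # squared distance to each pattern
    w = np.exp(-d2 / (2 * sigma ** 2))           # pattern-layer activations
    return float(w @ y_train) / float(w.sum())   # summation / output layers

# Invented current-phasor features -> fault distance (km); illustrative only.
X = np.array([[1.0, 0.2], [0.8, 0.4], [0.5, 0.7], [0.2, 0.9]])
y = np.array([20.0, 70.0, 130.0, 180.0])
est = grnn_predict(X, y, np.array([0.65, 0.55]))  # query between the middle patterns
```

Because the GRNN stores its training set instead of iterating over weights, it trains in a single pass; the smoothing parameter `sigma` is the only quantity to tune.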
An Improved Way to Implement Round Robin Scheduling Algorithm

The scheduling algorithm is primarily responsible for a system's effectiveness, and the Round Robin (RR) algorithm is considerably more effective than many currently available alternatives. In Round Robin, every task is given a fixed amount of time to complete part of its execution: a process is removed from the ready queue if it has finished, and otherwise is added to the end of the ready queue to wait for its next turn. In this chapter, we propose an effective technique based on the dynamic time quantum approach to Round Robin. The dynamic time quantum (TQ) approach autonomously computes the time quantum for each cycle from a prescribed formula; in the proposed technique, the TQ is calculated in each cycle and compared with each process's burst time for its execution. The dynamic TQ reduces the average waiting time, average turnaround time, and number of context switches.

Kuldeep Vayadande, Aditya Bodhankar, Ajinkya Mahajan, Diksha Prasad, Riya Dhakalkar, Shivani Mahajan
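The per-cycle recomputation can be simulated directly. The abstract does not reproduce the prescribed TQ formula, so the sketch below assumes one plausible choice, the mean remaining burst time of the queued processes, purely to illustrate the mechanism and the average-waiting-time bookkeeping.

```python
from collections import deque

def dynamic_rr(bursts):
    """Round Robin with a per-cycle dynamic time quantum. The paper's
    prescribed TQ formula is not given in the abstract; the mean remaining
    burst time of the queued processes is assumed here for illustration.
    All processes are taken to arrive at time 0."""
    remaining = dict(enumerate(bursts))
    queue = deque(remaining)
    t, finish = 0, {}
    while queue:
        tq = max(1, sum(remaining[p] for p in queue) // len(queue))
        for _ in range(len(queue)):       # one full cycle with this quantum
            p = queue.popleft()
            run = min(tq, remaining[p])
            t += run
            remaining[p] -= run
            if remaining[p] == 0:
                finish[p] = t             # turnaround time, since arrival is 0
            else:
                queue.append(p)
    waits = [finish[p] - bursts[p] for p in range(len(bursts))]
    return sum(waits) / len(bursts)       # average waiting time

avg_wait = dynamic_rr([5, 15, 40, 70])
```

With these bursts the first cycle's quantum (32) lets the two short processes finish immediately, which is exactly the effect a dynamic TQ exploits to cut waiting time and context switches.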
E-Learning Paradigm in Cloud Computing and Pertinent Challenges in Models Used for Cloud Deployment

Pedagogical science has improved immensely across every area of learning and education thanks to the adoption of recent developments in the Information and Communication Technologies (ICT) domain. Web-based tools have given a new thrust to imparting education through remote means, and cloud networks are quickly being deployed across all disciplines of learning and at all levels. A user can access data stored on a cloud platform from any remote location using a computer, tablet or smartphone, and can also modify that data as needed. E-learning becomes seamless when cloud-based techniques are employed; this also cuts the cost and cumbersomeness of data processing, since the major share of services is operated on third-party infrastructure. Combining cloud-based and conventional E-learning offers a great advantage to teachers, students and institutes, albeit with a catch: security becomes a major concern. The present review analyzes the perks of deploying cloud-based learning and the security concerns that come with it, as observed through various studies.

Dhaval Patel, Sanjay Chaudhary
Parkinson’s Disease Detection: Comparative Study Using Different Machine Learning Algorithms

Detection of Parkinson's disease is often based on clinical monitoring and the evaluation of medical signs, including indications of different motor symptoms. This conventional approach, however, is subjective: it relies on the analysis of motions that are not easily recognized and are tough to categorize, which leads to miscategorization. Since Parkinson's disease sufferers have distinct voice characteristics, vocal recordings are widely regarded as a useful diagnostic tool. If machine learning techniques can reliably detect this disease from a voice-recording dataset, this would provide a useful screening step before seeing a doctor. This chapter compares ML models, including XGBoost, random forest, Naive Bayes, logistic regression, SVM, decision tree, and K-NN, to find the best result for detection of the disease.

Vijaykumar Bhanuse, Ankita Chirame, Isha Beri
Implementation of Blockchain in Automotive Industry to Secure Connected Vehicle Data: Study and Analysis

The automotive industry is undergoing rapid transformation because of the collaboration between technology companies and automakers. To support the present trend of connected cars, several stakeholders, such as insurance firms, car buyers and sellers, and government agencies, need different kinds of vehicle data. Data gathering is currently either manual or unverified, which raises concerns over trust, validity, and exactness, such as tampered safety checks and false or duplicate vehicle data records. A robust instrument capable of safeguarding vehicle data is therefore essential, and changes must be recorded for auditing purposes to ultimately establish trust in the system. To meet these requirements, we propose utilizing blockchain technology. Healthcare, transportation, finance, cybersecurity, and supply chain management are just a few of the fields in which blockchain technology has recently gained popularity and been implemented. The aim of this survey is to deliver a comprehensive investigation of the advantages and uses of blockchain applications in the automotive segment.

Yedida Venkata Rama Subramanya Viswanadham, Kayalvizhi Jayavel
A Short Survey on Fake News Detection in Pandemic Situation Towards Future Directions

In December 2019, numerous news posts regarding the COVID-19 situation emerged in electronic media, traditional print, and social media. These media sources draw information from both trusted and non-trusted medical healthcare systems, and fake news regarding the pandemic spreads rapidly across multimedia. Even a small spread of deceptive information can cause unwanted exposure and anxiety about medical remedies and can feed digital-marketing tricks, and such unverified medical remedies may prove deadly. A model to detect fake content and misinformation in the multimedia news pool during a pandemic is therefore essential. The purpose of this systematic literature review is to survey the significant studies on misinformation and fake news during COVID-19 on social media. The survey covers the deep structured architectures currently used for fake news detection (FND) in both pre- and post-pandemic situations, and also explores the algorithmic categorization, dataset details, performance metrics, and implementation tools. Finally, the review identifies the research gaps and future directions that have arisen in FND approaches, providing an appropriate pathway for researchers to make improvements and obtain better outcomes with a limited amount of input data.

Rathinapriya Vasu, J. Kalaivani
Periocular Biometrics and Its Applications: A Review

The periocular region is the area in the proximity of the eyes. It provides a significant number of features that can be used when the face is occluded: due to occlusion, a very large portion of the face does not contribute features, which may lead to unsatisfactory system performance. Focusing only on the periocular region enhances performance, as only its features are considered in developing the application. This paper provides an extensive survey of the different applications that use the periocular region for biometrics and soft biometrics, an overview of the different preprocessing steps researchers have used in recent years, and a discussion of possible research opportunities in the domain.

Aishwarya Kumar, K. R. Seeja
Salt Segment Identification in Seismic Images Using UNet with ResNet

Salt segmentation is the process of identifying whether a subsurface target is salt or not. There are several places on Earth with significant amounts of salt as well as oil and gas, and for businesses engaged in oil and gas exploration, finding the exact locations of significant salt deposits is crucial. Salt-affected land is also not useful for farming: the presence of salt in the soil solution reduces a plant's absorption capacity, which reduces its growth rate, so salt segmentation can identify land that contains salt. Each pixel of a seismic image is analyzed and classified as either salt or sediment. The TGS Salt Identification Challenge dataset is used, which consists of 4,000 seismic image patches of size 101 × 101 × 3 and corresponding segmentation masks of size 101 × 101 × 1 in the training set; the 18,000 seismic image patches in the test set are used for the evaluation of the model. The model combines UNet with ResNet-18 and ResNet-34. Using this architecture, the salt region can be determined from the seismic data and the extent of the entire salt region displayed. IoU is used as the performance metric to evaluate the model. The outcomes demonstrate that the ensemble model outperforms individual network models and achieves better segmentation results.

P. Venkata Uday Kiran, G. Anuradha, L. Sai Manohar, D. Kirthan
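IoU, the evaluation metric named above, is simple to state in code. The masks below are synthetic stand-ins at the dataset's 101 × 101 patch size, not TGS data.

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union between two binary salt masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0                        # two empty masks agree perfectly
    return np.logical_and(pred, target).sum() / union

# Synthetic 101 x 101 masks at the dataset's patch size (not TGS data):
pred = np.zeros((101, 101), dtype=np.uint8); pred[:, :60] = 1   # predicted salt
true = np.zeros((101, 101), dtype=np.uint8); true[:, 40:] = 1   # ground truth
score = iou(pred, true)   # the masks overlap only in columns 40-59
```

Competition-style scoring typically averages a thresholded version of this quantity over many IoU cutoffs, but the per-mask ratio above is the underlying measurement.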
A Study of Comparison Between YOLOv5 and YOLOv7 for Detection of Cracks in Concrete Structures

Mechanical failures of man-made structures pose a risk to human life and property. Locating and mitigating cracks before they propagate through critical regions of a mechanical or civil structure prevents accidents and loss of life. Crack propagation, also known as sub-critical crack propagation or stress corrosion, frequently happens under low stress and is characterized by gradual growth: new surfaces form by releasing the elastic strain energy brought on by an external load, and cracks spread to reduce the energy of the system. Surface cracks can be found using various non-destructive testing methods: visual optical testing, eddy current testing, liquid penetrant testing, and magnetic particle testing. This study is currently limited to visual testing using computer vision, extracting features from captured data with multiple image processing algorithms to identify cracks using an object detector model built from data points collected in a user data set. In a systematic manner, we developed object detector models separately using YOLOv5 and YOLOv7, the two most recent additions to the YOLO family, and studied the different standard evaluation metrics obtained from the two frameworks. YOLO-based object detectors use deep learning to train models for object detection, starting from weights pre-trained on the COCO data set. Results indicate that the detector model can further be applied with confidence to physical structures at risk of surface cracks.

Ajay Anoop, Jeetu S Babu
Machine Learning-Based Identification as Well as Classification of Functional and Non-functional Requirements

In software engineering, it has become necessary to divide requirements into functional and non-functional categories. This article identifies and categorizes functional (F) and non-functional (NF) requirements, and additionally discusses the subclasses of NF requirements. A portion of the data was gathered from various resources found on the internet. In this investigation, we began by cleaning the data using normalization processes and then moved on to the succeeding steps of text preparation and vectorization. The topics covered include the confusion matrix, Bag of Words, Term Frequency-Inverse Document Frequency, featurization and machine learning models, ROC and AUC curves, bi-grams and n-grams in Python, and Word2Vec. Early discovery of NFRs allows preliminary design choices to be made. We used machine learning algorithms to detect and categorize both functional and non-functional requirements for application software development, based on the user requirements provided to us. This article aims to assist various software professionals, including software developers, designers, and testers, in the application development process, and it also helps software be developed more quickly and delivered to customers sooner. This step makes it easier to create an SRS document and helps the requirement analysis phase proceed more smoothly by reducing the amount of unneeded labor and complexity.

R. D. Budake, S. D. Bhoite, K. G. Kharade
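The Bag-of-Words/TF-IDF featurization mentioned above can be shown without a library. The requirement sentences are invented examples, and real pipelines typically use scikit-learn's vectorizers; this sketch only illustrates the arithmetic.

```python
from collections import Counter
import math

def tfidf(corpus):
    """Minimal TF-IDF featurization of requirement sentences -- the
    vectorization step that precedes the classifiers discussed above."""
    docs = [doc.lower().split() for doc in corpus]
    vocab = sorted({w for d in docs for w in d})
    df = Counter(w for d in docs for w in set(d))      # document frequency
    n = len(docs)
    vectors = [[Counter(d)[w] / len(d) * math.log(n / df[w]) for w in vocab]
               for d in docs]
    return vocab, vectors

# Invented requirement sentences (not from the paper's dataset):
reqs = ["the system shall encrypt passwords",
        "the system shall export reports",
        "response time shall stay under two seconds"]
vocab, X = tfidf(reqs)   # words common to every requirement get zero weight
```

Note how "shall", present in every requirement, receives zero weight: the IDF factor is what keeps boilerplate requirement phrasing from dominating the F/NF classifier's features.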
Fake News Detection in Dravidian Languages Using Transformer Models

Nowadays, fake news is spreading rapidly. Many resources are available for fake news detection in high-resource languages like English, but the lack of annotated data and corpora makes detecting fake news in low-resource languages difficult. There is thus a need for fake news detection in low-resource languages such as the Dravidian languages. In this research, we used the Telugu, Kannada, Tamil, and Malayalam languages and tested four transformer models: mBERT, XLM-RoBERTa, IndicBERT, and MuRIL. Among these models, MuRIL gives the best accuracy.

Eduri Raja, Badal Soni, Samir Kumar Borgohain
A Review on Artificial Intelligence Techniques for Multilingual SMS Spam Detection

With the increased popularity of social networks and advancements in smartphone technology, Facebook, Twitter, and short text messaging services (SMS) have gained popularity. The availability of these low-cost text-based communication services has implicitly increased the intrusion of spam messages. Spam messages have emerged as an important issue, especially for vulnerable mobile users such as elderly people, children, and other less skilled users of mobile phones. Unknowingly or mistakenly clicking the hyperlinks in spam messages or subscribing to advertisements puts them at risk of having money debited from their bank account or their network subscriber balance. Different approaches to detecting spam messages have been attempted in the last decade, and many mobile applications have evolved for spam detection in English, but performance is still lacking. While English is well covered by natural language processing, other regional languages, such as Urdu and Hindi variants, pose specific issues for spam detection. Mobile users suffer greatly from these issues, especially in multilingual countries like India. This paper therefore critically reviews artificial intelligence-based spam detection systems. The review lists the existing systems that use machine and deep learning techniques, along with their limitations, merits, and demerits. In addition, the paper covers the scope for future enhancements in natural language processing to efficiently prevent spam messages rather than merely detect them.

E. Ramanujam, K. Shankar, Arpit Sharma
Fusion of LBP and Median LBP for Dominant Region Based Multimodal Recognition Using Imperfect Face and Gait Cues

The paper provides a novel approach to recognizing imperfect face and gait cues by using only the dominant region of such cues, which contains more information than the other regions. To enhance features in the dominant region, a fusion of Local Binary Pattern (LBP) and Median Local Binary Pattern (Median LBP) is proposed. Initially, the given imperfect face and gait probe images are divided into six overlapped half regions. After partition, the dominant overlapped half regions of face and gait are selected using a content-based image retrieval process. Subsequently, the features of the dominant overlapped imperfect face and gait regions are enhanced by fusing the feature vectors obtained from LBP and Median LBP. Next, Eigen feature vectors followed by Fisher's vectors are constructed using the Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) dimensionality reduction algorithms. Decisions from the face and gait biometric systems are obtained separately using the Euclidean distance measure, and the decisions of the face and gait classifiers are finally fused at the decision level using the AND/OR rule. The method is verified on the freely available ORL face and CASIA B gait datasets.

K. Annbuselvi, N. Santhi, S. Sivakumar
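The two descriptors being fused can be sketched in a few lines. The snippet below is an illustrative implementation, not the authors' exact variant: it computes 3 × 3 LBP codes, optionally thresholding against the neighbourhood median (Median LBP), and concatenates the two code maps as a crude stand-in for feature-level fusion.

```python
import numpy as np

def lbp_codes(img, use_median=False):
    """3x3 LBP over an image's interior pixels. With use_median=True the
    neighbourhood median replaces the centre pixel as the threshold
    (Median LBP), which is less sensitive to noisy centre pixels."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # clockwise neighbour offsets starting at the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            thr = np.median(patch) if use_median else img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offs):
                code |= (int(img[i + di, j + dj]) >= thr) << bit
            out[i - 1, j - 1] = code
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)     # tiny synthetic patch
fused = np.concatenate([lbp_codes(img).ravel(),
                        lbp_codes(img, use_median=True).ravel()])
```

In practice histograms of these codes, rather than the raw code maps, would be concatenated before the PCA/LDA projection; the sketch keeps only the per-pixel coding step.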
A Deep Convolutional Neural Network for Breast Cancer Detection in Mammograms

Breast cancer is the most prevalent ailment among women worldwide, and mammography is the standard screening method for detecting it. The proposed methodology uses preprocessing, feature extraction, and classification, with the main objective of detecting the disease early so that lifetimes can be extended. A deep convolutional neural network (DCNN) model is used to classify mammograms as normal or abnormal. Initially, the mammograms are preprocessed using contrast limited adaptive histogram equalization (CLAHE). These images are fed to the DCNN model, which contains five convolutional layers using the rectified linear unit (ReLU) activation function, with max-pooling used to select the best features. The fully connected layer uses softmax as the activation function and classifies the selected features; training is done with several optimizers, namely stochastic gradient descent (SGD), RMSprop, and Adam. Experiments are conducted on two benchmark datasets, the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM), and performance is measured in terms of sensitivity, specificity, and accuracy. The results show that the DCNN with the SGD optimizer gives better results than traditional methods.

B. Naga Jagadesh, L. Kanya Kumari, Akella V. S. N. Murthy
Malicious Social Bots Detection in the Twitter Network Using Learning Automata with URL Features

By pretending to be followers or making huge numbers of false registrations through malicious activities, malicious social bots automate their social interactions and send out bogus tweets. Social bots are the best-known form of malware in online social environments: they disseminate rumors, convey false information, and even influence public opinion, although benign social bots are also used to deliver better customer service by automating logical procedures. Malicious social bots additionally use malicious shortened URLs in tweets to redirect users' requests for peer-to-peer online communication to malicious servers. As a result, a main goal for Twitter is to distinguish damaging social bots from actual users. Compared with social graph-based features (which depend on the social associations of the users), URL-based features (such as URL redirection, shared URL recursion, and URLs with spam content) require less effort to extract, and it is difficult for malicious social bots to quickly alter URL redirection chains. By merging a trust computation model with URL-based features, we propose a learning automata-based Malicious Social Bot Detection (LA-MSBD) algorithm to identify trustworthy participants (users) in the Twitter network. The employment of malicious social bots to propagate false information has had negative real-world effects, and one of the most challenging aspects of spotting bots in online social media is understanding what social bots are capable of and examining the quantitative features of their behavior; this is what distinguishes typical users from social bots. This article suggests a strategy for spotting Twitter bots using AI computations, with a detailed analysis of the K-NN, SVM, Naive Bayes, Random Forest, and decision tree algorithms as well as a custom computing algorithm. The most effective learning model is then applied to the test data.

R. Kiran Kumar, G. Ramesh Babu, G. Sai Chaitanya Kumar, N. Raghavendra Sai
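The learning-automaton core of an approach like LA-MSBD can be illustrated with a linear reward-inaction update. The two-action setup, reward signal, and learning rate below are assumptions for illustration only, not the paper's trust model.

```python
def lri_update(p, chosen, reward, lr=0.1):
    """Linear reward-inaction (L_RI) update for a learning automaton:
    on reward, shift probability mass toward the chosen action; on
    penalty, leave the action probabilities unchanged."""
    if not reward:
        return p
    return [pi + lr * (1 - pi) if i == chosen else pi * (1 - lr)
            for i, pi in enumerate(p)]

# Two actions, e.g. "trust user's URLs" vs "flag user"; action 0 keeps earning reward.
p = [0.5, 0.5]
for _ in range(20):
    p = lri_update(p, chosen=0, reward=True)
```

Repeated rewards push the automaton's probability for the consistently rewarded action toward 1, which is how per-user trust estimates converge from a stream of URL-based evidence.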
Metadata
Title
High Performance Computing, Smart Devices and Networks
Edited by
Ruchika Malhotra
L. Sumalatha
S. M. Warusia Yassin
Ripon Patgiri
Naresh Babu Muppalaneni
Copyright Year
2024
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-9966-90-5
Print ISBN
978-981-9966-89-9
DOI
https://doi.org/10.1007/978-981-99-6690-5
