
2021 | Book

Applications of Artificial Intelligence and Machine Learning

Select Proceedings of ICAAAIML 2020

Editors: Ankur Choudhary, Arun Prakash Agrawal, Rajasvaran Logeswaran, Dr. Bhuvan Unhelkar

Publisher: Springer Singapore

Book Series: Lecture Notes in Electrical Engineering


About this book

The book presents a collection of peer-reviewed articles from the International Conference on Advances and Applications of Artificial Intelligence and Machine Learning (ICAAAIML 2020). The book covers research in artificial intelligence, machine learning, and deep learning applications in healthcare, agriculture, business, and security. This volume contains research papers from academicians, researchers, and students. There are also papers on core concepts of computer networks, intelligent system design and deployment, real-time systems, wireless sensor networks, sensors and sensor nodes, software engineering, and image processing. This book will be a valuable resource for students, academics, and practitioners in the industry working on AI applications.

Table of Contents

Frontmatter

Artificial Intelligence and Its Applications in Smart Education

Frontmatter
Building a Language Data Set in Telugu Using Machine Learning Techniques to Address Suicidal Ideation and Behaviors in Adolescents

Taking one's own life is a tragic reaction to stressful situations. The number of suicides in Telangana is increasing noticeably every year [1], and most of the victims are adolescents and young people. There is therefore an urgent need for research on suicidal ideation and preventive methods that can support mental health professionals and psychotherapists, and this paper aims to develop technological solutions to the problem. Suicides can be prevented if we can identify the mental health condition of a person with ideations and predict its severity early [2]. In this paper, we apply machine learning algorithms to categorize persons with suicidal ideations using textual questionnaire data recorded during an adolescent's visit to a mental health professional. The data is recorded in the native Telugu language during the session, as most of the cases involve illiterate patients [1, 3]. To classify patient test data with higher accuracy, a Telugu language corpus covering suicidal ideation is needed. This paper therefore gives an insight into the creation of a suicidal ideation corpus in the native Telugu language.

K. Soumya, Vijay Kumar Garg
Feature Selection and Performance Comparison of Various Machine Learning Classifiers for Analyzing Students’ Performance Using Rapid Miner

The information technology revolution, together with the affordable cost of storage devices and Internet usage, has made it easy for educational bodies to collect data on every stakeholder involved. This collected data hides many facts which, if extracted, can give new insights to every concerned contributor. Educational bodies can use educational data mining to examine and predict the performance of students, which helps them take remedial action for weaker students. In educational data mining, classification is the most popular technique. In this paper, the emphasis is on predicting students' performance using various machine learning classifiers and on a comparative analysis of the performance of these classifiers on an educational dataset.

Vikas Rattan, Varun Malik, Ruchi Mittal, Jaiteg Singh, Pawan Kumar Chand
Internet of Things (IoT) Based Automated Light Intensity Model Using NodeMcu ESP 8266 Microcontroller

The Internet of Things (IoT) is a ubiquitous technology for connecting anything from anywhere, drastically impacting life by expanding its reach into economic, commercial, and social areas. In this paper, the authors use the NodeMCU ESP8266 microcontroller, with its embedded Wi-Fi module, to model an automated lighting system. The light intensity is continuously captured and transmitted over a cloud network, and the captured data is then used for analysing voltage fluctuations using Python. The automated system works according to the intensity of light: if the light intensity falling on the light-dependent resistor (LDR) is low, the light-emitting diode (LED) switches on, and if it is high, the LED remains off, making the system energy efficient and fully automatic.

Shyla, Vishal Bhatnagar
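
The control logic described above is simple enough to sketch directly. Below is a minimal MicroPython example for an ESP8266-class board, assuming the LDR voltage divider feeds the A0 analog input and the LED sits on GPIO2; the pin choices, the threshold, and the divider orientation are illustrative assumptions, and the cloud upload step from the paper is only indicated in a comment.

```python
# MicroPython sketch for NodeMCU ESP8266 (pins and threshold are assumptions)
from machine import ADC, Pin
import time

ldr = ADC(0)              # single ADC channel (A0) wired to the LDR voltage divider
led = Pin(2, Pin.OUT)     # on-board LED on GPIO2 (active low on most NodeMCU boards)
THRESHOLD = 300           # tune against the measured ambient light level

while True:
    level = ldr.read()    # 0-1023; assumed to rise with light intensity
    if level < THRESHOLD: # low ambient light -> switch the LED on
        led.off()         # active-low LED: driving the pin low lights it
    else:                 # enough ambient light -> keep the LED off to save energy
        led.on()
    # the reading would also be pushed to a cloud service here for later analysis in Python
    time.sleep(1)
```
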
Handwritten Mathematical Symbols Classification Using WEKA

Machine learning tools have been extensively used for the prediction and classification of mathematical symbols, formulas, and expressions. Although recognition and classification of handwritten text and scripts have reached a commensurate level of maturity, the recognition of mathematical symbols and expressions has remained a stimulating and challenging task. In this work, we have used Weka, a machine learning tool, for the classification of handwritten mathematical symbols. The current literature contains only a limited amount of research on classifying handwritten mathematical text with this tool. We have endeavored to explore the achievable classification rate for handwritten symbols by comparing the results obtained from several clustering, classification, regression, and other machine learning algorithms. A comparative analysis of 15 such algorithms has been performed on a dataset of selected handwritten math symbols. The experimental results show an accuracy of 72.9215% using the Decision Table algorithm.

Sakshi, Shivani Gautam, Chetan Sharma, Vinay Kukreja
Enhancing Sociocultural Learning Using Hyperlocal Experience

In today's scenario of a global pandemic and stay-at-home orders for 1.5 billion people worldwide, the future will be hyperlocal. Experts predicting how different the world will be in five years believe that a change model grounded in mutual benefit and evolving into collective action will be led by hyperlocality. This has gradually led to a revival of a type of human interaction that had been driven to near extinction by technological and social shifts: hyperlocal collaboration. Hyperlocal is information oriented around a community, with its principal focus on the interests of the people in that community. It refers to all businesses in your vicinity: the neighboring general store, market, mall, restaurant, and other product and service providers. Hyperlocal platforms resolve the challenge of matching immediate demand with the nearest available supply in the most optimized manner. Hyperlocal content therefore has two main aspects, geography and time, which when combined help us identify types of hyperlocal content. Nowadays, hyperlocal content includes GPS-enabled mobile applications which further accentuate the geographic and time aspects. Geolocation services have evolved into one of the major sectors in our country and have transformed our lifestyle, health, and relationships. Legacy offline systems have been re-engineered with the help of technology to create a fresh and unforgettable experience. Hyperlocal has become the hottest business trend for many e-commerce firms, setting the pathway for concepts like hyperlocal delivery, hyperlocal marketing, and hyperlocal forecasting with the sole objective of giving the user an enhanced hyperlocal experience. This paper presents a sociocultural learning model which is enhanced by providing a hyperlocal experience.

Smriti Rai, A. Suhas
Subsequent Technologies Behind IoT and Its Development Roadmap Toward Integrated Healthcare Prototype Models

The Internet of Things (IoT) is an advanced research area that provides further automation, analysis, and integration of the physical world into electronic devices and computer frameworks through network infrastructure. It permits collaboration and cooperation among a massive assortment of pervasive objects over wireless and wired connections to accomplish specific objectives. Because IoT has essential properties such as distribution, openness, interoperability, and dynamicity, its development presents a considerable challenge. It is an evolution of the Internet and has received increasing attention from researchers in both academic and industrial settings. Progressive technological improvements make it possible to develop smart systems with a high capacity for communication and data collection, providing several opportunities for different IoT applications. The goal is to let everyone experience the IoT by observing and feeling the possibilities of primary use cases through iterative prototyping and, ultimately, to empower people, networks, and organizations to envision and ask the question "What's next?" This explicitly concerns the services and applications built on top of these common use cases, in order to construct a meaningful IoT for humans. This paper presents the current status of IoT frameworks with attention to the innovations, possibilities, and technologies behind IoT, a roadmap of IoT development, IoT gateways, and sustainability for prototype design, particularly in the recent healthcare sector. Moreover, this paper consolidates the current body of knowledge and identifies consistent ideas of significance and directions for future research.

Priya Dalal, Gaurav Aggarwal, Sanjay Tejasvee

Big Data and Data Mining

Frontmatter
Bug Assignment-Utilization of Metadata Features Along with Feature Selection and Classifiers

In an open-source software bug repository, many bugs are filed or reported in a single day, and handling them manually is a difficult and time-consuming process. Building an automatic bug triager is a good way to resolve these bugs efficiently using machine-learning-based classifiers. The proposed work builds the triager by utilizing bug metadata features, such as product name, keywords, bug summary, and component name, which are extracted after applying feature selection algorithms that play an important role in the triaging process. To measure prediction accuracy, ML-based classifiers such as NB, SVM, and DT are applied to the dataset. Four open-source project datasets (Eclipse, Netbeans, Firefox, and Freedesktop) are used to evaluate the En-TRAM triager, which improves on the state of the art by approximately 19%. The proposed work empirically shows that selective inclusion of highly ranked metadata fields improves bug triaging accuracy.

Asmita Yadav
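
The combination of metadata features, feature selection, and standard classifiers described above can be approximated with scikit-learn; the sketch below uses hypothetical bug records, a chi-squared ranking, and Naive Bayes, and is not a reproduction of the En-TRAM pipeline or its Eclipse/Netbeans/Firefox/Freedesktop experiments.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB

# hypothetical bug reports; the real experiments use Eclipse/Netbeans/Firefox/Freedesktop data
bugs = pd.DataFrame({
    "summary": ["NPE when saving editor state", "crash on startup with plugin",
                "UI freezes while indexing", "memory leak in debugger"] * 25,
    "product": ["Platform", "IDE", "Platform", "Debugger"] * 25,
    "component": ["UI", "Core", "UI", "Core"] * 25,
    "assignee": ["dev_a", "dev_b", "dev_a", "dev_c"] * 25,   # triage target
})

features = ColumnTransformer([
    ("summary", TfidfVectorizer(), "summary"),
    ("meta", OneHotEncoder(handle_unknown="ignore"), ["product", "component"]),
])

triager = Pipeline([
    ("features", features),
    ("select", SelectKBest(chi2, k=20)),     # keep only highly ranked features
    ("clf", MultinomialNB()),                # SVM or a decision tree can be swapped in here
])

triager.fit(bugs.drop(columns="assignee"), bugs["assignee"])
print(triager.predict(pd.DataFrame({"summary": ["editor crash while saving"],
                                    "product": ["Platform"], "component": ["UI"]})))
```
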
Role of Artificial Intelligence in Detection of Hateful Speech for Hinglish Data on Social Media

Social networking platforms provide a conduit to disseminate our ideas, views, and thoughts and to proliferate information. This has led to the amalgamation of English with natively spoken languages, and the prevalence of Hindi-English code-mixed data (Hinglish) is on the rise among urban populations all over the world. Hate speech detection algorithms deployed by most social networking platforms are unable to filter out offensive and abusive content posted in these code-mixed languages. Thus, the worldwide hate speech detection rate of around 44% drops even further for content in Indian colloquial languages and slang. In this paper, we propose a methodology for efficient detection of hate speech in unstructured code-mixed Hinglish. Fine-tuning-based approaches for Hindi-English code-mixed language are employed by utilizing contextual embeddings such as embeddings from language models (ELMo), FLAIR, and the transformer-based bidirectional encoder representations from transformers (BERT). Our proposed approach is compared against pre-existing methods, and results are compared across various datasets. Our model outperforms the other methods and frameworks.

Ananya Srivastava, Mohammed Hasan, Bhargav Yagnik, Rahee Walambe, Ketan Kotecha
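
To illustrate the fine-tuning idea for the BERT branch of the comparison, the sketch below adapts a multilingual BERT checkpoint to a two-class task with a plain PyTorch loop; the checkpoint name, the toy Hinglish samples, and the hyperparameters are assumptions, and the ELMo and FLAIR variants are not shown.

```python
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"   # checkpoint choice is an assumption
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# tiny hypothetical Hinglish samples (1 = hateful/abusive, 0 = benign)
texts = ["yeh movie bakwaas thi par acting achhi hai", "tum log sab bewakoof ho, nikal jao"]
labels = torch.tensor([0, 1])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                       # a few fine-tuning steps on the toy batch
    out = model(**batch, labels=labels)  # returns both loss and logits
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    print(model(**batch).logits.argmax(dim=-1).tolist())
```
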
From Web Scraping to Web Crawling

The World Wide Web is the largest database, comprising information in various forms from text to audio/video and many other formats. However, most of the data published on the Web is in an unstructured and hard-to-handle format and is hence difficult to extract and use for further text processing applications such as trend detection, sentiment analysis, e-commerce market monitoring, and many others. Technologies like Web scraping and Web crawling cater to the need of extracting the huge amount of information available on the Web in an automated way. This paper starts with a basic explanation of Web scraping and the four methodologies (DOM tree parsing, the semantic–syntactic framework, string matching, and computer vision/machine learning-based methods) developed over time, on which scraping solutions and tools are based. The paper also explains the term Web crawling, an extension of Web scraping, and introduces Scrapy, a Web crawling framework written in Python. It describes the workflow behind a Web crawling process initiated by Scrapy and provides a basic understanding of each component involved in a Web crawling project built using Scrapy. Further, the paper dives into the implementation of a Web crawler, namely confSpider, that is dedicated to extracting information about upcoming conferences and summits from the Internet and may be used by educational institutions to promote student awareness and participation in multidisciplinary conferences.

Harshit Nigam, Prantik Biswas
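
For readers new to Scrapy, a minimal spider in the spirit of confSpider might look like the sketch below; the start URL and CSS selectors are placeholders, since the paper's exact targets are not given here. Running it with `scrapy runspider conf_spider.py -o conferences.json` exercises the scheduler, downloader, and item pipeline workflow the paper walks through.

```python
import scrapy

class ConfSpider(scrapy.Spider):
    """Minimal sketch of a conference crawler (names, URL, and selectors are hypothetical)."""
    name = "confspider"
    start_urls = ["https://example.org/conferences"]      # placeholder listing page

    def parse(self, response):
        # each listing row is assumed to expose a title, date, and detail link
        for row in response.css("div.conference"):
            yield {
                "title": row.css("h3::text").get(),
                "date": row.css("span.date::text").get(),
                "link": response.urljoin(row.css("a::attr(href)").get()),
            }
        # follow pagination, if present
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```
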
Selection of Candidate Views for Big Data View Materialization

Big data is a large volume of heterogeneous data, which can be structured, semi-structured, or unstructured, produced at a very rapid rate by several disparate data sources. Big data requires distributed storage and processing frameworks to answer big data queries. Materializing big data views would facilitate faster, real-time processing of big data queries. However, there exist large numbers of possible views and, from among these, computing a subset of views that would optimize the processing time of big data queries is a complex problem. This paper addresses the problem by proposing a framework that reduces the large search space of all possible views by computing a comparatively smaller set of candidate views for a given query workload of a big data application. The proposed framework uses the big data view structure graph, which represents the structure of big data views and their dependencies, to compute a set of candidate views and alternate query evaluation plans for big data queries.

Akshay Kumar, T. V. Vijay Kumar
A Machine Learning Approach to Sentiment Analysis on Web Based Feedback

The advent of this new era of technology has brought forward new and convenient ways to express views and opinions. This is a major factor behind the vast influx of data that we experience every day. People have found new ways to communicate their feelings and emotions to others through written texts sent over the Internet, and this is exactly where the field of sentiment analysis comes into play. This paper focuses on analyzing the reviews of various applications on the Internet and understanding whether they are positive or negative. To achieve this objective, we initially pre-process the data by performing data cleaning and removal of stop words. The TF-IDF method is used to convert the cleaned data into a vectorised form. Finally, the machine learning algorithms Naïve Bayes, Support Vector Machine, and Logistic Regression are applied, and their comparative analysis is performed on the basis of accuracy, precision, and recall. Our proposed approach has achieved an accuracy of 92.1% and has outperformed many other existing approaches.

Arnav Bhardwaj, Prakash Srivastava
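
The described pipeline (cleaning, stop-word removal, TF-IDF vectorization, then Naïve Bayes, SVM, and Logistic Regression) maps closely onto scikit-learn; the sketch below uses short hypothetical review snippets in place of the web-based feedback dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# hypothetical review data; the paper uses web-based application feedback
texts = ["great app, works smoothly", "keeps crashing, waste of time",
         "love the new update", "terrible support and buggy"] * 50
labels = [1, 0, 1, 0] * 50

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.25, random_state=42)

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("SVM", LinearSVC()),
                  ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    model = make_pipeline(TfidfVectorizer(stop_words="english"), clf)  # stop-word removal + TF-IDF
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, accuracy_score(y_te, pred),
          precision_score(y_te, pred), recall_score(y_te, pred))
```
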
Forecasting of Stock Price Using LSTM and Prophet Algorithm

Advancement in the new era of computational techniques has offered wide opportunities to develop and deploy efficient and faster algorithmic solutions to extensive research problems in various application domains. Among the many accessible application domains, financial forecasting is one of the most desirable areas of research due to its broad reach around the globe. In this context, stock price prediction is essential for any organization with respect to financial gain, and most market participants are keen to identify the next move of the share market to attain the maximum profit from it. In the past, various machine learning and regression techniques have been applied to stock price prediction, but they have been unable to produce significant results. In the current study, we have implemented the LSTM (Long Short-Term Memory) and Prophet algorithms on stock market data. Financial time series data from the last six years has been analyzed to perform future forecasting, and the comparative results show that LSTM outperforms the Prophet algorithm for stock market prediction.

Neeraj Kumar, Ritu Chauhan, Gaurav Dubey
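
A compact Keras sketch of the LSTM half of the study is given below, assuming a univariate closing-price series, min-max scaling, and a 60-step lookback window; the synthetic series and hyperparameters are placeholders for the six years of real market data used in the paper.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(series, lookback=60):
    """Turn a 1-D price series into (samples, lookback, 1) windows and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

# hypothetical closing prices; the paper uses about six years of real stock data
prices = np.cumsum(np.random.default_rng(0).normal(0, 1, 1500)) + 100
prices = (prices - prices.min()) / (prices.max() - prices.min())   # min-max scaling

X, y = make_windows(prices)
split = int(0.8 * len(X))

model = keras.Sequential([
    layers.LSTM(50, input_shape=(X.shape[1], 1)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=32,
          validation_data=(X[split:], y[split:]), verbose=0)
print(model.evaluate(X[split:], y[split:], verbose=0))
```
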
Towards a Federated Learning Approach for NLP Applications

Traditional machine learning involves the collection of training data to a centralized location. This collected data is prone to misuse and data breach. Federated learning is a promising solution for reducing the possibility of misusing sensitive user data in machine learning systems. In recent years, there has been an increase in the adoption of federated learning in healthcare applications. On the other hand, personal data such as text messages and emails also contain highly sensitive data, typically used in natural language processing (NLP) applications. In this paper, we investigate the adoption of federated learning approach in the domain of NLP requiring sensitive data. For this purpose, we have developed a federated learning infrastructure that performs training on remote devices without the need to share data. We demonstrate the usability of this infrastructure for NLP by focusing on sentiment analysis. The results show that the federated learning approach trained a model with comparable test accuracy to the centralized approach. Therefore, federated learning is a viable alternative for developing NLP models to preserve the privacy of data.

Omkar Srinivas Prabhu, Praveen Kumar Gupta, P. Shashank, K. Chandrasekaran, D. Usha
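
To ground the federated approach, here is a minimal NumPy sketch of federated averaging on a toy logistic-regression "sentiment" task: each client trains on its own data and only weight vectors travel to the server, which averages them by client size. The data, model, and round count are assumptions; the paper's actual training infrastructure is not reproduced.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """Run a few epochs of logistic-regression gradient descent on one client's data."""
    w = w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - y) / len(y)          # gradient of the log loss
        w -= lr * grad
    return w

def fed_avg(w, clients):
    """One federated round: each client trains locally, the server averages the weights."""
    sizes = np.array([len(y) for _, y in clients])
    locals_ = np.stack([local_update(w, X, y) for X, y in clients])
    return np.average(locals_, axis=0, weights=sizes)   # weight by client data size

# toy example with synthetic "sentiment" features (hypothetical data)
rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ w_true > 0).astype(float)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(20):
    w = fed_avg(w, clients)
print("learned weights:", w)
```
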

Challenges of Smart Cities Future Research Directions

Frontmatter
Analysis of Groundwater Quality Using GIS-Based Water Quality Index in Noida, Gautam Buddh Nagar, Uttar Pradesh (UP), India

The current research aims to establish a GIS-based water quality index (WQI) by analysing 51 groundwater samples collected from various sectors of the Gautam Buddh Nagar district, Noida, U.P., India. To determine the WQI, the groundwater samples were subjected to detailed physico-chemical testing of seven parameters: pH, total hardness, total alkalinity, chlorides, turbidity, carbonates, and bicarbonates. The values of the examined samples were compared with the water quality requirements of the Bureau of Indian Standards (BIS). The results indicate that 84.4% of water samples fall in the good-quality category and only 15.6% fall under the poor-water category, exceeding the acceptable and permissible limits of BIS-approved drinking water quality standards (Indian Standard Drinking Water IS:10500-2012). The computed WQI values ranged from 47.12 to 192.104. In addition, machine learning techniques helped us understand the relation between pH and the WQI by comparing pH fluctuation with water quality. The study shows that the region's groundwater needs treatment before it is used for drinking and other purposes.

Kakoli Banerjee, M. B. Santhosh Kumar, L. N. Tilak, Sarthak Vashistha
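
One common weighted arithmetic formulation of a water quality index is sketched below; the standard limits, the ideal values, and the parameter subset are illustrative assumptions, and the paper's exact weighting scheme and BIS limits may differ.

```python
# BIS-style acceptable limits (assumed illustrative values; the paper's standards may differ)
standards = {"pH": 8.5, "total_hardness": 300, "total_alkalinity": 200,
             "chloride": 250, "turbidity": 5}
ideal = {"pH": 7.0}          # ideal value is 7 for pH and 0 for the other parameters

def wqi(sample):
    """Weighted arithmetic water quality index for one groundwater sample."""
    params = list(standards)
    k = 1.0 / sum(1.0 / standards[p] for p in params)      # proportionality constant
    total = 0.0
    for p in params:
        w = k / standards[p]                                # unit weight (weights sum to 1)
        vo = ideal.get(p, 0.0)
        q = 100.0 * (sample[p] - vo) / (standards[p] - vo)  # quality rating
        total += w * q
    return total

print(round(wqi({"pH": 7.8, "total_hardness": 420, "total_alkalinity": 260,
                 "chloride": 180, "turbidity": 2}), 2))
```
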
An Artificial Neural Network Based Approach of Solar Radiation Estimation Using Location and Meteorological Details

This paper proposes a methodology to estimate monthly average global solar radiation using an artificial neural network. Due to its abundance, solar energy has long been at the cutting edge among renewable energy types and has applications in several fields. Solar energy devices depend completely on the amount of solar radiation received, so optimization of solar energy is possible only when solar radiation is estimated well in advance. This is challenging since solar radiation is location- and season-dependent. The current study addresses the scarcity of meteorological stations, which limits the availability of radiation-measuring devices at locations of interest to researchers. An ANN-based solar radiation estimation model is proposed here using the Levenberg–Marquardt training algorithm. The study is performed on six stations in Bihar, India. The Neural Fitting Tool (NF Tool) of MATLAB R2016a is used for simulation, and the dataset is collected from the FAO, UN. The proposed model shows R values of 0.9974, 0.97909, 0.90589, and 0.9925, slope (m) values of 0.99, 1.0, 0.87, and 0.99, and intercept (c) values of 0.0086, 0.021, 0.021, and 0.0066 for 'training', 'validation', 'testing', and 'all', respectively. The mean square error (MSE) is found to be 0.07727, 0.08092, and 0.08076 for 'training', 'validation', and 'testing', respectively.

Amar Choudhary, Deependra Pandey, Saurabh Bhardwaj
Applications of Machine Learning and Artificial Intelligence in Intelligent Transportation System: A Review

Due to the tremendous population growth in the country, the use of vehicles and other means of transportation has increased, which has led to traffic congestion and road accidents. Hence, there is a demand for intelligent transportation systems that can provide safe and reliable transportation while limiting pollution, CO2 emissions, and energy consumption. This paper focuses on providing an overview of how Artificial Intelligence (AI) and Machine Learning (ML) can be applied to develop an intelligent transportation system that addresses traffic congestion and road safety. We review various ML approaches for detecting road anomalies to avoid obstacles, predicting real-time traffic flow for smart and efficient transportation, detecting and preventing road accidents to ensure safety, using smart city lights to save energy, and building smart infrastructure for efficient transportation. Next, we review various AI approaches, such as safety and emergency management systems to protect the public and autonomous vehicles to provide economical and reliable transportation. We then discuss smart parking management, which can be used to find parking spots conveniently, and incident detection, which detects traffic incidents or accidents in real time and provides a report. Finally, we conclude with predictive models and how the algorithms utilize sensor data to develop an intelligent transportation system.

Divya Gangwani, Pranav Gangwani
Analyzing App-Based Methods for Internet De-Addiction in Young Population

With recent advancements in technology and the excessive use of smartphones, all internet-based applications like WhatsApp, Facebook, Netflix, etc. are one tap away, resulting in increased internet usage on average, especially among the young population. This has affected the cognitive and affective processes of users and has caused various problems like loss of focus, fatigue, burning sensations in the eyes, severe harm to mental health, reduced response to events happening around them, and many more. An unconventional method of recovering from internet addiction could be the use of mobile applications that help users monitor their usage and motivate them to have better self-control. A number of such applications, henceforth called apps, are available that claim to help recover from internet addiction; however, their efficacy in curbing internet use has not been studied previously. This study is primarily based on assessing the efficiency of these app-based recovery methods from internet addiction. Using statistical analysis and polynomial regression, it was found that these apps do help in lowering internet use. This effect is largely seen in the first week of app use, after which significant reduction is not observed.

Lakshita Sharma, Prachi Hooda, Raghav Bansal, Shivam Garg, Swati Aggarwal
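
The central statistical step, fitting a polynomial trend to daily usage after an app is installed, is straightforward to reproduce; the usage numbers below are hypothetical and only illustrate the reported pattern of a first-week drop followed by a plateau.

```python
import numpy as np

# hypothetical daily screen time (hours) for one user across four weeks of app use
days = np.arange(28)
usage = np.array([7.5, 7.2, 7.4, 6.9, 6.5, 6.4, 6.1,     # week 1: steep drop
                  5.9, 5.8, 5.9, 5.7, 5.8, 5.6, 5.7,     # weeks 2-4: levelling off
                  5.6, 5.7, 5.5, 5.6, 5.6, 5.5, 5.6,
                  5.5, 5.6, 5.5, 5.5, 5.6, 5.5, 5.5])

coeffs = np.polyfit(days, usage, deg=2)          # quadratic fit captures the flattening trend
trend = np.poly1d(coeffs)
print("fitted curve:", trend)
print("estimated usage on day 7 vs day 27:", round(trend(7), 2), round(trend(27), 2))
```
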
Revolution of AI-Enabled Health Care Chat-Bot System for Patient Assistance

A chat-bot is like a personal virtual assistant that can conduct a conversation through textual or auditory methods. Chatbots are one of the important tools of AI, as they can interact directly with humans and provide them with a considerate solution to their problem. These kinds of AI programs are designed to simulate how a human behaves as a conversational partner, after passing the Turing test. Chatbots are generally accessed via public virtual assistants such as Google Assistant, Microsoft Cortana, and Apple Siri, or via various individual organizations' apps and websites. The process of building a chat-bot involves two tasks: understanding the user's intent and predicting the correct solution. In the development of a chat-bot, the first task is to understand the input entered by the user; the prediction is made depending on this first task. A key feature of the Dialogflow API is the follow-up intent: using follow-up intents, one can create a decision tree that helps the chat-bot predict a disease. Through this project, one can get insights into the actual working of chat-bots, explore various NLP techniques, and understand how to harness the power of NLP tools (Bennet Praba et al in Int J Innov Technol Explor Eng 9:3470–3473, (2019) [1]). One can thoroughly enjoy building a chat-bot from scratch and learning about advancements in the domain of machine learning and general AI.

Rachakonda Hrithik Sagar, Tuiba Ashraf, Aastha Sharma, Krishna Sai Raj Goud, Subrata Sahana, Anil Kumar Sagar
Air Quality Prediction Using Regression Models

Due to urbanization, cities all over the world are under pressure to stay livable, and the quality of air in modern cities has become a remarkable concern. Air pollution is defined as the presence of harmful substances in the atmosphere that adversely affect the health of humans and other living beings. Thus, it is necessary to constantly monitor the air quality of a city to provide a smart and healthy environment to citizens. An air quality monitoring system first collects information about the concentration of air pollutants from the environment and then evaluates the raw data; after evaluation, the system provides an assessment or prediction of air pollution in that particular area. In this work, different regression models have been applied to the pollutant data to find the most suitable model for predicting air quality. Multiple linear regression, support vector regression, and decision tree regression are used for prediction. The simulation results show that decision tree regression is the best model among the three: it generates good-quality output and can be used for predicting air quality in any air quality monitoring system.

S. K. Julfikar, Shahajahan Ahamed, Zeenat Rehena
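
A hedged sketch of the three-model comparison with scikit-learn, using synthetic pollutant readings in place of real monitoring data (the feature set, target construction, and hyperparameters are assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# hypothetical pollutant concentrations (e.g. PM2.5, NO2, SO2, CO) and an AQI-like target
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 4))
y = 120 * X[:, 0] + 60 * X[:, 1] ** 2 + 30 * X[:, 2] + rng.normal(0, 5, 500)

for name, model in [("Multiple linear regression", LinearRegression()),
                    ("Support vector regression", SVR(kernel="rbf", C=10)),
                    ("Decision tree regression", DecisionTreeRegressor(max_depth=6))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {r2:.3f}")
```
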
Anomaly Detection in Videos Using Deep Learning Techniques

People's concern for safety in public places has been increasing, and anomaly detection in crowded places has therefore become very important. This paper provides an approach towards automatically identifying suspicious behaviors in a crowded environment. In light of this, we have validated two powerful deep-learning-based models, namely CNN and VGG16. For this purpose, we collected a number of CCTV videos for detecting and differentiating between normal and anomalous activities. The collected videos were then used to train the VGG16 and CNN models towards achieving the best accuracy.

Akshaya Ravichandran, Suresh Sankaranarayanan
Unsupervised Activity Modelling in a Video

In today's modern era, human activity recognition is widely used in video surveillance for purposes such as safety and security; the proposed work can also be used in health care and entertainment environments. In unsupervised activity recognition, the system recognizes the activity in the current frame and compares it to the previous frame; if there is any sudden change in activity, it reports the current frame number. The proposed system is divided into three phases: pre-processing, feature extraction, and recognition. A Gaussian mixture model is used to recognize moving objects via background subtraction, after which a set of rules is applied to recognize the different activities. Two types of datasets are created, a two-activity and a three-activity dataset, and the approach shows good results on both.

Aman Agrawal
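
A small OpenCV sketch of the Gaussian mixture background-subtraction step is shown below, with a crude frame-to-frame change test standing in for the paper's activity rules; the video path, parameters, and threshold are assumptions.

```python
import cv2

# Gaussian-mixture background subtraction on a surveillance clip (file name is a placeholder)
cap = cv2.VideoCapture("activity_clip.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

frame_no, prev_area = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    mask = subtractor.apply(frame)               # foreground (moving object) mask
    mask = cv2.medianBlur(mask, 5)               # remove salt-and-pepper noise
    area = int((mask > 0).sum())                 # crude measure of motion in this frame
    if prev_area and area > 3 * prev_area:       # sudden jump in motion between frames
        print(f"possible activity change at frame {frame_no}")
    prev_area = area

cap.release()
```
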
Performance Comparison of Various Feature Extraction Methods for Object Recognition on Caltech-101 Image Dataset

Object recognition system helps to find the label of the object in an image. The identification of the object depends on the features extracted from the image. Features play a very important role in the object recognition system. The more relevant features an object has, the better the recognition system will be. Object recognition system mainly works in two major phases—feature extraction and image classification. Features may be the color, shape, texture, or some other information of the object. There are various types of feature extraction methods used in object recognition. These methods are classified as handcrafted feature extraction methods and deep learning feature extraction methods. This article contains a comprehensive study of various popular feature extraction methods used in object recognition system. Various handcrafted methods used in the paper are scale invariant feature transformation (SIFT), speeded-up robust feature (SURF), oriented FAST and rotated BRIEF (ORB), Shi-Tomasi corner detector, and Haralick texture descriptor. The deep learning feature extraction methods used in the paper are ResNet50, Xception, and VGG19. In this article, a comparative study of various popular feature extraction methods is also presented for object recognition using five multi-class classification methods—Gaussian Naïve Bayes, k-NN, decision tree, random forest, and XGBoosting classifier. The analysis of the performance is conducted in terms of recognition accuracy, precision, F1-score, area under curve, false positive rate, root mean square error, and CPU elapsed time. The experimental results are evaluated on a standard benchmark image dataset Caltech-101 which comprises 8677 images grouped in 101 classes.

Monika, Munish Kumar, Manish Kumar
Leukemia Prediction Using SVNN with a Nature-Inspired Optimization Technique

Blood smear examination is a basic test that helps diagnose various diseases. Presently, this is done manually, though automated and semiautomated blood cell counters are also in vogue. Automated counters are very costly and require specialized, proper maintenance as well as trained manpower. Thus, in small laboratories and peripheral settings, blood smears are mostly prepared manually and evaluated under the microscope by trained medicos. Though this method is easily available and cost-effective, there are always chances of variation in the result due to differences in slide preparation methods and the experience of the pathologist. To overcome the limitations of manual blood smear examination, various studies are being undertaken that aim not only to identify different blood cells but also to specifically identify blast cells, which are the cornerstone of the diagnosis of acute leukemia, by automated methods. This study proposes a leukemia detection method using a salp swarm optimized support vector neural network (SSA-SVNN) classifier to identify leukemia in its initial stages. Adaptive thresholding on the LUV-transformed image was used to segment the preprocessed smear. From the segments, features (shape, area, texture, and empirical mode decomposition) are extracted, and blast cells are detected based on these features. The accuracy, specificity, sensitivity, and MSE of the proposed method are found to be 0.96, 1, 1, and 0.1707, respectively, which implies an improvement in leukemia detection compared to other methods such as KNN, ELM, Naive Bayes, and SVM.

Biplab Kanti Das, Prasanta Das, Swarnava Das, Himadri Sekhar Dutta
Selection of Mobile Node Using Game and Graph Theory for Video Streaming Application

Solving the bandwidth scarcity problem is the need of the hour. In this paper, we propose an architecture that offloads computational resources from the cloud server to an edge node to reduce the power consumption and data traffic of the receiving mobile node. The edge node may be a local edge server of the ISP or a mobile node. A cooperative game-theoretic framework is proposed to identify mobile nodes receiving the same video streaming content. The identified mobile nodes create different clusters based on their similarity and then select a cluster head that acts as the edge node. The edge node receives the video stream locally from the ISP and distributes it among the other nodes in the cluster. Analytically, the proposed architecture reduces data traffic consumption significantly.

Bikram P. Bhuyan, Sajal Saha
Attentive Convolution Network-Based Video Summarization

The availability of smart phones with embedded video capturing mechanisms along with gigantic storage facilities has led to the generation of a plethora of videos. This deluge of videos has grasped the attention of the computer vision research community to deal with the problem of efficiently browsing, indexing, and retrieving the intended video. Video summarization has come up as a solution to the aforementioned issues, where a short summary video is generated containing important information from the original video. This paper proposes a supervised attentive convolution network for summarization (ACN-SUM) framework for binary labeling of video frames. ACN-SUM is based on an encoder–decoder architecture where the encoder is an attention-aware convolution network module, while the decoder comprises a deconvolution network module. In ACN-SUM, the self-attention module captures the long-range temporal dependencies among frames, and concatenating the convolution network and attention module feature maps results in more informative encoded frame descriptors. These encoded features are passed to the deconvolution module to generate frame labels for keyframe selection. Experimental results demonstrate the efficiency of the proposed model against state-of-the-art methods. The performance of the proposed network has been evaluated on two benchmark datasets.

Deeksha Gupta, Akashdeep Sharma
Static Video Summarization: A Comparative Study of Clustering-Based Techniques

The spectacular increase in video data, due to the availability of low-cost, large-storage video capturing devices, has led to problems of indexing and browsing videos. In the past two decades, video summarization has evolved as a solution to cope with the challenges imposed by big video data. Video summarization deals with the identification of relevant and important frames or shots for efficient storage, indexing, and browsing of videos. Among various approaches, clustering-based methods have gained popularity in the field of video summarization owing to their unsupervised nature, which makes the process independent of the expensive and tedious task of obtaining annotations for videos. This study is an attempt to comprehensively compare various clustering-based unsupervised machine learning techniques along with an evaluation of the performance of selected local and global features in video summarization. Quantitative evaluations indicate the effectiveness of global features (color and texture) as well as local features (SIFT) along with six clustering methods of different nature. The proposed models are empirically evaluated on the Open Video (OV) dataset, a standard dataset for static video summarization.

Deeksha Gupta, Akashdeep Sharma, Pavit Kaur, Ritika Gupta
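
As one concrete member of the clustering-based family compared in this study, the sketch below clusters per-frame HSV colour histograms with k-means and keeps the frame closest to each centroid; the video path and parameter choices are assumptions, and the local/global feature variants evaluated in the paper are not reproduced.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def keyframes(video_path, k=5):
    """Cluster per-frame HSV colour histograms and keep the frame closest to each centroid."""
    cap = cv2.VideoCapture(video_path)
    feats, frames = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
        feats.append(cv2.normalize(hist, hist).flatten())
        frames.append(frame)
    cap.release()

    feats = np.array(feats)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    summary = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        summary.append(members[np.argmin(dists)])      # representative frame index per cluster
    return sorted(summary), [frames[i] for i in sorted(summary)]

idx, _ = keyframes("sample_video.mp4")    # placeholder path
print("selected keyframe indices:", idx)
```
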
A Review: Hemorrhage Detection Methodologies on the Retinal Fundus Image

Diabetic retinopathy (DR) is a microvascular condition in which the retina is affected by fluid leaking from fragile blood vessels. Clinically, retinal hemorrhages are one of the earliest indications of diabetic retinopathy, and the hemorrhage count is used to indicate the severity of the disease. Early detection of retinal hemorrhages can prevent incurable blindness in DR patients, but retinal hemorrhage detection is still a challenging task, and a highly reliable, accurate, platform-independent detection method remains an open problem. In this research article, we review the principal methodologies used to diagnose retinal hemorrhages in diabetic retinopathy screening. This review helps researchers to develop high-quality retinal hemorrhage screening methods in the future.

Niladri Sekhar Datta, Koushik Majumder, Amritayan Chatterjee, Himadri Sekhar Dutta, Sumana Chatterjee
A Study on Retinal Image Preprocessing Methods for the Automated Diabetic Retinopathy Screening Operation

In recent days, diabetic retinopathy (DR) has become a principal cause of incurable blindness in diabetic patients. Manual screening of DR is a time-consuming and resource-demanding activity, so setting up reliable automated screening is an open issue for researchers. In the automated analysis of retinal images, the preprocessing stage plays a vital role, and the overall success of the screening operation depends on it. Preprocessing the retinal image prior to screening is a common task, as noisy images degrade screening performance. This paper reviews different DR screening methods, points out the most widely used preprocessing schemes in the field, and finally indicates the most effective preprocessing scheme for DR screening as per the data analysis.

Amritayan Chatterjee, Niladri Sekhar Datta, Himadri Sekhar Dutta, Koushik Majumder, Sumana Chatterjee
FFHIApp: An Application for Flash Flood Hotspots Identification Using Real-Time Images

Extreme climate changes have become the new norm in today’s world. As a result, flash flood disasters continue to increase. It has become imperative to devise a quick disaster response system for minimizing the magnitude of damage and reducing the difficulties of human life. In this paper, we propose an application using android technology that provides real-time updates and prompt flow of authentic information of flood-ravaged areas to the rescue personnel or common people. The application accepts images belonging to flood-affected regions from rescue personnel, volunteers, etc. It authenticates and then filters those images using deep learning techniques. The severity of floods is estimated and plotted on a map using the associated location information of the images. The data is analyzed by using clustering techniques and visualized on the map. Subsequently, the affected areas of the flash flood are identified. Their peripheries are mapped so that these hotspots can be targeted for immediate relief operations.

Rohit Iyer, Parnavi Sen, Ashish Kumar Layek

Infrastructure and Resource Development and Management Using Artificial Intelligence and Machine Learning

Frontmatter
An Optimized Controller for Zeta Converter-Based Solar Hydraulic Pump

Due to advancements in renewable technology, the agricultural sector can independently harvest its own energy for running its equipment. One such piece of equipment, the hydraulic pump that waters the field, can be run by a solar array. This whole mechanism can be controlled through many present-day control techniques, such as soft computing techniques. The main aim of the present work is to identify the best controller among fuzzy and genetic algorithm techniques for fast response to the set value according to climatic conditions and the nature of the field area. A mathematical model of the zeta converter is provided for studying the performance of the control techniques; this modelling is done through the state-space averaging (SSA) technique. The work also contributes a comparative study of the zeta and SEPIC converters, which are chosen so that the output is maintained at a constant voltage over the range of input voltage. The work has been simulated in MATLAB/Simulink.

K. Sudarsana Reddy, B. Sai Teja Reddy, K. Deepa, K. Sireesha
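
For reference, the generic averaged model that the state-space averaging technique yields for a PWM converter alternating between two linear networks over a switching period is written below; the zeta and SEPIC converters' specific matrices are not given in the abstract, so only the general form is shown.

```latex
% Averaged model over one switching period with duty ratio D
% ((A_1, B_1, C_1) active for DT_s, (A_2, B_2, C_2) for (1-D)T_s):
\[
\begin{aligned}
\dot{\bar{x}} &= \bigl[D A_1 + (1-D) A_2\bigr]\,\bar{x} + \bigl[D B_1 + (1-D) B_2\bigr]\,v_{\mathrm{in}},\\
\bar{y}       &= \bigl[D C_1 + (1-D) C_2\bigr]\,\bar{x}.
\end{aligned}
\]
```
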
Automated Detection and Classification of COVID-19 Based on CT Images Using Deep Learning Model

Medical image classification is one of the important application areas of deep learning. In CT scan images, structures overlapping each other are eliminated, providing quality information which helps classify images accurately. Diagnosing COVID-19 is the need of the hour, and manual testing consumes a lot of time; a deep learning approach to COVID CT image classification can reduce this time and provide faster results than conventional methods. This paper proposes a fine-tuning model, containing dropout and dense layers on top of a pretrained model, which is validated on a publicly built COVID-19 CT scan collection containing 544 COVID and NON-COVID images. The obtained results are compared with other models such as VGG16 and approaches such as transfer learning. The fine-tuned VGG-19 model performed better than the other models, with an overall accuracy of 90.35 ± 0.91, a COVID-19 classification accuracy (recall) of 92.55 ± 1.25, an overall F1-score of 88.75 ± 1.5, and an overall precision of 88.75 ± 1.5.

A. S. Vidyun, B. Srinivasa Rao, J. Harikiran
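
A hedged Keras sketch of the fine-tuning setup described above (a frozen ImageNet backbone plus dropout and dense layers) follows; the head sizes, optimizer settings, and the commented-out data loading are assumptions rather than the paper's exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG19

# frozen ImageNet backbone with a small dropout + dense head, broadly in the spirit of the paper
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # fine-tune only the added layers first

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),                    # the dropout layer mentioned in the abstract
    layers.Dense(256, activation="relu"),   # dense layer size is an assumption
    layers.Dense(1, activation="sigmoid"),  # COVID vs NON-COVID
])
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.Recall()])

# train_ds / val_ds would come from e.g. keras.utils.image_dataset_from_directory(...)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
model.summary()
```
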
Comparative Study of Computational Techniques for Smartphone Based Human Activity Recognition

Human activity recognition (HAR) has become popular because of its diverse applications in health care, geriatric care, the security of women and children, and many more areas. With advances in technology, traditional sensors have been replaced by smartphones: the inbuilt accelerometer detects orientation or acceleration, and the gyroscope detects angular rotational velocity. In this study, a comparative analysis of computational techniques is carried out on the publicly available smartphone-based human activity recognition dataset. Traditional and contemporary computational techniques (support vector machine, decision tree, random forest, multi-layer perceptron, CNN, LSTM, and CNN-LSTM) for HAR are explored to compare each model's accuracy in classifying a particular human activity. The support vector machine outperforms the others in most of the activity recognition tasks.

Kiran Chawla, Chandra Prakash, Aakash Chawla
Machine Learning Techniques for Improved Breast Cancer Detection and Prognosis—A Comparative Analysis

Breast cancer prevails as the most widespread and second deadliest cancer in women. The typical symptoms of breast cancer are minimal, and it is curable when the tumor is small, hence screening is crucial for timely detection. Since delayed diagnosis contributes to a considerable number of deaths, a number of cross-disciplinary techniques have been introduced in medical science to aid healthcare experts in the swift detection of breast cancer. A vast amount of data is collected in the process of breast cancer detection and therapy through consultation reports, histopathological images, blood test reports, mammography results, etc. If properly used, this data can generate highly powerful prediction models that can serve as a support system to assist doctors in early breast cancer diagnosis and prognosis. This paper discusses the need for machine learning in breast cancer detection and presents a systematic review of recent and notable works for precise detection of breast cancer, followed by a comparative analysis of the machine learning models covered in these studies. We then perform breast cancer detection and prognosis on three benchmark Wisconsin datasets using seven popular machine learning techniques and note the findings. In our experimental setup, K-Nearest Neighbor and Random Forest perform with the highest accuracy of 97.14% on the Wisconsin Breast Cancer Dataset (original). Moreover, random forest displays the best performance over all three datasets. Finally, the paper summarizes the challenges faced, the conclusions drawn, and the prospective scope of machine learning in breast cancer detection and prognosis.

Noushaba Feroz, Mohd Abdul Ahad, Faraz Doja
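
The headline experiment is easy to approximate with scikit-learn, which bundles the Wisconsin diagnostic breast cancer data; note that this bundled dataset is only a stand-in for the three Wisconsin datasets used in the paper, and the fold count and hyperparameters are assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# scikit-learn ships the Wisconsin *diagnostic* dataset, a close stand-in for the paper's data
X, y = load_breast_cancer(return_X_y=True)

for name, model in [("k-NN", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
                    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: 10-fold accuracy = {acc:.4f}")
```
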
Multiclass Classification of Histology Images of Breast Cancer Using Improved Deep Learning Approach

Breast cancer has become a serious threat to women's lives worldwide, and early diagnosis is necessary to reduce the mortality rate. In the era of medical imaging and with the advent of artificial intelligence systems, it has become easier to detect breast cancer at an early stage. Histopathology is one of the best imaging modalities for breast cancer detection, as images are easy to store in digital format for a long time. Breast cancer is classified into two main categories, benign and malignant, and these two classes are further divided into subclasses. In this paper, we propose an improved deep learning model for breast cancer multiclass classification. The proposed model uses a two-level approach, block-based and image-based. The block-based approach is used to reduce overhead with lower-cost processing. An ensemble learning approach, combining various pre-trained models without compromising the accuracy of the system, is used for feature extraction. Final classification into eight classes is done with an image-based improved deep learning approach.

Jyoti Kundale, Sudhir Dhage
Enhancing the Network Performance of Wireless Sensor Networks on Meta-heuristic Approach: Grey Wolf Optimization

Sensing technology has brought many advancements to human lives. The wireless sensor network (WSN) has proven to be a promising solution for acquiring information from remote areas. However, the energy constraints of sensor nodes have limited the otherwise widespread application of WSNs. A great number of efforts have been reported for achieving energy efficiency in WSNs, ranging from conventional approaches to meta-heuristic methods for enhancing network performance. In this paper, we present a comparative evaluation of state-of-the-art meta-heuristic approaches that help achieve energy efficiency in the network. We propose a grey wolf optimization (GWO-P) algorithm together with an empirical analysis of the existing methods PSO, GA, and WAO, which will help readers select the appropriate approach for their applications. It is also shown that GWO-P outperforms the competing algorithms on various other performance metrics, such as stability period, network lifetime, and expectancy.

Biswa Mohan Sahoo, Tarachand Amgoth, Hari Mohan Pandey
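
For readers new to grey wolf optimization, a NumPy sketch of the canonical GWO update (positions pulled toward the alpha, beta, and delta wolves) on a toy objective is given below; the paper's GWO-P modification and its WSN-specific fitness function are not described in the abstract and are not reproduced here.

```python
import numpy as np

def gwo(objective, dim=10, wolves=20, iters=100, lb=-10, ub=10, seed=0):
    """Canonical grey wolf optimizer on a box-constrained minimization problem."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(wolves, dim))
    for t in range(iters):
        fitness = np.array([objective(x) for x in X])
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / iters                      # exploration factor decreases from 2 to 0
        for i in range(wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])
                new += leader - A * D              # pull toward each leading wolf
            X[i] = np.clip(new / 3.0, lb, ub)      # average of the three pulls
    fitness = np.array([objective(x) for x in X])
    return X[np.argmin(fitness)], fitness.min()

# toy usage: minimize the sphere function (a stand-in for a WSN energy/fitness model)
best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)))
print(best_f)
```
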
Deep Learning-Based Computer Aided Customization of Speech Therapy

Video frame interpolation is a computer vision technique used to synthesize intermediate frames between two subsequent frames. This technique has been extensively used for video upsampling, video compression, and video rendering. We present here an unexplored application of frame interpolation, using it to join different phoneme videos in order to generate speech videos. Such videos can be used for speech entrainment, as well as to create lip-reading video exercises. We propose an end-to-end convolutional neural network employing a U-net architecture that learns optical flows and generates intermediate frames between two different phoneme videos. The quality of the model is evaluated using measures like the Structural Similarity Index (SSIM) and the peak signal-to-noise ratio (PSNR), and it performs favorably, with an SSIM score of 0.870 and a PSNR score of 33.844.

Sarthak Agarwal, Vaibhav Saxena, Vaibhav Singal, Swati Aggarwal
Face Mask Detection Using Deep Learning

With the spread of the coronavirus disease 2019 (COVID-19) pandemic throughout the world, social distancing and wearing a face mask have become crucial to prevent the spread of the disease. Our goal is to develop a better way to detect face masks. In this paper, we compare the available networks for building an efficient one-stage face mask detector. The detection scheme follows preprocessing, feature extraction, and classification. The mask detector has been built using deep learning, specifically ResNetV2, as the base pre-trained model, on top of which we add our own CNN. We use OpenCV's ImageNet to extract faces from video frames and our trained model to classify whether the person is wearing a mask or not. We also propose an object removal algorithm to reject predictions below an absolute confidence threshold and accept only predictions above it. For training, we use a face mask dataset consisting of 680 images with masks and 686 images without masks. The results show that the mask detector has an accuracy of 99.9%. We have also used other pre-trained networks, such as MobileNetV2, as the base network and compared the results. ResNet50 gives state-of-the-art face mask detection performance, higher than other face detectors.

Sandip Maity, Prasanta Das, Krishna Kumar Jha, Himadri Sekhar Dutta
Deep Learning-Based Non-invasive Fetal Cardiac Arrhythmia Detection

Non-invasive fetal electrocardiography (NI-FECG) has the potential to offer added clinical information to assist in detecting fetal distress, and thus offers novel diagnostic possibilities for prenatal treatment of an arrhythmic fetus. The core aim of this work is to explore whether reliable classification of arrhythmic (ARR) and normal rhythm (NR) fetuses can be achieved from multi-channel NI-FECG signals without canceling the maternal ECG (MECG) signals. A state-of-the-art deep learning method has been proposed for this task. The open-access NI-FECG dataset used for the present work has been taken from PhysioNet.org; each recording has one maternal ECG signal and 4–5 abdominal channels. The raw NI-FECG signals are preprocessed to remove disruptive noise without considerably altering either the fetal or maternal ECG components. In the proposed method, time–frequency images (spectrograms) are computed from the signals, instead of using the raw NI-FECG signals directly, and are standardized before being fed to a CNN classifier to perform fetal arrhythmia classification. Various performance evaluation metrics, including precision, recall, F-measure, accuracy, and the ROC curve, have been used to assess model performance. The proposed CNN-based deep learning model achieves a high precision (96.17%), recall (96.21%), F1-score (96.18%), and accuracy (96.31%). In addition, the influence of varying batch size on model performance was evaluated; the results show that a batch size of 32 outperforms batch sizes of 64 and 128 on this task.

Kamakshi Sharma, Sarfaraz Masood
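
The time-frequency preprocessing step can be sketched with SciPy as below; the synthetic signal, sampling rate, and STFT window settings are assumptions standing in for the PhysioNet recordings and the paper's exact spectrogram parameters.

```python
import numpy as np
from scipy.signal import spectrogram

# hypothetical single-channel abdominal ECG segment; real recordings come from PhysioNet
fs = 1000                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

f, seg_t, Sxx = spectrogram(ecg, fs=fs, nperseg=256, noverlap=128)
image = 10 * np.log10(Sxx + 1e-12)           # log-power time-frequency image

# each such image (optionally resized and standardized) becomes one training sample for the CNN
print(image.shape)                           # (frequency bins, time frames)
```
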

Security and Privacy Challenges and Data Analytics

Frontmatter
Minimizing Energy Consumption for Intrusion Detection Model in Wireless Sensor Network

Security is one of the major concerns with today's technology. Wireless sensor networks (WSNs) are often deployed in critical areas, where the network can be compromised by malicious attacks. Due to their unattended deployment in remote places, security plays a major role, and the primary line of defense is the intrusion detection system (IDS). Existing IDSs cannot perform efficiently due to the mechanisms applied, so a novel approach is designed and modeled to obtain high WSN performance. In the proposed work, a probabilistic model that provides a direct way to visualize the model using joint probability, the Bayesian network, is combined with a stochastic process model, the hidden Markov model. This combined approach is a graphical model represented with nodes and edges. The results obtained by applying the novel approach show a high detection rate compared with existing algorithms such as the weighted support vector machine (WSVM), the K-means classifier, and knowledge-based IDS (KBIDS), together with maximum throughput and lower transmission delay. Experiments are carried out for different attacks with various training and test data. Thus, the novel approach gives overall high performance in WSNs.

Gauri Kalnoor, S. Gowrishankar
A Blockchain Framework for Counterfeit Medicines Detection

The emergence of counterfeit medicines has caused a significant setback globally. These drugs may be contaminated, contain the wrong ingredient, or have no active ingredient at all, endangering approximately a million lives per year. The World Health Organization (WHO) estimates that 73 billion euros (79.26 billion USD) worth of counterfeit medicines are traded annually. The imperfect supply chain is one of the primary causes of this issue. The present system does not keep a proper record of a drug or vaccine as it is manufactured, produced, distributed, and finally delivered to the consumer, and there is no record of the change in ownership of drugs from manufacturer to consumer. Due to the lack of transparency, data are not shared between systems. These loopholes play a significant role in the production and distribution of counterfeit medicines. Blockchain is an emerging technology that can help solve this issue: using blockchain, drugs can be tracked from manufacture until delivery to the consumer. An authorized blockchain can be used to store transactions, allow trusted parties to join the system, and push all records onto the chain.

Tejaswini Sirisha Mangu, Barnali Gupta Banik
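
To make the tamper-evidence argument concrete, here is a minimal Python sketch of a hash-linked custody ledger for a drug batch; the block fields and the example ownership transfers are illustrative assumptions, not the framework proposed in the paper.

```python
import hashlib, json, time

def block_hash(body):
    """Deterministic SHA-256 over the block contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def new_block(prev_hash, transactions):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,   # e.g. ownership transfers of a drug batch
        "prev_hash": prev_hash,
    }
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    return block

# hypothetical custody trail for one medicine batch
chain = [new_block("0" * 64, [{"batch": "BATCH-001", "from": "Manufacturer", "to": "Distributor"}])]
chain.append(new_block(chain[-1]["hash"], [{"batch": "BATCH-001", "from": "Distributor", "to": "Pharmacy"}]))

def verify(chain):
    """Any tampering with an earlier block breaks the recorded hash or the prev_hash link."""
    for i, b in enumerate(chain):
        body = {k: v for k, v in b.items() if k != "hash"}
        if b["hash"] != block_hash(body):
            return False
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))   # True
chain[0]["transactions"][0]["to"] = "Unknown"
print(verify(chain))   # False: the stored hash no longer matches the altered record
```
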
Static and Dynamic Learning-Based PDF Malware Detection classifiers—A Comparative Study

Malicious software still accounts for a substantial threat to the cyber world. Document files are among the most widely used vectors to infect systems with malware: the attacker tries to blend malicious code with benign document files to carry out the attack. The portable document format (PDF) is the most commonly used format for sharing documents due to its portability and light weight. In this modern era, attackers implement highly advanced techniques to obfuscate malware inside document files, making it difficult for malware detection classifiers to classify the document efficiently. These classifiers are of two main types, namely static and dynamic. In this paper, we survey various static and dynamic learning-based PDF malware classifiers to understand their architecture and working procedures. We also present the structure of PDF files to understand the sections of a PDF document where malicious code can be implanted. Finally, we perform a comparative study of the surveyed classifiers by observing their true positive percentages and F1 scores.

N. S. Vishnu, Sripada Manasa Lakshmi, Awadhesh Kumar Shukla
MOLE: Multiparty Open Ledger Experiment, Concept and Simulation Using BlockChain Technology

Key sectors like medicine, finance, education, and IoT use blockchain-based applications to derive the many benefits this technology has to offer. Blockchain technology, the foundation of Bitcoin, has recently received extensive attention in research. Blockchain provides benefits in collaboration, trustability, identification, credibility, and transparency. The technology acts as an untampered ledger which gives the end user permission for various operations to be managed in a decentralized manner. Various areas are centered around this, covering domains like financial services, the hospitality sector, healthcare management, e-governance, and so on. However, there are several technical hindrances to this emerging technology, such as security and scalability issues, that need to be taken care of. This paper presents an insight into the concept of blockchain, highlights the initiatives undertaken by the Government of India, and showcases a real-time simulation of MOLE, the multiparty open ledger experiment, based on blockchain technology using MHRD's Virtual Lab.

Rahul Johari, Kanika Gupta, Suyash Jai
Intrusion Detection Based on Decision Tree Using Key Attributes of Network Traffic

As computer usage increases, network security is becoming a huge problem. Attacks on networks are also increasing over time; these attacks are intrusions that cause a great deal of damage to the entire system. An intrusion detection system (IDS) is used to protect data and networks from such attacks and to secure systems from intruders. Data mining technology is widely used to examine and analyze enormous volumes of network data; it is an efficient method that, applied to IDS, can process a large amount of network traffic and reduce the burden of manual analysis. This paper compares different data mining techniques used to implement an IDS. Information gain and the ranker algorithm are used for attribute selection, J48 and random forest classify the data for the NIDS, and the dataset used is KDDCup99. We selected 9 attributes from the KDDCup dataset and carried out the experiment on the WEKA tool. The results show that the detection accuracy with only 9 attributes is almost the same as with all 41 attributes.
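A rough scikit-learn analogue of this pipeline (the paper itself uses WEKA's information gain ranker with J48 and random forest, so the code below is only an assumed equivalent) ranks attributes, keeps the top 9, and cross-validates both classifiers; X and y stand for the preprocessed KDDCup99 features and labels:

import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier          # closest analogue of J48
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def top_k_attributes(X, y, k=9):
    gain = mutual_info_classif(X, y)                      # information-gain-style ranking
    return np.argsort(gain)[::-1][:k]

def evaluate(X, y, k=9):
    cols = top_k_attributes(X, y, k)
    for clf in (DecisionTreeClassifier(), RandomForestClassifier()):
        acc = cross_val_score(clf, X[:, cols], y, cv=10).mean()
        print(type(clf).__name__, round(acc, 4))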

Ritu Bala, Ritu Nagpal
An Extensive Review of Wireless Local Area Network Security Standards

Wireless local area networks (WLANs) have become a vital topic since the World Wide Web was introduced in 1995. There are many types of wireless networks, such as the wireless local area network, wireless communication, wireless application protocol (WAP), wireless transactions, and WMAN. Among these, WLAN has gained the most popularity and is widely used today in many areas for different purposes: offices, airports, libraries, universities, hospitals, military campuses, and many more. On the other hand, securing all these functionalities has become a crucial topic in today's WLANs because no physical border surrounds the WLAN channel, so information can be leaked by attackers very easily. For this purpose, IEEE introduced well-known security standards such as WEP, WPA, and WPA2 for securing communication between two endpoints. This survey gives a brief introduction to the types of WLAN security, studies the types of attacks on wireless LANs and their vulnerabilities, and reviews the existing security standards with a focus on WPA2, which fixes the problems of WEP and WPA, before exploring the vulnerabilities of each standard; finally, the paper ends with useful mitigations and suggestions on how to improve wireless LAN security.

Sudeshna Chakraborty, Maliha Khan, Amrita, Preeti Kaushik, Zia Nasseri
Security Concerns at Various Network Phases Through Blockchain Technology

Among various recent technologies, IoT is considered the most popular, connecting heterogeneous devices such as vehicles, smart phones, and tickets in order to automate their tasks according to the environment. In most IoT applications, data management and entity control are handled by a centralized authority, where security is a major issue. In order to secure the data from centralized threats such as man-in-the-middle attacks and single points of failure, a decentralized security scheme is needed. Blockchain is currently considered the most secure decentralized network, ensuring security and transparency while transmitting data among entities. In this paper, we discuss the need for blockchain security at different phases of the network, such as data, users, and devices, by analyzing their security metrics and key challenges.

Anju Devi, Geetanjali Rathee, Hemraj Saini

Smart Infrastructure and Resource Development and Management Using Artificial Intelligence and Machine Learning

Frontmatter
Developing an Evaluation Model for Forecasting of Real Estate Prices

Real estate prices are an important indicator of the economic health of a region. The real estate industry is also growing at a very fast pace and needs the confluence of technology to provide knowledge-enabled services. We explore the drivers of real estate housing prices. The present investigation is conducted on a UCI real estate pricing dataset of 414 unique observations. A multivariate OLS regression is performed with control variables such as house cost, age, distance to the MRT station, number of accommodation stores, and walking and geographic directions. The article provides evidence that almost all the control variables are crucial in predicting house price; however, it does not provide evidence that the geographic coordinate (longitude) influences sample houses' prices. We argue that house cost is the most important determinant of real estate housing prices, while age, distance to the MRT station, and the number of accommodation stores also help to explain the price of real estate housing. The findings of this study provide investors and other stakeholders with important implications of real estate housing pricing in the best interests of capital appreciation.
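A hedged sketch of the regression setup described above, assuming the 414-record dataset has been loaded into a pandas DataFrame df with hypothetical column names:

import statsmodels.api as sm

def fit_price_model(df):
    predictors = ["house_age", "distance_to_mrt", "n_stores", "latitude", "longitude"]
    X = sm.add_constant(df[predictors])                  # add the intercept term
    return sm.OLS(df["price_per_unit_area"], X).fit()

# model = fit_price_model(df); print(model.summary())   # coefficients and p-values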

Ruchi Mittal, Praveen Kumar, Amit Mittal, Varun Malik
Memetic Optimal Approach for Economic Load Dispatch Problem with Renewable Energy Source in Realistic Power System

The electric power industry is shifting from conventional energy sources to combined renewable sources, and this shift poses one of the most challenging and difficult problems of the electric power system: generating and dispatching load at the most economical cost. The main objective of economic load dispatch in power system operation, control, and planning is to fulfil the energy load demand at the lowest price while satisfying all the equality and inequality constraints. This paper presents the mathematical design of the optimal load dispatch problem, considering energy generation from conventional power plants and renewable (solar) power plants, along with all the essential constraints of a realistic power system. In the proposed research, the memetic optimizer developed by combining the Slime Mould Algorithm with the pattern search algorithm (SMA-PS) has been tested on the integrated thermal-solar economic load dispatch problem, and it has been experimentally observed that the proposed memetic optimizer provides a cost-effective solution to this complex economic load dispatch problem.
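The objective being minimized can be stated in the standard quadratic fuel-cost form (a generic textbook formulation given here for orientation, not reproduced from the paper):

\min F_T=\sum_{i=1}^{N}\left(a_iP_i^{2}+b_iP_i+c_i\right)
\quad\text{subject to}\quad
\sum_{i=1}^{N}P_i+P_{solar}=P_D+P_L,
\qquad P_i^{min}\le P_i\le P_i^{max}

where $a_i, b_i, c_i$ are the fuel-cost coefficients of thermal unit $i$, $P_i$ its output, $P_{solar}$ the solar contribution, $P_D$ the load demand, and $P_L$ the transmission losses; SMA-PS searches this constrained space for the least-cost dispatch.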

Shivani Sehgal, Aman Ganesh, Vikram Kumar Kamboj
High-Throughput and Low-Latency Reconfigurable Routing Topology for Fast AI MPSoC Architecture

Multiprocessor system-on-chip (MPSoC) architectures are widely used in various applications due to their capacity to deliver aggressive performance at low power cost. Systems-on-chip (SoCs) are generally used in consumer electronics, integrating multiple functionalities. The main issue faced by MPSoC architectures is the high delay in data transmission caused by network congestion. To overcome this, an effective network topology with adequate routing needs to be developed to achieve better performance; introducing artificial intelligence can further improve the performance of routing protocols. In this work, the mesh topology is integrated into a torus topology to achieve optimal routing of data packets from the source to the destination router of the MPSoC, and XY-YX routing is applied over the network topology to obtain the shortest path through the MPSoC. A technique for predicting the response packet path is used to minimize delay during data transmission, and the performance of the proposed design is compared with three existing architectures: MXY-SoC, K Means-MPSoC, and SDMPSoC. The same approach can be extended with artificial intelligence algorithms for transmission of data from one point to another. The results obtained using this technique lead to high throughput and low latency for a fast AI MPSoC architecture.
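For orientation, a minimal sketch of the XY routing decision on a 2D mesh (a generic illustration, not the authors' hardware design); the request path resolves the X coordinate first, while the YX response path applies the same logic with the axes swapped:

def xy_next_hop(cur, dst):
    cx, cy = cur
    dx, dy = dst
    if cx < dx:
        return (cx + 1, cy)   # move east until the destination column is reached
    if cx > dx:
        return (cx - 1, cy)   # move west
    if cy < dy:
        return (cx, cy + 1)   # then move north
    if cy > dy:
        return (cx, cy - 1)   # or south
    return cur                # packet has arrived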

Paurush Bhulania, M. R. Tripathy, Ayoub Khan
Comparison of Various Data Center Frameworks

Our world is becoming ever more dependent on IT technologies. With the growing use of the Internet, an exponentially growing volume of data must be stored and processed. Data centers gained importance with the emergence of Internet services: they are responsible for the storage, management, and dissemination of data. Since data is increasing exponentially, data centers are increasingly overloaded and growing in size, and meeting the growing demands of customers indirectly contributes to global pollution through heavy power consumption. Because data centers consume so much power, many organizations want to make their data center operations as green as possible. Thus, an efficient green data center framework is needed for the efficient design of data centers. To this end, a comparison of green data center frameworks is carried out, evaluating each of them. The comparison shows that the frameworks have scope for enhancement in terms of components, attributes, energy-efficiency metrics, and implementation procedure. A new data center framework is proposed and implemented, taking into account the disadvantages of the existing data center frameworks.

Monalisa Kushwaha, Archana Singh, B. L. Raina, Avinash Krishnan Raghunath

Soft Computing

Frontmatter
A New Solution for Multi-objective Optimization Problem Using Extended Swarm-Based MVMO

Multi-objective optimization problems are those having more than one objective to be optimized together. For such problems, rather than a single optimal solution, there exists a set of solutions that are trade-offs among the different objectives. Several solution techniques exist, including evolutionary algorithms, which provide Pareto optimal solutions after evolving continuously through many generations of solutions. Mean-variance mapping optimization is a stochastic optimization technique whose swarm hybrid variant works well on single-objective optimization problems. This paper aims at extending the swarm hybrid variant of mean-variance mapping optimization into a multi-objective optimization technique by incorporating non-dominated sorting and an adaptive local search strategy. The proposed solution is evaluated on standard benchmarks such as DTLZ and ZDT. The evaluation results establish that the proposed solution generates Pareto fronts that are comparable to the true Pareto fronts.
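The non-dominated sorting step mentioned above rests on the Pareto-dominance test, sketched minimally below for a minimization problem (the extended MVMO additionally adds ranking and the adaptive local search):

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    return [p for p in front if not any(dominates(q, p) for q in front if q is not p)]

print(non_dominated([(1, 4), (2, 2), (3, 3), (4, 1)]))   # (3, 3) is dominated by (2, 2)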

Pragya Solanki, Himanshu Sahu
Improving Software Maintainability Prediction Using Hyperparameter Tuning of Baseline Machine Learning Algorithms

Software maintainability is a prime trait of software, measured as the ease with which new code lines can be added, obsolete ones can be deleted, and those having errors can be corrected. The significance of software maintenance is increasing in today's digital era, leading to the use of advanced machine learning (ML) algorithms for building efficient models to predict maintainability, although several baseline ML algorithms are already in use for software maintainability prediction (SMP). In the current study, an effort has been made to improve the existing baseline models using hyperparameter tuning, which chooses the best set of hyperparameters for an algorithm; a hyperparameter is a parameter whose value controls the training process. This study employs default hyperparameter settings as well as grid search-based hyperparameter tuning. Five regression-based ML algorithms, i.e., Random Forest, Ridge Regression, Support Vector Regression, Stochastic Gradient Descent, and Gaussian Process Regression, have been implemented on two commercial object-oriented datasets, namely QUES and UIMS, for SMP. To evaluate the performance, a comparison has been made between the baseline models and the models developed after hyperparameter tuning, based on three accuracy measures, viz., R-Squared, Mean Absolute Error (MAE), and Root Mean Squared Logarithmic Error (RMSLE). The results depict that the performance of all five baseline ML algorithms improved after applying hyperparameter tuning, as supported by the improved R-squared, MAE, and RMSLE values obtained in this study. The best results are obtained when the grid search method is used for tuning. On average, the R-squared, MAE, and RMSLE measures improved by 20.24%, 12.26%, and 30.28%, respectively, for the QUES dataset; for the UIMS dataset, average improvements of 6.27%, 15.71%, and 16.39% were achieved in terms of R-squared, MAE, and RMSLE, respectively.
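A hedged sketch of grid search tuning for one of the five regressors (Random Forest), with an illustrative parameter grid rather than the exact grid used in the study:

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

def tune_random_forest(X, y):
    grid = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}
    search = GridSearchCV(RandomForestRegressor(random_state=0), grid,
                          scoring="r2", cv=5)
    search.fit(X, y)
    return search.best_params_, search.best_score_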

Kirti Lakra, Anuradha Chug
Karaoke Machine Execution Using Artificial Neural Network

Musicians and vocalists face the challenge of practicing their singing when instrumentalists are not always available or affordable. Karaoke can help solve this problem and cater to the demands of these singers. When the vocals of a song are removed and only the accompaniment or instrumental background is left, the resulting music is called karaoke; it is used as a form of entertainment wherein users sing along with the background music. This paper proposes a method to generate karaoke using the artificial neural network (ANN) tool in MATLAB, based on the out-of-phase stereo method. First, the training data is generated with the out-of-phase stereo method using Audacity to check its effectiveness; then the same is implemented in MATLAB, and the generated data is used to train the artificial neural network. There is room for improvement in the proposed system, as it has been implemented with limited training data.
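The out-of-phase (centre-cancellation) step can be sketched in Python for clarity (the paper itself uses Audacity and MATLAB): subtracting the right channel from the left cancels centre-panned vocals while leaving side-panned instruments:

import numpy as np
from scipy.io import wavfile

def remove_vocals(in_path, out_path):
    rate, stereo = wavfile.read(in_path)                     # expects a 2-channel WAV
    left = stereo[:, 0].astype(np.float32)
    right = stereo[:, 1].astype(np.float32)
    karaoke = (left - right) / 2.0                           # centre content cancels out
    wavfile.write(out_path, rate, karaoke.astype(np.int16))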

R. Sripradha, Plauru Surya, Payreddy Supraja, P. V. Manitha
A Review on Deep Learning Models for Short-Term Load Forecasting

Short-term load forecasting (STLF) is a part of the smart grid (SG) system used in maintenance and management operations. Traditional machine learning (ML) techniques entail complicated and time-consuming processes of feature extraction and selection. Deep learning (DL) techniques of artificial neural networks (ANN) have shown great potential in STLF, and the modernization of the SG and the availability of huge load data offer an opportunity for these DL techniques. Different techniques based on DL models have been proposed for STLF in the past few years. In this paper, a survey of DL models for STLF is presented, covering papers published from 2016 to 2019. Common DL architectures such as the stacked auto-encoder (SAE), recurrent neural network (RNN), convolutional neural network (CNN), and deep belief network (DBN) are frequently applied in combination with clustering methods. These DL architectures are briefly explained with a diagram before presenting a review of related papers. The strengths and limitations of the reviewed methods are discussed. Based on this review, the gaps in the existing research work on DL-based STLF are identified and future directions are described. This paper is expected to serve as an initial guide for new researchers interested in the application of deep learning in STLF.

Ksh. Nilakanta Singh, Kh. Robindro Singh
An Evolutionary Approach to Combinatorial Gameplaying Using Extended Classifier Systems

Extended classifier system (XCS) is an extension of a popular online rule-based machine learning technique, the learning classifier system (LCS), in which a classifier's fitness is based on its accuracy instead of the prediction itself, and genetic algorithm (GA) and reinforcement learning (RL) components are utilized for exploration and learning, respectively. With the emergence of increasingly intricate rule-based learning techniques, there is a need to examine feasible methods of learning that can overcome the challenges posed by complex scenarios while supporting online performance. Checkers is a strategic, combinatorial game with a high branching factor and a complex state space that provides a promising avenue for scrutinizing novel approaches. This paper presents a preliminary investigation into the feasibility of XCS in such complex avenues, taking 6 × 6 checkers as a specific case study. The XCS agent was adapted to this problem, trained against a random agent, and was able to perform well against the alpha–beta pruning algorithm at various depths as well as human agents of different skill levels (beginner, intermediate, and advanced).
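The alpha–beta pruning opponent referred to above, in its generic minimax form (the game-specific children and evaluate functions are assumed, and this is not the authors' exact implementation):

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                         # beta cut-off: the opponent will avoid this branch
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break                             # alpha cut-off
    return value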

Karmanya Oberoi, Sarthak Tandon, Abhishek Das, Swati Aggarwal
Metadata
Title
Applications of Artificial Intelligence and Machine Learning
Editors
Ankur Choudhary
Arun Prakash Agrawal
Rajasvaran Logeswaran
Dr. Bhuvan Unhelkar
Copyright Year
2021
Publisher
Springer Singapore
Electronic ISBN
978-981-16-3067-5
Print ISBN
978-981-16-3066-8
DOI
https://doi.org/10.1007/978-981-16-3067-5