
2021 | Book

Proceedings of Second International Conference on Computing, Communications, and Cyber-Security

IC4S 2020

Edited by: Dr. Pradeep Kumar Singh, Prof. Dr. Sławomir T. Wierzchoń, Sudeep Tanwar, Dr. Maria Ganzha, Prof. Joel J. P. C. Rodrigues

Publisher: Springer Singapore

Book series: Lecture Notes in Networks and Systems


About this book

This book features selected research papers presented at the Second International Conference on Computing, Communications, and Cyber-Security (IC4S 2020), organized at Krishna Engineering College (KEC), Ghaziabad, India, along with academic associates Southern Federal University, Russia; IAC Educational, India; and ITS Mohan Nagar, Ghaziabad, India, during 3–4 October 2020. It includes innovative work from researchers, leading innovators, and professionals in the areas of communication and network technologies, advanced computing technologies, data analytics and intelligent learning, the latest electrical and electronics trends, and security and privacy issues.

Table of Contents

Frontmatter

Communication and Network Technologies

Frontmatter
Smart Aging Wellness Sensor Networks: A Near Real-Time Daily Activity Health Monitoring, Anomaly Detection and Alert System

With the growing automation of the modern world, activity modeling is being used in technology to serve various purposes. One field that stands to benefit greatly from daily activity modeling and daily-living activity analysis is the monitoring of seasonal behavior patterns of elderly people, which can further be utilized in their remote health analysis and monitoring. Today's demand is for a system with minimal human interaction and automatic anomaly detection and alerting. The proposed research work emphasizes diagnosing elderly persons' daily behavioral patterns by observing their day-to-day routine activities with respect to time, location and context. To improve the accuracy of the system, numerous sensing and actuator units have been deployed in elderly homes. In this research paper, we have proposed a unique sensor fusion technique to monitor seasonal, social, weather-related and wellness observations of routine tasks. A novel daily activity learning model has been proposed which can record contextual data observations at various locations of a smart home and alert caretakers in the case of anomaly detection. We have analyzed monthly data of two elderly smart homes with more than 5000 test samples. Results acquired from the investigation validate the accuracy and the efficiency of the proposed system, which are recorded for 20 activities.

Sharnil Pandya, Mayur Mistry, Ketan Kotecha, Anirban Sur, Asif Ghanchi, Vedant Patadiya, Kuldeep Limbachiya, Anand Shivam
Ant Colony Optimization for Traveling Salesman Problem with Modified Pheromone Update Formula

The Traveling Salesman Problem is a combinatorial problem from which various other real-world problems have been derived. It is a well-known NP-complete problem, and its instances are used in various fields around the globe. Various optimization techniques are used to solve this problem. Ant Colony Optimization (ACO) is an optimization method that is very useful in solving various artificial intelligence problems and obtaining optimized solutions, and several variants have been proposed since its introduction in 1991. When using the traditional ACO pheromone update formula on large Traveling Salesman Problem datasets, one might obtain an optimal solution at the cost of a great amount of time. In this paper, we propose a modification of the basic Ant Colony Optimization pheromone update formula for discovering an optimized solution for the Traveling Salesman Problem, using the probability derived from the pheromone values of succeeding nodes. The updated formula also helps reduce the time needed to obtain the optimal solution compared to the traditional formula.

Rahil Parmar, Naitik Panchal, Dhruval Patel, Uttam Chauhan
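
The classic pheromone update that this paper modifies can be sketched in a few lines. The snippet below shows only the standard evaporation-and-deposit rule for a symmetric TSP; the evaporation rate, deposit constant and example tours are illustrative placeholders, and the authors' succeeding-node probability weighting is not reproduced here.

```python
import numpy as np

# Standard ACO pheromone update: evaporation, then deposit along each ant's tour.
n_cities, rho, Q_DEP = 5, 0.5, 100.0          # assumed evaporation rate and deposit constant
pheromone = np.ones((n_cities, n_cities))

def update_pheromone(tours, lengths):
    global pheromone
    pheromone *= (1.0 - rho)                          # evaporation: tau <- (1 - rho) * tau
    for tour, length in zip(tours, lengths):
        deposit = Q_DEP / length                      # shorter tours deposit more pheromone
        for i, j in zip(tour, tour[1:] + tour[:1]):   # close the cycle back to the start
            pheromone[i, j] += deposit
            pheromone[j, i] += deposit                # symmetric problem

update_pheromone([[0, 1, 2, 3, 4], [0, 2, 1, 4, 3]], [12.0, 15.0])
print(pheromone)
```
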
Face Mask Detection Using Deep Learning During COVID-19

With the onset of the COVID-19 pandemic, the entire world is in chaos and is discussing novel ways to prevent virus spread. People around the world are wearing masks as a precautionary measure to avoid catching the infection. While some are following this measure, others still are not, despite official advice from the government and public health agencies. In this paper, a face mask detection model that can accurately detect whether a person is wearing a mask is proposed and implemented. The model architecture uses MobileNetV2, a lightweight convolutional neural network, and therefore requires less computational power and can easily be embedded in computer vision systems and mobile devices. As a result, it can serve as a low-cost mask detector that identifies whether a person is wearing a mask and acts as a surveillance system, since it works on both real-time images and videos. The face mask detection model achieved a high accuracy of 99.98% on training data, 99.56% on validation data, and 99.75% on testing data.

Soham Taneja, Anand Nayyar, Vividha, Preeti Nagrath
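
As a rough illustration of the kind of architecture described above, the sketch below stacks a small binary classification head on a frozen MobileNetV2 backbone in Keras; the input size, head layout and hyperparameters are assumptions, not the authors' exact configuration.

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Frozen MobileNetV2 backbone with a small "mask / no-mask" head.
base = MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                     # transfer learning: train only the head

x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation="relu")(x)
x = Dropout(0.5)(x)
out = Dense(1, activation="sigmoid")(x)    # probability that the face wears a mask

model = Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)
```
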
Compact Millimeter-Wave Low-Cost Ka-Band Antenna for Portable 5G Communication Gadgets

This paper presents the design and simulation of a low-cost 5G millimeter-wave planar antenna with a defected ground structure operating in the Ka-band portion of the millimeter-wave spectrum. The antenna resonates at multiple Ka-band frequencies, especially at 27.80 GHz, 30.86 GHz and 33.74 GHz, with return losses of −21.41 dB, −24.03 dB and −22.27 dB, respectively, and has an impedance bandwidth of 53.5%. The presented antenna has been designed on a low-cost FR4 substrate with a dielectric constant of 4.4 and a dissipation factor of 0.004. The overall profile of the designed structure is 30 × 40 × 0.8 mm³. The proposed antenna is a compact structure with a peak gain of 3.79 dBi and is well suited for use with 5G mobile devices and gadgets. Other parameters such as the radiation pattern, VSWR, polar plot and surface current density have also been discussed. The good performance of the presented antenna with respect to return loss (S11), peak gain and the associated radiation pattern makes it a compact, high-quality design for use in 5G millimeter-wave mobile devices.

Raqeebur Rehman, Javaid A. Sheikh, Khurshed A. Shah, Zahid A. Bhat, Shabir A. Parah, Shahid A. Malik
Lightweight De-authentication DoS Attack Detection Methodology for 802.11 Networks Using Sniffer

Wireless networks are prone to many types of attacks. The open access of wireless technology puts the whole wireless environment under threat. New wireless technologies like wireless sensor networks (WSN) and the Internet of Things (IoT) face security challenges. The Internet of Things is yet to be standardized because of the unavailability of security solutions to certain wireless threats. The Internet of Things uses different technologies for communication such as Bluetooth, Wi-Fi, Li-Fi, and 6LoWPAN; of these, Wi-Fi is mainly preferred. But Wi-Fi itself has certain security issues. One of them is the nature of the Management Frames in the 802.11 standard: they are unencrypted and hence easily interpretable. This property of Wi-Fi leads to many attacks, such as De-authentication attacks and Dis-association attacks. This paper provides an approach to detect De-authentication DoS attacks in Wi-Fi-based IoT networks using the sniffing concept. For network sniffing, a Python-based tool known as Scapy is used. Attacks and detection are performed in a real-time environment. We used Kali Linux tools and Scapy for real-time attacks. The De-authentication DoS attacks are performed on a Wi-Fi-based IoT network, and the detections are triggered by a Scapy-based sniffer program. Our algorithm also detects spoofed De-authentication DoS attacks.

Zakir Ahmad Sheikh, Yashwant Singh
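
A minimal Scapy-based sniffer in the spirit of the approach described above might simply count 802.11 de-authentication frames per source address and alert above a threshold; the interface name and threshold are placeholders, and the authors' spoofing checks are not reproduced.

```python
from collections import defaultdict
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Deauth

deauth_counts = defaultdict(int)
THRESHOLD = 20          # de-auth frames from one source before alerting; tune per network

def handle(pkt):
    if pkt.haslayer(Dot11Deauth):
        src = pkt[Dot11].addr2
        deauth_counts[src] += 1
        if deauth_counts[src] > THRESHOLD:
            print(f"[ALERT] possible de-authentication flood from {src}")

# Requires root and a wireless card in monitor mode; the interface name is an assumption.
sniff(iface="wlan0mon", prn=handle, store=False)
```
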
Power Distribution Control for SIMO Wireless Power Transfer Systems

Transmission of power wirelessly has emerged as a prominent technology in recent decades; however, the selective flow of power in systems with multiple receivers has always been a conflicting issue. This paper proposes a favorable method to control power distribution among multiple receivers using different resonating frequencies for the transmitter and receiver coils. A multiple-load system with different parameters has been analyzed theoretically, using Advanced Design System (ADS), and experimentally through practical implementation. Upon detailed analysis, it is found that power transfer efficiency reaches its maximum value at the resonant frequency of the receiver under consideration, irrespective of the resonant frequency of the common transmitter. It has also been deduced that the driving frequency of a coil has a great influence on the amount of power it can receive: the smaller the difference between the driving frequency and the resonant frequency of the coil, the less isolated the coil is.

Sanjog Ganotra
Performance Analysis of WRAN in Light of Full Duplex Capability

IEEE 802.22 is a wireless regional area network (WRAN) standard based on cognitive radios, allowing sharing of unused spectrum allocated to television services on a non-interfering basis. This standard is helpful for providing data services to sparsely populated rural areas. IEEE 802.22 has three operations: transmission, reception and sensing of data. The standard operates in half-duplex mode, where not only do transmission and reception by a participating station occur at different times, but sensing of a channel is also not allowed while transmission is going on at any node. This reduces spectrum usage. Recent developments have shown that full-duplex communication, i.e., concurrent transmission and reception by a node, is possible even in wireless networks. Full-duplex operation helps increase throughput and reduce collisions. In this paper, we attempt to explore the possibility of exploiting the full-duplex capabilities of wireless nodes in IEEE 802.22 WRAN by allowing a node to perform channel sensing while it is engaged in transmitting data. The simulation results show a noticeable performance enhancement of WRAN, with throughput improved by 9%, which leads to efficient usage of the spectrum without increasing interference.

Khusali Obhalia, Mayur M. Vegad, Prashant B. Swadas
Design of Planer Wide Band Micro-Strip Patch Antenna for 5G Wireless Communication Applications: Review

In this paper, wideband antennas with reduced size and an irregular coplanar microstrip feed are presented for high-speed communication applications. Two antennas are designed, with elliptical and semi-elliptical radiator patches. The antennas' radiation patterns are almost omnidirectional over the ultra-wideband range with appropriate gain. A wideband antenna structure for 4th- and 5th-generation applications is examined; the antenna is fabricated on a low-cost, readily available FR-4 substrate. The tunneling effect of a thin ENZ network matching two microstrip lines with different impedance characteristics has been examined. A very compact open-loop band-pass planar filter with an unequal frequency response, covering the 2.5–2.6 GHz range for 4th-generation and the 3.6–3.7 GHz range for 5th-generation purposes, has also been discussed. To achieve sharper cut-off frequencies, one infinite and three finite transmission zeros are produced at the upper and lower ends of the 4th- and 5th-generation passbands. 5th-generation communication demands lightweight, high-gain antennas. In this article, for 28 GHz, a basic patch antenna structure has been used to guarantee reliability, versatility, high efficiency, a good shape, a low profile and high gain. For 5G applications in Wi-Fi and WiMAX, an array of broadband printed dipole antennas, a 5G compact MIMO antenna, impedance matching using ENZ metamaterials, a planar UWB antenna for high-speed communications and a UWB slot antenna may be used.

Praveen Tiwari, Praveen Kumar Malik
Planar UWB Antenna for MIMO/Diversity Applications

Ultra-wideband (UWB) communication technology is found to be suitable for short-range high-speed data transfer, but due to the limitation on maximum transmit power, the range and channel capacity are matters of concern. If MIMO technology is used in a UWB-based system, the channel capacity and data transmission rate can be improved. After 2002, when the Federal Communications Commission (FCC) allocated a dedicated frequency range to UWB systems, a lot of research has been carried out on MIMO antennas for UWB systems. However, several challenges appeared, such as issues of size and compactness when the number of antenna elements was increased, and the problem of mutual coupling between closely spaced radiating elements. The use of printed antenna technology was found to reduce the size and area of the antenna significantly. Similarly, several isolation techniques have been incorporated for significant reduction of mutual coupling as well as the correlation coefficient. An extensive survey is presented here to appreciate and acknowledge the research carried out in the design of UWB antennas for MIMO applications.

Pramod Singh, Rekha Agarwal
Design and Analysis of Wearable Textile UWB Antenna for WBAN Communication Systems

In the present scenario, with the application-centric approach of modern communication devices, the integration of electronic gadgets with wearable accessories is in high demand. This research paper presents the design and simulation of a wearable antenna using a flexible textile substrate and an analysis of various antenna performance parameters. The design and simulation of the proposed jeans-substrate-based textile antenna have been performed using CST Microwave Studio 2018. The main emphasis of the current research work is to demonstrate a small UWB antenna design with an overall size of 20 mm × 22 mm × 1.07 mm. The resonant frequencies of the designed dual-band antenna are observed to be 4.45 and 8.75 GHz, with a wide fractional bandwidth of 103.5% in the ultra-wideband range. The presented microstrip patch textile antenna is compact, robust and flexible, which makes it a suitable choice as a body-worn antenna for WBAN communication systems in wireless health monitoring.

Bhawna Tiwari, Sindhu Hak Gupta, Vipin Balyan

Advanced Computing Technologies

Frontmatter
Stock Prices Prediction from Financial News Articles Using LSTM and XAI

The stock market is very complex and volatile. It is impacted by positive and negative sentiments which are based on media releases. The scope of stock price analysis relies upon the ability to recognize stock movements. It is based on technical fundamentals and understanding the hidden trends which the market follows. Stock price prediction (Vachhani et al in Mach Learn-Based Stock Market Anal Short Surv (2020) [1]) has consistently been an extremely dynamic field of exploration and research work; however, arriving at the ideal degree of precision is still an enticing challenge. In this paper, we propose a combined effort of using efficient machine learning techniques coupled with a deep learning technique, long short-term memory (LSTM), to predict stock prices with a high level of accuracy. Sentiments derived by users from news headlines have a tremendous effect on the buying and selling patterns of traders, as they are easily influenced by what they read. Hence, fusing one more dimension of sentiments along with technical analysis should improve the prediction accuracy. LSTM networks have proved to be a very useful tool for learning and predicting temporal data with long-term dependencies. In our work, the LSTM model uses historical stock data along with sentiments from news items to create a better predictive model.

Shilpa Gite, Hrituja Khatavkar, Shilpi Srivastava, Priyam Maheshwari, Neerav Pandey
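
For orientation, a bare-bones Keras LSTM regressor over windows of price features plus a news-sentiment score could look like the sketch below; the window length, feature count and layer sizes are assumptions rather than the authors' configuration.

```python
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

# X would hold sliding windows of price features plus a daily news-sentiment score,
# y the next-day closing price; window length and feature count are assumptions.
timesteps, n_features = 30, 6

model = Sequential([
    LSTM(64, input_shape=(timesteps, n_features)),
    Dense(1),                                   # regression on the next closing price
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.1)
```
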
Precision Agriculture: Methodologies, Practices and Applications

IoT-enabled modern agricultural methodologies can change current agriculture practices by automating the entire agricultural process, from crop management and water irrigation to making better decisions based on real-time monitoring of environmental, soil and landscape conditions. In recent times, technology-enabled precision agriculture solutions have enabled a paradigm shift from static and manual agriculture methodologies to automated precision-oriented agricultural methodologies using the latest technologies such as the Internet of agricultural things, AI-based agricultural analytics, cloud computing and WSN-enabled crop monitoring and control. In the proposed review work, a rigorous and detailed assessment has been conducted to identify research gaps and analyze the latest technology-enabled PA methodologies and applications. Furthermore, we present an IoAT-based PA model comprising five layers: the first layer depicts the physical layer devices, the second layer describes security protocols, the third layer highlights efficient data management practices, the fourth layer provides effective irrigation models, and the final layer discusses technology-enabled water management services. In the end, along with future directions, we present a categorical analysis of the conducted review in the form of graphical results, along with gained experiences and lessons learnt.

Sharnil Pandya, Mayur Mistry, Pramit Parikh, Kashish Shah, Gauravsingh Gaharwar, Ketan Kotecha, Anirban Sur
Predicting Customer Spent on Black Friday

The paper provides insights into the expenditures of consumers on a Black Friday. On Black Friday, most retail shops are hugely crowded; therefore, it becomes very difficult to control the crowd and to maintain proper stocks of a variety of products. This study analyzes shopping patterns based on various categories like age, occupation, marital status, city, etc. It focuses on prediction models, developing an accurate and efficient algorithm that analyzes customers' past spending and predicts the future spending of customers with the same attributes. In this study, a random forest regressor is used to predict the expenditure of consumers. Further, this study discusses the data preprocessing and visualization techniques used to achieve the desired results. With the help of this study, any retail store participating in Black Friday can improve its efficiency and prepare itself to handle consumers.

Ashish Arora, Bhupesh Bhatt, Divyanshu Bist, Rachna Jain, Preeti Nagrath
Analyzing the Need of Edge Computing for Internet of Things (IoT)

Whenever we need to monitor something or sense data such as traffic, temperature, or pollution, the Internet of Things (IoT) comes into the picture. IoT devices can only collect and transfer data for analysis, but today's increased computing capability enables them to perform complex real-time computations. The concept of edge computing means running fewer processes in the cloud and transferring the rest of the processes to the user's computer, IoT device, or server side. In this paper, recent advances in computing technology and their impact on IoT are discussed. I have also created a classification of computing paradigms by analyzing and classifying the current literature, thus revealing the distinctive and supportive features of various IoT computational models. I have also highlighted the important requirements for increasing the use of edge computing in IoT. A few implementation case studies examine the overhead caused by edge computation and present a possible implementation of the edge computing paradigm.

Ajay Pratap, Ashwani Kumar, Mohit Kumar
A Mobile-Based Farm Machinery Hiring System

Agriculture, as the oldest profession, dates back to the Stone Age. Advances in technologies like diesel engine tractors and other tools with hydrostatic capability and control brought about agricultural mechanization, which increased food productivity and industrialization. The aim of this research work is to design a mobile application for distributing or leasing agricultural machinery to farmers using location-based services. The design also took into consideration the configuration of the various topologies and other factors that could enhance the flexibility of a mobile application of this nature. The user platform is categorized into three sections: the presentation layer, the business layer, and the data layer. The presentation layer is focused on the design logic and the navigational tools used in locating the right hiring type; it is also responsible for choosing the right data format, using data validation techniques to protect the app from invalid data entry. The business layer is responsible for logging in, authentication, exception handling, and security matters. The data layer is focused on facilitating secure data transactions, and the beauty of it is that it can be rescaled over time to meet the challenges of the time. The application was developed using JavaScript and MySQL with PhoneGap/Cordova, XAMPP, and PHP for the backend. It was validated using formative evaluation conducted through interviews and open-ended questionnaires. The usability test results obtained are promising.

Oluwasefunmi Arogundade, Rauf Qudus, Adebayo Abayomi-Alli, Sanjay Misra, JohnBosco Agbaegbu, Adio Akinwale, Ravin Ahuja
Cloud Computing Offered Capabilities: Threats to Software Vendors

Cloud computing has permeated and penetrated every aspect of our lives, both personal and professional. This technology has gained phenomenal acceptance as it provides opportunities for organizations by offering large collections of easily accessible data, transforming the way business is done. The ever-changing demands of users, fierce competitive activities, and rapidly evolving technology pose a great challenge to software vendors to be innovative and constantly seek to stay on top of their game. Cloud computing allows businesses to scale efficiently as they constantly adjust their operations to new realities that help them optimize cost, quality, and time. The main objective of this project is to ascertain whether cloud-computing-offered capability is a threat to software vendors and the extent of that threat, and to propose solutions. This study used a multi-stage research technique, with a questionnaire as the research instrument. The completed questionnaire forms were collated, coded, and analyzed using both descriptive and parametric statistics. Quantitative data was coded, entered into the Statistical Package for the Social Sciences, and analyzed using descriptive statistics to express the demographic characteristics of the respondents. Qualitative data was analyzed based on the content of the responses; responses with common themes or patterns were grouped into coherent categories. Chi-square analysis and logistic analysis were used to show dependencies and relationships between the variables and to determine the probability that software vendors are aware of cloud computing threats and the severity of those threats. Quantitative data was rendered in tables, and explanations were presented in logical prose.

Oluwasefunmi Arogundade, Funmilayo Abayomi, Adebayo Abayomi-Alli, Sanjay Misra, Christianah Alonge, Taiwo Olaleye, Ravin Ahuja
The Sentimental Analysis of Social Media Data: A Survey

Nowadays, machine learning plays a very important role in every field. For recommendation systems, user feedback is relevant because it contains different forms of emotional detail that may affect the reliability or consistency of the recommendation. Online reviews and comments are very helpful in selecting items and services, as they give real feedback about the quality of these items and services. The categorization of these items based on feedback provided by actual users is known as sentiment analysis. In this study, we describe various machine learning techniques and parameters used for the sentiment analysis of reviews, comments, and feedback available on health care, Facebook, Twitter, and other social media networking sites. The study reveals that the most commonly used approaches are machine learning and deep learning.

Vartika Bhadana, Hitendra Garg
A Review Study on IoT-Based Smart Agriculture System

Agriculture has been practiced in every country for ages and is the backbone of the economic system, which is declining due to overpopulation and urbanization; hence, smart agriculture is necessary for all. Smart agriculture incorporates sensors, automated monitoring, networking, and data processing capability. With the arrival of accelerated IoT growth and site-specific farming activities, it is challenging to increase crop yield and the productive utilization of assets associated with the agricultural process. Smart agriculture gives a real-time evaluation of different crops and environmental conditions by assessing the agricultural land and the volume of fertilizers, water, and other inputs needed. This approach empowers farmers to attain productivity within the right time frame and thus ensures the production of secure, stable, non-toxic crops. We studied different wireless network technologies that can be used in agricultural farms for smart purposes. These technologies are used to monitor agricultural parameters, i.e., soil moisture, humidity, temperature, etc. Agriculture issues have frequently hampered the development of the region, and the main solution is smart farming, changing the current conventional strategies for farming. The objective of this work is accordingly to make farming smart using automation and IoT advancements. These activities can be carried out by a remote smart unit or an Internet-connected computer, interfacing sensors, ZigBee, Wi-Fi devices, cameras, etc. This paper presents numerous possibilities and difficulties in the field of IoT-based smart farming.

Mir Saqlain Sajad, Farheen Siddiqui
Secure Group Data Sharing with an Efficient Key Management without Re-Encryption Scheme in Cloud Computing

Cloud computing has become an essential tool for Internet users, providing various computing resources over the cloud. Providing on-demand storage is one of the major services, effective in meeting the expectations of an organization that utilizes computational resources while accessing large amounts of data. Storing data on the cloud is a big challenge for researchers in terms of maintaining the protection and security of data. Sharing data among a group can expose the data to external and internal threats. This paper introduces a protected data sharing framework for cloud storage that maintains the privacy and confidentiality of data. The proposed method uses the MD5 hash function, which is relatively faster than the SHA256 hash function, for checking data integrity. The framework also provides a check on accessing and sharing of data. Exhaustive re-encryption computations are avoided, and a single encryption key is used to encrypt the entire file. There are two distinct key shares for each user, and a user is allowed to use only one share at a time to access data; in this way, having access to only a single portion of a key permits the framework to safeguard the data against internal threats. A cryptographic server, considered a trusted third party, is used to store the other key share and performs all the expensive computations. The proposed work also measures the efficiency of the model based on the time taken to execute the various operations.

Lalit Mohan Gupta, Hitendra Garg, Abdus Samad
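
The integrity-check ingredient mentioned above can be illustrated with Python's hashlib; this toy round trip only shows how an MD5 digest recorded when a file is stored is re-checked on access, not the paper's full key-sharing framework.

```python
import hashlib

def file_digest(path, chunk_size=8192):
    """MD5 digest computed in chunks so large shared files fit in memory."""
    md5 = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# Toy round trip: the digest recorded when the file is stored is re-checked on access.
with open("shared_file.bin", "wb") as fh:        # stand-in for the group-shared object
    fh.write(b"encrypted payload")
stored_digest = file_digest("shared_file.bin")   # kept alongside the file's metadata
assert file_digest("shared_file.bin") == stored_digest, "integrity check failed"
```
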
Energy-Efficient Bonding Based Technique for Message Forwarding in Social Opportunistic Networks

In social opportunistic networks, where no end-to-end path exists between a sender and a destination node, the nodes usually perform message routing using the store-carry-and-forward pattern. Nodes consume a significant amount of energy during the node discovery phase and the message transmission phase; therefore, it is challenging to design an energy-efficient message forwarding protocol. This paper improves an existing scheme, the bonding based technique for message forwarding in social opportunistic networks (BBFT), with respect to energy consumption. The newly introduced energy-efficient BBFT approach proposes an energy estimation model that estimates the amount of energy consumed by the sender node for transmitting, receiving, and scanning in the message forwarding process. This energy consumption estimation model reduces message flooding in the network, which conserves the nodes' residual energy and hence increases the network's lifetime. Simulations have been performed using the ONE simulator to assess the EBBFT algorithm's performance on energy parameters such as average residual energy and dead nodes, and on other parameters such as overhead and dropped messages, compared against the BBFT and the energy-efficient SEIR protocols. On average, EBBFT is 90.65% better than BBFT and 27.18% better than SEIR in terms of average residual energy while varying the message generation interval, and it had no dead nodes.

Satbir Jain, Ritu Nigam, Deepak Kumar Sharma, Shilpa Ghosh
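
The per-node energy bookkeeping the abstract alludes to can be expressed as a few lines of arithmetic; the per-operation costs below are invented placeholders, not the values used in the ONE simulations.

```python
# Illustrative energy estimate for one forwarding decision (units are arbitrary).
E_TX, E_RX, E_SCAN = 0.08, 0.06, 0.01   # assumed per-transmission, per-reception, per-scan costs

def remaining_energy(residual, n_tx, n_rx, n_scan):
    """Residual energy after n_tx transmissions, n_rx receptions and n_scan scans."""
    spent = n_tx * E_TX + n_rx * E_RX + n_scan * E_SCAN
    return max(residual - spent, 0.0)

# A node with 1.0 units left, after forwarding 3 copies, receiving 2 and scanning 10 times:
print(remaining_energy(1.0, n_tx=3, n_rx=2, n_scan=10))
```
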
Threat Modelling and Risk Assessment in Internet of Things: A Review

The Internet of Things (IoT) plays an important role in our daily lives. IoT devices have been used in our homes, hospitals and industries for a long time, and IoT has also been used to monitor and report changes in the environment. Furthermore, the data exchanged within and among IoT devices is increasing rapidly, and the pervasiveness of such systems means they come into possession of very sensitive information; as a result, there are huge security and privacy risks. Many research works have been conducted to secure the Internet of Things. In this paper, we provide an introduction to the Internet of Things and review some threat models and risk assessment methodologies for IoT.

Mahapara Mahak, Yashwant Singh
Deep Learning-Based Attack Detection in the Internet of Things

The exponential growth of the Internet of Things in various domains has escalated concern in this digital era. The concern is primarily due to the evolution of cyber-attacks leading to the emergence of numerous threats and anomalies. The bottlenecks in traditional security techniques have turned attention toward learning techniques for intrusion detection. Classical ML techniques for identification and classification have been around for a long time, but they suffer from issues of scalability and feature engineering, which limit their usage. In this paper, we have analyzed the use of deep learning for intrusion and anomaly detection. The deep learning algorithms used here are deep neural networks (DNN) and long short-term memory recurrent neural networks (LSTM). Denial of service, malicious control, wrong setup, data type probing, scan, spying, and malicious operation are the attacks against which the algorithms are tested. An accuracy of 99.30% is achieved for the deep neural network and 97.50% for the LSTM model.

Parushi Malhotra, Yashwant Singh

Data Analytics and Intelligent Learning

Frontmatter
An Improved Dictionary Based Genre Classification Based on Title and Abstract of E-book Using Machine Learning Algorithms

The number of digital books or e-books is increasing day by day. Book classification is the job of assigning a category or set of appropriate genres to a book. The goal of this research paper is to classify books into related genres. Many existing approaches, like Support Vector Machine (SVM) and Neural Text Categorizer (NTC), are available for text mining. We applied existing machine learning algorithms to different datasets and implemented existing feature selection methods to select features. In our proposed dictionary-based approach, we classify books by their attributes, such as title, description, genre, and author, using text mining. In the learning part, we create a dictionary of keywords from the book's description and title and then assign genres to the keywords. In the classification part, we assign genres to a book. For classifying the books, we extracted a dataset from web pages using web scraping. Our proposed approach outperforms traditional approaches by reducing training time when massive data is considered.

Vrunda Thakur, Ankit C. Patel
A Novel Multicast Secure MQTT Messaging Protocol Framework for IoT-Related Issues

Edge computing and fog computing have emerged as effective and efficient technologies for IoT-related issues. In recent times, researchers have proposed numerous studies in the area of edge and fog computing; still, edge and fog computing remain open research problems. In the proposed research work, we propose an MQTT-based broker security mechanism to protect IoT-based systems from a selection of security penetrations such as man-in-the-middle attacks, DDoS, DoS and many more. In general, the MQTT broker architecture acts as an intermediary to establish a connection between a publisher and a subscriber. For secure communication between both ends, it is essential to establish a novel security protocol that secures the communication channel between subscribers and publishers. In the presented research work, we propose a novel authentication mechanism which makes use of the MQTT broker to achieve data privacy, authentication and data integrity. Furthermore, a detailed and rigorous analysis of a variety of security attacks is discussed in the end. Last but not least, we also present the multicast-MQTT secure messaging protocol and compare it with state-of-the-art methodologies such as RSA and the Advanced Encryption Standard.

Sharnil Pandya, Mayur Mistry, Ketan Kotecha, Anirban Sur, Pramit Parikh, Kashish Shah, Rutvij Dave
Experiential Learning Through Web-Based Application for Peer Review of Project: A Case Study Based on Interdisciplinary Teams

Engineering is collaborative by nature, and this study was conducted to further embed these traits through an experiential-learning-based interdisciplinary project activity spanning three weeks. Students from three different branches of engineering voluntarily worked on common projects to develop an understanding of common topics. They also reviewed the projects of their peers through a Web-based peer review process (AmiPREP) specifically developed to bring technical uniformity to the selected projects. After the completion of the projects, participants were interviewed, and their responses indicated that the activity proved meaningful, interesting and effective for working in interdisciplinary teams and for understanding the significance of mathematical modeling through MATLAB/Simulink.

Pallavi Asthana, Sudeep Tanwar, Anil Kumar, Ankit Yadav, Sumita Mishra
Machine Learning Applications for Computer-Aided Medical Diagnostics

Machine learning has made potential developments in biotechnology. Years of medical training are required for the correct diagnosis of diseases. Diagnostics is often a very time-consuming process, and it requires strenuous effort. The data generated through a variety of imaging modalities for diagnostic purposes is very bulky. In corporate and government hospitals, a high number of patients visit per day for disease diagnosis and treatment. This may place a diagnostic burden on clinicians and radiologists; during interpretation, the overload of image data may produce oversight and observational errors. Machine learning algorithms have recently made huge advancements in automated disease detection and classification. By training on lots of annotated examples, these algorithms can learn to view the patterns in an image similarly to the way doctors do. Various machine learning algorithms used for automated diagnosis in the medical imaging field are discussed in the paper, and a comparative analysis of these algorithms based on different parameters is also presented. This paper also focuses on various applications of machine learning in diagnostic imaging, which can become part of routine clinical work for detection and classification.

Parita Oza, Paawan Sharma, Samir Patel
Music Genre Classification ChatBot

Classification of music on the basis of genre is a sub-domain of the multidisciplinary field of music information retrieval (MIR) that is gaining traction among researchers and data scientists. Even though this problem has been extensively researched and tested, the problem still lies in the foundations, as the true definition of genre remains at the mercy of human subjectivity. In this paper, we propose a classification model which employs a convolutional neural network (CNN) to differentiate between audio files by assessing the visual representations of their timbral features [1]. The music genre classification model is fronted by a ChatBot model built using NLTK, which can simulate an intelligent conversation with a user and employs a feature that enables it to recognize and process the audio file based on the input from the user. The GTZAN dataset [2] was used for training the music genre classification model, and the trained model yielded an accuracy of nearly 68.9%. The accuracy obtained is relatively better than that of several other classification models that we researched. Through extensive research and constant trials, we can state, with some certainty, that such a system could be used alongside several music streaming services, as it would facilitate the automation of song classification.

Rishit Jain, Ritik Sharma, Preeti Nagrath, Rachna Jain
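
A small CNN over mel-spectrogram "images", in the spirit of the model described above, could be assembled in Keras as sketched below; the input shape and layer sizes are assumptions, and the GTZAN preprocessing (e.g., with librosa) is left out.

```python
from tensorflow.keras import layers, models

# Inputs are mel-spectrogram "images" (assumed 128 mel bands x 660 frames x 1 channel)
# computed beforehand from the GTZAN clips.
model = models.Sequential([
    layers.Conv2D(16, (3, 3), activation="relu", input_shape=(128, 660, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),      # GTZAN defines 10 genres
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```
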
Detection of COVID-19 by X-rays Using Machine Learning and Deep Learning Models

In India, the test for COVID-19 is very expensive, and not everybody can afford it. This document provides knowledge and awareness to the reader on COVID-19 screening of a person using chest X-ray images. Machine learning and deep learning techniques like CNNs and max-pooling are used. These algorithms identify different features in the images and help us distinguish between a COVID-19 and a non-COVID-19 chest X-ray. This paper also describes the dataset of open COVID-19 X-ray images, which was created by collecting medical images from websites and publications. Our model's accuracy follows a trend of greater than 95% on every run. Machine learning models cannot have 100% accuracy, and hence this is the best one can get.

Yash Varshney, Piyush Anand, Achyut Krishna, Preeti Nagrath, Rachna Jain
Implication of Machine Learning Models Toward Education Loan Repayment Rate Analysis

An education loan is a significant factor contributing to one's decision to pursue studies. Students now do not hesitate to take the risk of being in debt of thousands of dollars. They believe in getting the highest quality of academic qualification first, without realizing how risky it can be to carry such an enormous amount of debt. Banks nowadays are also wary of approving loans because they face difficulty in analyzing how credible the borrower is. This makes the whole process very tedious and time-consuming and often proves to be inefficacious. Prediction of the education loan repayment rate can make the job easier for both banks and applicants. This research paper aims at analyzing the education loan repayment rate through the use of machine learning algorithms. Machine learning is extensively used now and finds application in almost every domain. Predictions and analyses carried out using ML help in making informed decisions and give an idea of how future trends are predicted to look. In this research, various features are analyzed and researched thoroughly using the Python language, whose extensive set of libraries enables easy manipulation and visualization of the data. The paper contains a description of the analysis and rich visuals to produce a clearer image of the dataset. Various models are implemented, and their accuracy is measured using the R2 score.

Anushree Bansal, Shikha Singh
Predicting the Result of English Premier League Matches

Nowadays, predictive models and prediction of results in sports have become popular in the data mining community; in particular, the English Premier League (EPL) in football has gained much attention in the past few years. There are three main approaches to predicting the results: statistical approaches, machine learning approaches, and Bayesian approaches. In this paper, the approach used is machine learning: all features that influence the results are evaluated, and an attempt is made to choose the most significant features that lead a football team to win, lose, or draw, even considering the top teams. This predictive model helps in betting areas and also helps managers decide how to set up their teams by analyzing the results; companies like StatsBomb use these kinds of tools to set up scouting networks in search of hidden gems throughout the world. These features help predict the best possible outcome of EPL matches using the classifiers logistic regression, support vector machine, random forest, and XGBoost; the data used for prediction is taken from the website datahub.io, and the model is based on the data of the last ten seasons of the EPL. K-fold cross-validation is used to describe the accuracy of the model.

Ashutosh Ranjan, Vishesh Kumar, Devansh Malhotra, Rachna Jain, Preeti Nagrath
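
A minimal sketch of the evaluation setup described above, using one of the named classifiers with 10-fold cross-validation; the generated features are placeholders standing in for the engineered match statistics that the abstract says come from datahub.io.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Placeholder features stand in for engineered match statistics (recent form, goals,
# home advantage, ...); the three classes stand for home win, draw and away win.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, random_state=42)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=42))
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```
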
Comment Filtering Based Explainable Fake News Detection

Fake news detection is one of the most actively researched areas across the globe; many methods have come to light using different features as their sources. Among them are methods that use the existing comments on a news article to determine whether the article is fake or real. Here, we introduce a hypothesis that uses a machine learning approach to check the credibility of comments before they are analyzed for further fake news detection. We have used various text classification algorithms to test our hypothesis of filtering comments, since there is a high possibility that the comments used for any analysis are useless, for example, comments showing only the emotion of readers like 'Yesss' or 'Nooo!' or comments consisting only of curse words. Such comments are useless as a contributing factor for fake news detection and might also affect the detection results for a news article. The text classifiers used are Complement Naïve Bayes, Logistic Regression, Multinomial Naïve Bayes, and Support Vector Machine. Of these, the best accuracy is provided by the MultinomialNB method, with 75.7%, and Decision Tree, with 75.4%, as opposed to the original algorithm with an accuracy of 73.3% on the same dataset. Since MultinomialNB provided the best improvement in all metrics compared to the original method, we focus our paper on this method. The hypothesis aims to classify comments into junky (useless) comments and utility (useful) comments; the utility comments are then used for further analysis to identify fake news. Also, since the number of comments per article may vary from a few tens to a few hundred or thousands, we have used a semi-supervised approach to classify the comments into junky or utility classes. We have also collected data from various sources and combined them into a usable dataset, which contains 415 records with content or article data for each record, along with many comments per record. Moreover, we have classified those comments into junky and utility classes using the basic definition of spam filtering, which can be adapted for different uses with different criteria. Hence, eliminating the useless comments and analyzing only the useful ones leads to better identification of fake news.

Dilip Kumar Sharma, Sunidhi Sharma
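
The junky-versus-utility filtering step could be prototyped with a TF-IDF plus MultinomialNB pipeline as sketched below; the tiny corpus and labels are invented purely for illustration and do not reflect the authors' dataset or full semi-supervised procedure.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: reaction-only "junky" comments (0) vs. "utility" comments (1).
comments = ["Yesss!!!", "Nooo!", "the quoted report was retracted last week",
            "source? the official statement says otherwise", "lol", "great"]
labels = [0, 0, 1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(comments, labels)
print(clf.predict(["Wooow", "the article contradicts the court filing"]))
```
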
Comparative Analysis of Intelligent Solutions Searching Algorithms of Particle Swarm Optimization and Ant Colony Optimization for Artificial Neural Networks Target Dataset

The optimization approaches of ant colony optimization (ACO) and particle swarm optimization (PSO) were targeted at improving the outcomes of artificial neural networks in finding the best solution in the search space. Both ACO and PSO are derived from artificial intelligence concepts that imitate the natural behavior of animals in finding the best path from their nest to a food source and back. Artificial neural networks (ANNs) rely on an approximation scheme in which models are generated for an unspecified function in order to find suitable interrelationships between input and output datasets. These are not without challenges, including long computation time, large hidden layer size, and poor accuracy. This paper examines the effect of pretraining the dataset with ACO and PSO prior to the ANN training process in order to overcome the aforementioned problems of speed and accuracy through optimization of the local and global minima. The outcomes of the study revealed that ACO outperformed PSO in conjunction with ANN in terms of the RAE, MSE, RMSE, and MAPE metrics utilized. The error rates of the ANN pretrained with ACO and PSO are 62% and 73%, respectively. Benchmarking the results against solution optimization studies, the ACO and PSO algorithms are most preferred for finding the best or near-optimal solution in search spaces.

Abraham Ayegba Alfa, Sanjay Misra, Adebayo Abayomi-Alli, Oluwasefunmi Arogundade, Oluranti Jonathan, Ravin Ahuja
An Online Planning Agent to Optimize the Policy of Resources Management

Reinforcement learning-based systems have received a lot of attention in various domains in recent years. In such domains, an autonomous agent learns from the environment to provide a solution. Resource scheduling is considered a research challenge where such an autonomous agent optimizes the solutions. This work is presented as an investigation of the effectiveness of various algorithms which drive the actions of an autonomous agent. We give a detailed comparison between three differing algorithms: Q-learning, Dyna-Q, and deep Q-network, given the task of effectively allocating resources on an online basis. Among these, Q-learning and deep Q-network, which are model-free algorithms, have remained in wide use for planning. However, this paper focuses on highlighting the effectiveness of the lesser-known model-based algorithm, Dyna-Q. The experimental results show that the agent-based policy derived by the Dyna-Q algorithm provides optimized resource scheduling for the current environment.

Aditya Shrivastava, Aksha Thakkar, Vipul Chudasama
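
To make the model-based flavour of Dyna-Q concrete, the sketch below shows the tabular update loop: a direct Q-learning step on the real transition, followed by a few planning steps replayed from a learned model. The hyperparameters, states, actions and shared action set are simplifying assumptions, not the paper's scheduling environment.

```python
import random
from collections import defaultdict

alpha, gamma, epsilon, planning_steps = 0.1, 0.95, 0.1, 10
Q = defaultdict(float)    # Q[(state, action)]
model = {}                # model[(state, action)] = (reward, next_state)

def choose(state, actions):
    """Epsilon-greedy action selection over the current Q-values."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def dyna_q_update(s, a, r, s_next, actions):
    # 1) direct RL update from the real transition
    best_next = max(Q[(s_next, an)] for an in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    # 2) remember the transition in the learned model
    model[(s, a)] = (r, s_next)
    # 3) planning: replay randomly chosen remembered transitions
    for _ in range(planning_steps):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        best = max(Q[(ps_next, an)] for an in actions)   # same action set assumed everywhere
        Q[(ps, pa)] += alpha * (pr + gamma * best - Q[(ps, pa)])

# one real transition observed while scheduling (state, action, reward, next_state)
dyna_q_update("queue_high", "allocate", 1.0, "queue_low", actions=["allocate", "defer"])
print(choose("queue_high", ["allocate", "defer"]))
```
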
CNN-Based Approach to Control Computer Applications by Differently Abled Peoples Using Hand Gesture

In today's world, computers and digital technology have revolutionized every aspect of human life. From connecting with each other in a split second to sharing ideas while sitting miles apart, the Internet and digital technology have changed the way humans look at things. But the advancement of technology has not been revolutionary for every part of the human community, especially differently abled people. The learning curve of this new technology has been designed in such a fashion that it is nearly impossible for differently abled people to learn it, use it and interact with it. There is always a void between the differently abled and modern digital technology. It has always been difficult for them not only to use modern technology and derive its benefits, but also to interact with other people and share their ideas and thoughts with them.

Hitesh Kumar Sharma, Prashant Ahlawat, Manoj Kumar Sharma, Md Ezaz Ahmed, J. C. Patni, Sahil Taneja
A Frequency-Based Approach to Extract Aspect for Aspect-Based Sentiment Analysis

Data is king nowadays, and users worldwide express their views on different platforms; analysts aggregate this data and analyze it. Sentiment analysis has become a major tool for analysts and can be done at different levels. This paper discusses a more granular level of sentiment analysis, aspect-based sentiment analysis, which aims to predict the sentiment polarity of text for a specific target. The majority of work done in this field focuses on the extraction of aspects or features, then finding their sentiment polarities and aggregating them to find the final polarity of the whole text. Aspect extraction is the key to this process, so our work focuses on aspect extraction. In this paper, we address the issue of aspect extraction, propose our approach to deal with it, and show how it is better than the existing approaches.

Rahul Pradhan, Dilip Kumar Sharma
Sentiment Analysis Techniques on Food Reviews Using Machine Learning

A review or opinion is a text which expresses the user's thoughts and response to the product or service he/she has availed of or purchased. Processing this input and determining whether the sentiments are positive, negative, or neutral is called sentiment analysis. The reviews are then used by data analysts to perform evaluations of the product or service. We classify these sentiments into three principal types: positive, negative, and neutral. The Amazon food reviews dataset is used to train the classifier. Zomato, being the most popular food delivery site and restaurant aggregator, provides information, menus, and user reviews of restaurants. The Python language is used to conduct research on carefully chosen data and to apply a classification algorithm to it.

Shilpa Gite, Abhishek Udanshiv, Rajas Date, Kartik Jaisinghani, Abhishek Singh, Prafful Chetwani
Parts of Speech (POS) Tagging for Dogri Language

Parts of speech tagging is an important activity in natural language processing, information extraction, language translation, speech synthesis, question understanding and many more areas. Parts of speech tagging is basically the problem of assigning parts of speech tags to the words in a text. As per English grammar, there are many parts of speech tags such as noun, adjective, pronoun, verb, adverb, preposition, conjunction and interjection. The methods of assigning tags to words are categorized into rule-based, stochastic and hybrid approaches. In this paper, a rule-based parts of speech tagger for the Dogri language (a regional language of Jammu) is presented, along with the algorithm and the modular structure of the system. The proposed system is evaluated over a number of corpora with six different parts of speech tags for Dogri; the evaluation is done on five datasets of the Dogri corpus, and the corresponding results are also demonstrated in the paper.

Shivangi Dutta, Bhavna Arora
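
A toy rule-based tagger illustrating the lexicon-plus-rules pipeline described above; the entries, suffix rules and tags below are invented for illustration and are not the actual Dogri resources or rules used by the authors.

```python
# Lexicon lookup first, then suffix rules, then a fallback tag (all values hypothetical).
LEXICON = {"ghar": "NOUN", "te": "CONJ", "oh": "PRON"}
SUFFIX_RULES = [("na", "VERB"), ("iyan", "NOUN")]

def tag(tokens):
    tagged = []
    for tok in tokens:
        if tok in LEXICON:                      # rule 1: direct dictionary lookup
            tagged.append((tok, LEXICON[tok]))
            continue
        for suffix, pos in SUFFIX_RULES:        # rule 2: suffix-based guess
            if tok.endswith(suffix):
                tagged.append((tok, pos))
                break
        else:
            tagged.append((tok, "UNK"))         # fallback when no rule fires
    return tagged

print(tag(["oh", "ghar", "jaana"]))
```
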
Deep Learning-Based 2D and 3D Human Pose Estimation: A Survey

In the real world, estimation of human pose has gained considerable attention owing to its diverse applications. 2D pose estimation has seen remarkable research and achieves the targeted output; however, challenges still remain in 3D pose estimation. Deep learning can improve the performance of human pose estimation and brings results very close to the target. A literature review of deep learning methods for human pose estimation is presented, and the methodology used by each work is analyzed. The review also covers pose estimation in real-world videos of crowded scenes, with the latest research information. With a methodology-based taxonomy, we sum up and discuss recent works. The survey also addresses and compares the datasets used for this task. Thus, this survey makes each phase of the estimation pipeline interpretable and provides the reader with easily comprehensible information. Future work and challenges are identified.

Pooja Parekh, Atul Patel
Deepfake: An Overview

Recent advancements in digital technologies have significantly increased the quality and capability to produce realistic images and videos using highly advanced computer graphics and AI algorithms, due to which it becomes difficult to distinguish between real media and fake media. These computer-generated images or videos have useful applications in real life; however, they can also lead to various threats related to privacy and security. Deepfake is one of the ways that can lead to these threats. The term "deepfake" is a combination of the two terms "deep learning" and "fake." Using deepfake, anyone can replace or mask someone else's face with another person's face in an image or a video. Not only this, deepfake can also change the original voice and facial expressions in an image or a video. Nowadays, deepfakes use techniques like deep learning and AI to replace the original face, voice, or expressions, and it is very hard for a human to detect that the content has been manipulated. This paper attempts to introduce the concept of deepfake and discusses different types of deepfakes, as well as methods to create and detect them. The motivation behind this paper is to make society aware of deepfake tricks along with the threats posed by them.

Anupama Chadha, Vaibhav Kumar, Sonu Kashyap, Mayank Gupta
Instinctive and Effective Authorization for Internet of Things

The Internet of Things (IoT) is currently deployed across applications, most of them connected to the Internet or at least connected to a gateway (with superior processing capabilities) which is in turn connected to the Internet. Wireless sensor networks (WSNs) refer to groups of spatially dispersed and dedicated sensors for monitoring or recording data and collecting it in a centralized location. Much research has been done to address security problems arising from concerns about authentication, avoidance of DoS attacks, identity hijacking, spoofing, etc. Some works have even gone in depth to address issues related to authentication in a heterogeneous environment, i.e., solving authentication among devices of different makes and models deployed in different networks and still trying to connect, addressing multiple authentication or certification (chains of) authorities. However, much less research has focused on trying to establish the true identity of the device. This paper proposes a post-authentication scheme to explore and validate the identity of the device and then take the decision necessary for the dynamic authorization phase. Here, we propose post-authentication using dynamic authorization: a nonce (one-time credential) for a device which is neither associated with nor owned by the system, allowing it to perform a limited-use privileged operation on a sensitive resource.

Nidhi Sinha, Meenatchi Sundaram, Abhijit Sinha
Methodical Analysis and Prediction of COVID-19 Cases of China and SAARC Countries

The COVID-19 pandemic has become a major challenge for all the countries of the world, and no medicine has been developed so far to cure it. Coronaviruses (COVID-19) are a family of viruses that cause illness with symptoms like the common cold, influenza, and severe acute respiratory syndrome (SARS), and that spread via respiratory droplets. Proper analysis and prediction of COVID-19 patients and the increasing rate of spread will help the government and people mitigate its effect. This gives a reason to analyze, compare, and predict the cases in India, China, and the SAARC countries in order to make early decisions on preventive measures to combat its effects in a timely manner. In this paper, we have analyzed COVID-19 cases from January 21, 2020 to June 25, 2020 and have predicted the cases of COVID-19 for the next two weeks using multiple linear regression and polynomial regression models of machine learning.

Sarika Agarwal, Himani Bansal
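
A minimal sketch of the polynomial-regression forecasting step on synthetic cumulative counts; the polynomial degree, the synthetic series and the 14-day horizon are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

days = np.arange(1, 31).reshape(-1, 1)                             # day index since the first record
cases = 50 * days.ravel() ** 2 + np.random.randint(0, 500, 30)     # synthetic cumulative counts

model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(days, cases)

future = np.arange(31, 45).reshape(-1, 1)                          # the next two weeks
print(model.predict(future).astype(int))
```
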
Coronary Artery Disease Prediction Techniques: A Survey

Machine learning has become a salient part of our lives nowadays. It also has a significant effect on medical decision support systems. In the healthcare domain, it is beneficial to predict disease and perform analysis to derive useful patterns from electronic health records to reduce the toll. The primary cause of death worldwide is coronary artery disease (CAD), also known as atherosclerosis. It occurs when any of the arteries get blocked, resulting in weak or no blood flow to parts of the heart, leading to a heart attack. The prediction of CAD at an early stage is possible with the help of machine learning techniques like support vector machines, artificial neural networks, k-nearest neighbors, decision trees, logistic regression, fuzzy rule-based methods, and many more. This paper gives insights into the research done on the prediction of this disease. We reviewed in-depth knowledge of the disease, various diagnostic techniques, and available datasets. Finally, we discussed and concluded how machine learning techniques create an impact on predicting this disease.

Aashna Joshi, Maitrik Shah

IoT Supported Healthcare (Or: Computer Aided Healthcare)

Frontmatter
COVID-19 a “BIG RESET”—Role of GHRM in Achieving Organisational Sustainability in Context to Asian Market

As per a report by UNCTAD, with the outbreak of COVID-19 worldwide, human activities stopped, allowing a revival of nature; CO2 emissions and global air traffic reduced by 25% and 60%, respectively. The effects on the environment, like an improved air quality index and lower resource consumption, are short-lived, as these are likely to rise to previous levels once economic activities pick up after the crisis. Hence, it has become the need of the hour to tackle environmental concerns at the organisational level by aligning environmental management, HRM and technology. Considering Asian economic advancement and environmental adversity, it is imperative to explore GHRM in the Asian context to move a step forward in attaining organisational sustainability. Our findings, based on a multimethod approach for the collection of qualitative and quantitative data and analysis via NVIVO and IBM SPSS 23, respectively, confirm the degree of implementation of GHRM practices and also establish the relationship between GHRM and organisational sustainability. Data was collected using a survey questionnaire and interviews from 107 HR professionals of various sectors. However, more such studies are needed for developing countries to address environmental concerns.

Meenu Chaudhary, Loveleen Gaur
Anemia Multi-label Classification Based on Problem Transformation Methods

The CBC report is considered the most important report for assessing the overall health of the human body and is an important test for the diagnosis of diseases such as anemia, cancer, infections, and vitamin and mineral deficiencies. Anemia is a common health problem among people worldwide. Anemia is not a disease in itself but a sign of a serious illness; diagnosing it at an earlier stage can therefore help prevent serious diseases. The pattern of CBC parameters is found to be very complex. Therefore, a multi-label classification of the anemia types IDA, vitamin B12, aplastic, and sickle cell anemia has been built using machine learning problem transformation methods, which gives an idea of the type of anemia that is likely to occur due to an abnormal condition of the CBC parameters. The use of this model can help predict anemia at an earlier stage so that serious diseases can be avoided. Ultimately, people can be saved from financial costs and kept mentally healthy, and concentration and regularity in children's schooling can also be improved. This multi-label problem motivates the application of problem transformation methods to classify these anemia types. Binary relevance, classifier chains, and label power set methods are used with different base classifiers, and the results are analyzed. The results obtained by the SVM model based on the classifier chains method of multi-label classification have proved to be superior to the other methods.
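To illustrate the problem transformation methods mentioned above, the sketch below compares binary relevance and classifier chains with an SVM base learner on synthetic multi-label data; the dataset, label count, and metric are stand-ins, not the paper's CBC data (label power set, the third method, is available in libraries such as scikit-multilearn).

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Synthetic stand-in for CBC parameters with 4 anemia-type labels.
X, Y = make_multilabel_classification(n_samples=600, n_features=10,
                                      n_classes=4, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

base = SVC(kernel="rbf")

# Binary relevance: one independent SVM per label.
br = MultiOutputClassifier(base).fit(X_tr, Y_tr)

# Classifier chains: each SVM also sees the earlier labels in the chain.
cc = ClassifierChain(base, order="random", random_state=0).fit(X_tr, Y_tr)

for name, clf in [("binary relevance", br), ("classifier chain", cc)]:
    print(name, round(f1_score(Y_te, clf.predict(X_te), average="micro"), 3))
```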

Bhavinkumar A. Patel, Ajay Parikh
Automated Disease Detection and Classification of Plants Using Image Processing Approaches: A Review

Plant monitoring is necessary to prevent damage in the agricultural field. Plant monitoring, or plant disease detection at the primary stage, can enhance the productivity of crops in terms of both quality and quantity. Field monitoring can be done in many ways: farmers can take the help of experts, or they can use pesticides to remove unwanted plants and diseases. However, experts cannot be present at every place, and a second problem with pesticides is deciding in what quantity farmers should use them. These traditional approaches are therefore not very suitable, or they take a lot of time. For effective growth of yield and to increase the farmers' benefit, there is a need for automated plant disease detection. Automated plant disease detection is possible through many techniques, such as image processing, computer vision, machine learning, and neural networks. In this paper, we discuss the image processing technique and its stages, such as image acquisition, preprocessing, segmentation, feature extraction and classification. This paper shows the potential of plant disease detection systems that offer favorable opportunities in the agricultural field. The review presented in this paper provides a detailed discussion of existing studies with their strengths and limitations and also gives information about open research issues where future scope lies.

Shashi, Jaspreet Singh
Heart Disease Prediction Using Machine Learning

Early prediction of chronic heart disease (CHD) makes use of data collected on people over a span of 10 years. The data contains information on over 4000 individuals; attributes such as sex, age, diabetes, and total cholesterol, along with 11 more, are used to train various machine learning models to predict whether a certain individual has a high likelihood of suffering from CHD in the next ten years. This investigation can be used by healthcare institutions and societies to predict CHD beforehand and suggest the necessary preventive precautions. In the article, the authors evaluate machine learning techniques—support vector machines (SVM), decision tree, artificial neural network (ANN), and naive Bayes—on the Framingham dataset using parameters such as accuracy, recall, precision, and F-score for comparison. The authors use these classification approaches to determine which approach works best and under what scenarios, based on an experimental study.
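A minimal sketch of the kind of comparison described, using scikit-learn counterparts of the four models and the four metrics; the file name framingham.csv and the TenYearCHD target column are assumptions about how the data is stored locally, not details taken from the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Assumed: a local copy of the Framingham data with a binary 'TenYearCHD' column.
df = pd.read_csv("framingham.csv").dropna()
X, y = df.drop(columns=["TenYearCHD"]), df["TenYearCHD"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Decision Tree": DecisionTreeClassifier(max_depth=5),
    "ANN (MLP)": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f} "
          f"prec={precision_score(y_te, y_pred, zero_division=0):.3f} "
          f"rec={recall_score(y_te, y_pred):.3f} f1={f1_score(y_te, y_pred):.3f}")
```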

Jaydutt Patel, Azhar Ali Khaked, Jitali Patel, Jigna Patel
Future of Augmented Reality in Healthcare Department

In today's world, augmented reality (AR) is a highly challenging and emerging technology which presents additional information on top of the existing real world. This is done by using special glasses like Google Glass or with the help of other advanced devices. This technology is an advanced version of virtual reality: in VR we have to work in a completely virtual environment, whereas with AR we do not work in a virtual world; instead, while remaining in the real world, we receive additional information. This paper provides a brief description of the architecture of AR, possible solutions provided by several researchers and academicians, their challenging issues, and real-time applications in medical or emergency departments.

Gouri Jha, Lavanya Sharma, Shailja Gupta
E-health in Internet of Things (IoT) in Real-Time Scenario

The Internet of things (IoT) has drawn much attention in recent years and has a very significant role in the IT industry. IoT helps to modernize the healthcare system with promising technologies and economic prospects. This paper presents the working of IoT in e-health, IoT-based technologies, and communication ranges in the e-health sector. Furthermore, this paper also analyzes IoT security and privacy in e-health, the security risks involved in e-health, and the real-time applications used in health care. IoT applications are an essential part of daily life and are used in almost every sector of human and industrial activity. Communication standards of IoT are presented along with some real-time applications.

Gourav Jha, Lavanya Sharma, Shailja Gupta
Diagnosis of Heart Disease Using Internet of Things and Machine Learning Algorithms

In the current scenario of the digital world, the healthcare industry generates a huge amount of patient data. Manual handling of these data becomes very difficult for doctors. The Internet of things (IoT) handles the produced data very effectively. The IoT captures huge amounts of data, and with machine learning algorithms, it can detect and diagnose disease. This work aims to apply various machine learning methods to the produced data. A machine learning framework has been proposed for early prediction of heart disease in conjunction with IoT. The developed model is evaluated with k-nearest neighbor (K-NN), decision tree (DT), random forest (RF), multilayer perceptron (MLP), Naïve Bayes (NB), and linear support vector machine (L-SVM) classifiers. The model achieved diagnostic accuracies of 82.4%, 81.3%, 92.3%, 88.2%, 89.6%, and 82.4% for K-NN, DT, RF, MLP, NB, and L-SVM, respectively. As per the experimental results, random forest has the highest prognosis rate of 92.3%.

Amit Kishor, Wilson Jeberson
Diabetes Prediction Using Machine Learning

Diabetes is a chronic disease characterized by a rise of the sugar level in the blood. There are many complications of the disease when it remains undetected and untreated. This disease, most of the time, is identified through various symptoms. Given the adverse effects of this disease on the patient's entire life, it is crucial to take the necessary actions to mitigate its consequences; hence, the disease needs to be identified as soon as possible. The growth of machine learning technology helps to address such problems. The motive behind this research paper is to build a machine learning model that can identify the probability of a person testing positive for diabetes based on the given features. Various machine learning algorithms are compared, through which the best ML technique is identified. The model uses random forest, SVM, logistic regression, naive Bayes, KNN and decision tree classifiers, implemented on the Pima Indians Diabetes Dataset. Evaluation is done on three different measures: accuracy, precision and recall. Along with the classification algorithms, gradient boosting and bootstrapping are used to improve the results of the evaluation metrics and the classification process. Bootstrapping helps avoid overfitting of the classification algorithm, whereas boosting combines weak learners sequentially so that each learns from the errors of the previous ones. The compared evaluation metrics show a considerable improvement in the entire prediction process when based on the important features.
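As a sketch of the bootstrapping and boosting ideas mentioned above, the snippet below cross-validates a bagged decision tree and a gradient boosting classifier; the file name pima_diabetes.csv and the Outcome column are assumptions about the local copy of the Pima data, and the hyperparameters are illustrative.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

# Assumed: local Pima Indians Diabetes CSV with the usual 'Outcome' target column.
df = pd.read_csv("pima_diabetes.csv")
X, y = df.drop(columns=["Outcome"]), df["Outcome"]

# Bootstrapping (bagging): many trees on bootstrap resamples to curb overfitting.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

# Boosting: shallow trees fitted sequentially, each correcting the previous errors.
boosting = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, random_state=0)

for name, clf in [("bagging", bagging), ("gradient boosting", boosting)]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```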

Harsh Jigneshkumar Patel, Parita Oza, Smita Agrawal
Myocardial Infarction Detection Using Deep Learning and Ensemble Technique from ECG Signals

Automatic and accurate prognosis of myocardial infarction (MI) from electrocardiogram (ECG) signals is a very challenging task for the diagnosis and treatment of heart diseases. Hence, we have proposed a hybrid convolutional neural network—long short-term memory (CNN-LSTM) deep learning model for accurate and automatic prediction of myocardial infarction from an ECG dataset. A total of 14,552 ECG beats from the PTB diagnostic database are employed for validation of the model performance. The ECG beat time interval and its gradient values are directly considered as features and given as input to the proposed model. Since the data has imbalanced classes, the synthetic minority oversampling technique (SMOTE) and Tomek link data sampling techniques are used for balancing the classes. The model performance was verified using six types of evaluation metrics, and the results were compared with state-of-the-art methods. The experimentation was performed using CNN and CNN + LSTM models on both imbalanced and balanced data samples, and the highest accuracy achieved is 99.8% using an ensemble technique on the balanced dataset.
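A minimal sketch of the two ingredients named in the abstract, SMOTE–Tomek resampling and a CNN-LSTM classifier, built with imbalanced-learn and Keras; the beat length of 187 samples, the random placeholder signals, and the layer sizes are assumptions, not the paper's architecture.

```python
import numpy as np
import tensorflow as tf
from imblearn.combine import SMOTETomek

# Assumed shape: each ECG beat resampled to 187 samples (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 187)).astype("float32")
y = (rng.random(1000) > 0.8).astype(int)              # imbalanced labels (MI vs normal)

# Balance the classes before training, as the abstract describes.
X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X, y)
X_bal = X_bal[..., np.newaxis]                        # (samples, timesteps, channels)

# Hybrid CNN-LSTM in the spirit of the paper: convolutions extract local beat
# morphology, the LSTM models the temporal sequence, a sigmoid head classifies MI.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 5, activation="relu", input_shape=(187, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_bal, y_bal, epochs=3, batch_size=64, verbose=0)
```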

Hari Mohan Rai, Kalyan Chatterjee, Alok Dubey, Praween Srivastava
BL_SMOTE Ensemble Method for Prediction of Thyroid Disease on Imbalanced Classification Problem

The imbalanced classification problem is one of the most challenging problems in domains such as machine learning and data mining. In an imbalanced dataset, the classes are distributed unevenly; this case arises when the positive class is smaller than the negative class. To overcome this problem, oversampling and undersampling techniques are used, although undersampling leads to the problem of information loss. In this paper, a borderline synthetic minority oversampling technique (BL_SMOTE) ensemble method is used for the prediction of thyroid disease, solving the imbalanced classification problem with oversampling. For the ensemble, we have used decision tree and random forest classifiers. The proposed method for detection of thyroid disease has achieved 98.88% accuracy, 99.12% specificity, 98.93% F-measure, and 98.66% sensitivity on the thyroid UCI repository dataset. The proposed method is competitive with the other methods proposed in the literature for prediction of thyroid disease on an imbalanced classification problem.
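The sketch below shows borderline SMOTE combined with a decision tree and random forest ensemble, as described; the synthetic imbalanced data, class weights, and soft-voting combination are assumptions standing in for the UCI thyroid data and the paper's exact ensemble rule.

```python
from imblearn.over_sampling import BorderlineSMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic imbalanced stand-in for the UCI thyroid data (5% positive class).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split near the class border (the BL_SMOTE idea).
X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Simple ensemble of a decision tree and a random forest.
ensemble = VotingClassifier([
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
], voting="soft")
ensemble.fit(X_res, y_res)
print(classification_report(y_te, ensemble.predict(X_te)))
```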

Rajshree Srivastava, Pardeep Kumar
Computer-Aided-Diagnosis System for Symptom Detection of Breast and Cervical Cancer

Cancer metastasis is the leading cause of death worldwide, and many efforts have been made to understand the pathology of cancer for prognosis and diagnosis. Cancer treatment at an early stage can considerably increase the chances of survival of the sufferer. This research aims to contribute to the detection of breast and cervical cancer in their early stages. A comparative study of classification techniques, including Support Vector Classifier (SVC), Multi-layer Perceptron (MLP), and Random Forest (RF), has been done to identify the best model for cancer prediction. These models are compared over different symptoms collected from electronic healthcare data. A correlation matrix with a heat map is used for symptom/feature selection from the results of biopsy examinations applied to a dataset collected from the UCI repository. The predictive model based on random forest achieves the highest testing accuracy of 98.83% in the case of cervical cancer and 96.50% for breast cancer with the selected symptoms/features.
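To illustrate the correlation heat map feature selection step, the sketch below visualises a correlation matrix and ranks features by their absolute correlation with the target; the file name cervical_cancer.csv, the diagnosis column, and the cut-off of ten features are assumptions for illustration.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Assumed: a dataframe of biopsy/exam results with a binary 'diagnosis' column.
df = pd.read_csv("cervical_cancer.csv")

# Correlation matrix visualised as a heat map, then features ranked by their
# absolute correlation with the target to pick the most informative symptoms.
corr = df.corr(numeric_only=True)
sns.heatmap(corr, cmap="coolwarm", center=0)
plt.tight_layout()
plt.show()

selected = corr["diagnosis"].abs().sort_values(ascending=False).index[1:11]
print("Top correlated features:", list(selected))
```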

Piyushi Jain, Drashti Patel, Jai Prakash Verma, Sudeep Tanwar
Blockchain Adoption for Trusted Medical Records in Healthcare 4.0 Applications: A Survey

Healthcare 4.0 allows monitoring of electronic health records (EHR) at distributed locations through wireless infrastructures like Bluetooth, ZigBee, near-field communication (NFC), and GPRS. Thus, private EHR data can be tampered with by malicious entities that affect updates made by different stakeholders like patients, doctors, laboratory technicians, and insurance agencies. Hence, there must be a notion of trust among the aforementioned stakeholders. Moreover, the volume of accessed data is humongous; thus, to ensure security and trust, blockchain (BC)-based solutions can handle timestamped volumetric data as a chronological ledger. Motivated by this, the paper presents a systematic survey of BC applications in Healthcare 4.0 ecosystems. The contribution of the paper is a systematic survey of BC adoption in Healthcare 4.0. The survey identifies tools and technologies to support BC-based healthcare applications and addresses open challenges for future research on integrating BC to secure EHR in the Healthcare 4.0 ecosystem.

Umesh Bodkhe, Sudeep Tanwar, Pronaya Bhattacharya, Ashwin Verma
The Amalgamation of Blockchain and IoT: A Survey

Blockchain, a form of distributed ledger technology, is gaining momentum. Data in a blockchain is stored in the form of transactions. Blockchains are transparent, immutable and auditable distributed ledgers built on peer-to-peer connections, cryptography and consensus algorithms. All the parties involved have the same copy of the data (transparency), data cannot be modified or deleted (immutability), and a full history of transactions is available (auditability). Based on this history, future transactions are validated by all the parties (consensus). The Internet of Things means connecting lightweight devices to share information among themselves and to enable functionalities based on the shared information. This exchange of information among multiple devices must be secure, as it contains sensitive and safety-critical data. Because of the scale and distributed nature of IoT, security and privacy are major concerns. Traditional security solutions are not suitable for IoT devices, as they have limited resources in terms of memory and processing power. Blockchain can bridge this gap: the information exchanged among these devices can be stored on the blockchain to increase transparency among the devices. This paper provides a review of the literature in the area of IoT and blockchain.
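A toy sketch of the immutability property described above: each block stores the hash of the previous block, so tampering with any transaction breaks the chain. This is a bare hash-linked list for illustration only; it omits consensus, signatures, and peer-to-peer replication.

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    """Hash of the block contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "time": time.time(),
             "transactions": transactions, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    """Tampering with any block breaks the hash links that follow it."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, [{"device": "sensor-1", "reading": 21.5}])
add_block(chain, [{"device": "sensor-2", "reading": 19.8}])
print(is_valid(chain))                       # True
chain[0]["transactions"][0]["reading"] = 99  # tamper with an old transaction
print(is_valid(chain))                       # False
```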

Jignasha Dalal
C2B-SCHMS: Cloud Computing and Bots Security for COVID-19 Data and Healthcare Management Systems

Technologies play an essential role in replacing physical human involvement with robots (bots); reducing social contact in this way helps reduce the number of COVID-19 patients, which proves safer for people and humanity as a whole. The technological aspects of cloud computing resource management and bots can be used for the management and security of patients' data and for incorporating intelligent decision support in the case of massive reporting of patients. As the healthcare sector continues to offer life-critical services while working to improve treatment and patient care with new technologies, criminals and cyber threat actors look to exploit the vulnerabilities that come with this expertise. Healthcare organizations collect and store vast amounts of personal information, making them a primary target for cyber-criminals. In this paper, we explore the security implications and privacy issues of these healthcare technologies related to the management of patients' data. We also describe various security breaches in medical data and use a framework called C2B-SCHMS, which uses a machine learning-based isolation graph for anomaly handling.

Vivek Kumar Prasad, Sudeep Tanwar, Madhuri Bhavsar
FemtoCloud for Securing Smart Homes—An Edge Computing Solution for Internet of Thing Applications

The increasing capabilities of mobile devices have shown great potential for edge computing to aid IoT applications in building more efficient and smart systems of the future world. This paper proposes a multiparameter and secured FemtoCloud solution for securing smart homes from threats. The architecture focuses mainly on securing smart buildings and smart residential locations. On identification of any threat, the data and suggestions are provided to the user's mobile device (user node) via a femtocell. Our system is split into two machine learning models which are trained and deployed at both levels, i.e., mobile and cloud. Edge devices perform rudimentary computations at the edge level itself to improve response time. On identification of a serious threat, the femtocell is triggered to send the data from the mobile device to the cloud for comprehensive processing to estimate the intensity of the threat. We use modified open-source machine learning models to detect and determine the situations that can pose a potential threat. The objective is to leverage mobile devices and the cloud to determine the most appropriate response. Finally, we discuss challenges and future opportunities to present the wide scope of research in this sphere.

Abhinav Rawat, Avani Jindal, Akshat Singhal, Abhirup Khanna

Security and Privacy Issues

Frontmatter
Global Intrusion Detection Environments and Platform for Anomaly-Based Intrusion Detection Systems

Defense is a critical element of computer systems, and detecting intrusion attacks is among the most challenging issues. The IDS is the most critical cyber-security component, as it can detect intrusions before, during, and after an attack. This paper provides an overall IDS benchmarking that quantifies different IDS properties and the types of anomaly-based IDSs deployed in different environments or platforms, and compares them based on the methods used, their details, and the advantages of each method. We have analyzed the different anomaly-based IDS techniques and the various issues associated with anomaly-based IDSs. We address global environments for intrusion detection and a framework for behavioral or anomaly-based intrusion detection systems, and discuss the challenges facing anomaly-based IDSs. After reviewing the various anomaly-based IDS techniques, we conclude that successful detection rates cannot be achieved by a single technique. To lower the false prediction rate and decrease the complexity of the process, an efficient automated hybrid technique is suggested for achieving accurate detection rates and enhancing anomaly detection.

Jyoti Snehi, Abhinav Bhandari, Manish Snehi, Urvashi Tandon, Vidhu Baggan
Image Steganography Using Bit Differencing Technique

The swift progression of information exchange in contemporary times demands secure communication of data. Steganography hides private material in numerous file formats, for example, image, text, audio, and video; imperceptibility, payload and robustness are the key challenges in steganography. This work proposes a new procedure for storing important data inside cover images by means of the most significant bits (MSB) of the cover image. Firstly, the bits are numbered in increasing order, and bit number 6 is used for storing the confidential data, based on the difference between bits 6 and 7. The result is generated on the basis of this difference, i.e., whether the resultant bit is the same as the secret bit or not. If the resultant bit differs from the confidential bit, bit number 6 is modified and updated in the original pixel. The outcome shows that the suggested model gives a good percentage improvement in signal-to-noise ratio. Here, the picture is split into red, green and blue components, where the red component is used for signalling whether information is stored in the green or blue components of the image. The proposed MSB-based method safeguards the system from unauthorised access, and intruders would not be able to access the confidential data.
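A small sketch of one plausible reading of the bit-differencing rule described above: the secret bit is encoded as the difference (XOR) of bits 7 and 6 of a pixel, and bit 6 is flipped when the difference does not already match. The exact rule, the bit numbering, and the red-channel signalling of the paper may differ; this is illustration only.

```python
import numpy as np

def embed(pixels: np.ndarray, secret_bits: list) -> np.ndarray:
    """Embed one secret bit per pixel using the difference of bits 6 and 7."""
    out = pixels.copy()
    for i, s in enumerate(secret_bits):
        b7 = (out[i] >> 7) & 1
        b6 = (out[i] >> 6) & 1
        if (b6 ^ b7) != s:           # difference does not match the secret bit
            out[i] ^= 0b0100_0000    # flip bit 6 so that it does
    return out

def extract(pixels: np.ndarray, n: int) -> list:
    return [int(((p >> 6) & 1) ^ ((p >> 7) & 1)) for p in pixels[:n]]

cover = np.array([200, 143, 96, 51, 77, 180, 33, 250], dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, secret)
print(extract(stego, len(secret)) == secret)   # True
```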

Mudasir Rashid, Bhavna Arora
Detection and Prevention of DoS and DDoS in IoT

The Internet of Things (IoT) is a network of interconnected devices embedded with software, sensors and essential electronics that allow us to gather and exchange data between them. In IoT, it is difficult to guarantee the privacy and protection of users due to the variety of artifacts linked to the Internet. Denial of Service (DoS) and Distributed Denial of Service (DDoS) are among the main security issues in IoT. DoS is a type of attack where attackers try to prevent access by legitimate users to a service. A DDoS attack is one where multiple systems target a single system with a DoS attack; this occurs when several systems overload a target system's bandwidth or resources, normally at one or more servers. Resource-constrained IoT networks have become a big victim of such attacks. Early detection of DoS and DDoS attacks will prevent resource-constrained devices from becoming a target and failing prematurely. This paper focuses on vulnerabilities in IoT such as Distributed Denial of Service (DDoS). Many privacy-preserving mechanisms have been proposed (such as automatic solution learning and DDoS warning mechanisms), and related work is under way. The goal of this paper is to present the detection and prevention of DDoS in IoT, the privacy issues faced by the IoT environment, and current mechanisms for its security.

Meetu Sharma, Bhavna Arora
Approach for Ensuring Fragmentation and Integrity of Data in SEDuLOUS

With the rapid adoption of cloud services, more and more data are being uploaded to cloud platforms. These data are under threat from various threat actors constantly working to steal, corrupt, or gain control over them. The threat actors are not limited to malicious attackers; they also include curious service providers, social activists, business entities, and nation states. They pose a serious risk to cloud services. There are various approaches to protect data at various levels. Division and replication is one such approach, where data are divided into chunks and spread over the cloud to reduce the risk of data leakage and simultaneously increase accessibility. Under the division and replication approach, SEDuLOUS provides a heuristic algorithm for data placement in a distributed cloud environment. In this research work, we provide a comprehensive analysis of cloud data storage services and associated security issues, an analysis of the SEDuLOUS algorithm, and a methodology to improve SEDuLOUS by specifying a minimum number of fragments to ensure fragmentation of all files and by hashing each chunk to identify compromised storage nodes.
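A generic sketch of the two improvements named above, enforcing a minimum number of fragments and keeping per-chunk hashes to spot compromised nodes; it is not the SEDuLOUS placement algorithm itself, and the minimum of four fragments is an arbitrary illustrative choice.

```python
import hashlib

def fragment(data: bytes, min_fragments: int = 4) -> list:
    """Split data into at least `min_fragments` chunks so every file is fragmented."""
    size = max(1, -(-len(data) // min_fragments))        # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def hash_chunks(chunks: list) -> list:
    """Per-chunk SHA-256 digests kept by the owner to detect compromised nodes."""
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def verify(chunks: list, digests: list) -> list:
    """Return indices of chunks whose stored copy no longer matches its digest."""
    return [i for i, (c, d) in enumerate(zip(chunks, digests))
            if hashlib.sha256(c).hexdigest() != d]

data = b"patient-record: confidential payload spread across cloud nodes"
chunks = fragment(data)
digests = hash_chunks(chunks)
chunks[2] = b"tampered"                      # simulate a compromised storage node
print(verify(chunks, digests))               # [2]
```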

Anand Prakash Singh, Arjun Choudhary
Efficient Classification of True Positive and False Positive XSS and CSRF Vulnerabilities Reported by the Testing Tool

Security testing is essential for websites and web applications nowadays. It is easy for an attacker to breach security and carry out malicious activities through web applications if they are not properly protected against known attacks. The general practice for a web application before release is to use testing tools to identify the possible set of vulnerabilities, which can be true positives or false positives. The developer team is then asked to revise the code to protect against the true-positive vulnerabilities. For that, the testing team needs to classify each reported vulnerability as true positive or false positive individually, which is very time-consuming. This article suggests an innovation in this practice to reduce the time spent recognizing true-positive vulnerabilities. It presents a novel approach to classify multiple reported vulnerabilities of different attacks using a single script and in a single pass. These attacks should share common triggering events or testing processes. Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) attacks are chosen to illustrate the approach.

Monika Shah, Himani Lad
A Survey on Hardware Trojan Detection: Alternatives to Destructive Reverse Engineering

System security has always been associated with the software running on the system, while the hardware has by default been considered trusted. This root of trust in hardware has been violated with the emergence of Hardware Trojan (HT) attacks. Such attacks can be used by an adversary to leak important or secret information or to cause a system failure. This paper gives a broad overview of different techniques that can be used to detect an HT inside a circuit in place of destructive reverse engineering. Further, a detailed literature survey is presented which summarizes the efficiency of the detection techniques used in the literature.

Archit Saini, Gahan Kundra, Shruti Kalra
Comparative Study of Various Intrusion Detection Techniques for Android Malwares

The spread of digital crime has increased with the expansion in the use of smartphones. In particular, major security threats have been seen in the case of Android devices, as Android is the most popular operating system among smartphones. As these devices store confidential user data like private information and financial data, malware is being produced to steal this data. The reason why the Android OS is increasingly prone to malware attacks is that it does not restrict its users from downloading from unreliable sites. To understand the risks to Android users' data, it is relevant to comprehend the difference in behavior between genuine and malicious applications and to study mobile malware detection. There are various methodologies for detecting these intrusions, for example, static analysis, dynamic analysis and hybrid analysis, which are covered in this paper along with their functionalities. The benefits and constraints of each category of Android malware detection system are also discussed. Therefore, this paper fundamentally focuses on a comparative study of these techniques.

Leesha Aneja, Jaspreet Singh
Error Detection Using Syntactic Analysis for Air Traffic Speech

Air traffic controllers and pilots communicate primarily through voice/speech to perform their day-to-day operations. Automatic speech recognition, when applied in this domain, can reduce the workload of both controller and pilot. Speech processing in this domain has several challenges, like poor quality of the radio channel, faster speaking rates and a very strict vocabulary with countless accent combinations. As errors impact the speech recognition process, the focus is on error detection in air traffic controllers' speech when using automatic speech recognition. In this line, various error detection techniques available for normal English speech recognition are compared, and specific techniques that can help in the air traffic controllers' speech domain are discussed. This paper also emphasizes syntactic analysis as a major component of post-processing. Syntactic analysis along with phonetic string distance analysis helped to obtain close to a 10% overall improvement in word error rate and a 10–15% improvement in concept recognition rate for the experiments conducted over the air traffic speech data considered.

Narayanan Srinivasan, S. R. Balasundaram
Road Segmentation from Satellite Images Using Custom DNN

Recently, with enhancements in remote sensing and computation techniques, road detection from satellite images has become possible. Nowadays, precise extraction of roads from satellite images has become one of the most important fields of research in both remote sensing and transportation. The road network plays an imperative role in traffic systems, urban planning, route planning, and self-driving. In this paper, a technique for road segmentation from satellite images is introduced. In the proposed method, a custom deep neural network (DNN) is used for the detection of roads in satellite images. We have used a simple, custom neural network which is computationally faster than, and as accurate as, traditional deep neural networks like Inception, YOLO, and ResNet-50 for road detection in satellite images. In the initial stage, images are preprocessed with the help of OpenCV and morphology operations. We annotate each pixel value as 0 for non-road pixels and 1 for road pixels. With this annotated data, we train our custom DNN model. The road region is denoted by white pixels, and black pixels denote non-road regions. In the final step, a noise removal technique is used to remove spurious white pixels to improve the accuracy further.

Harshal Trivedi, Dhrumil Sheth, Ritu Barot, Rainam Shah
Performance Analysis of SoC and Hardware Design Flow in Medical Image Processing Using Xilinx Zed Board FPGA

The requirement for real-time implementation of image processing algorithms compels the adoption of FPGAs due to their parallelism, reconfigurability, and pipelining architecture. This paper presents a coherent design of advanced edge detection algorithms for medical image processing using two FPGA design methodologies: the SoC design flow and the hardware flow. Vivado HLS and the Vivado IP integrator implement the SoC design flow, while System Generator realizes the hardware flow. We have implemented Canny–Deriche edge detection and Laplacian of Gaussian (LoG) edge detection and processed several brain tumor images with distinct mathematical parameters such as threshold and standard deviation. Thus, this paper examines the two edge detection algorithms in terms of noise reduction, edge response characteristics, and edge localization, and the two design methodologies on three parameters: power consumption, resource utilization, and timing constraints. We present a real-time image processing method utilizing a pipeline structure that emphasizes medical image enhancement using this system on the Zed Board SoC FPGA platform.
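As a software reference for the two edge detectors compared in hardware, the sketch below runs a Laplacian of Gaussian and a standard Canny on a grayscale image with OpenCV; the file name brain.png, kernel sizes, sigma, and thresholds are illustrative assumptions, and plain Canny stands in for the Canny–Deriche variant implemented on the FPGA.

```python
import cv2

# Assumed: a grayscale brain MRI slice saved locally as 'brain.png'.
img = cv2.imread("brain.png", cv2.IMREAD_GRAYSCALE)

# Laplacian of Gaussian: smooth first (Gaussian with sigma), then take the Laplacian.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)
log_edges = cv2.Laplacian(blurred, cv2.CV_64F, ksize=3)
log_edges = cv2.convertScaleAbs(log_edges)

# Canny with hysteresis thresholds, as a reference for the Canny-Deriche result.
canny_edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("log_edges.png", log_edges)
cv2.imwrite("canny_edges.png", canny_edges)
```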

Neol Solanki, Chintan Patel, Neel Tailor, Nadimkhan Pathan
SDN Firewall Using POX Controller for Wireless Networks

Recently, high-speed broadband services, social networking and cloud environments have increased Internet penetration significantly. Due to this, there is a large amount of user data available on the Internet, including personal data, enterprise data and financial data. This leads to serious threats from malicious users. Researchers have proposed various security solutions for protecting this data from unknown threats. Most of the security solutions are employed on traditional networking techniques, which are very complex and difficult to manage. Traditional networking techniques depend on manual configuration of devices. Each device may have a different policy, leading to policy conflicts in managing the resources over the network; network security may be compromised in such cases. Software-defined networking (SDN) is a new paradigm addressing this issue. SDN provides various advantages, including network-wide visibility, centralized control, a flexible network architecture and ease of management. The control plane (network controller) is separated from the data plane (forwarding devices). The controller is responsible for monitoring, managing and controlling the behavior of the forwarding devices, and the OpenFlow protocol is used by the SDN controller. In this paper, we propose an SDN-based firewall. The POX Python-based SDN controller is used for control and management of the network. Using a wireless network topology based on the SDN controller, we evaluate the performance of the network in terms of delay, TCP bandwidth and UDP jitter. Different wireless network topologies have been created for evaluating the performance.

Sulbha Manoj Shinde, Girish Ashok Kulkarni

Latest Electrical and Electronics Trends

Frontmatter
Output Load Capacitance Scaling-Based Energy-Efficient Design of ROM on 28 nm FPGA

In this paper, we propose the design of an energy-efficient ROM by scaling down the output load capacitance. The ROM is implemented on a 28 nm Artix-7 FPGA in a CPG236 package. To achieve energy efficiency, we reduce the capacitance from 30 to 0 pF in steps of 15 pF to exhibit the effect of reduced capacitance on the total power consumption of the ROM. Reductions of 35.88% in I/O power consumption, 26.35% in static power consumption, 35.44% in total power consumption, and 23.04% in the junction temperature of the ROM are observed when the capacitance is reduced from 30 to 15 pF. Subsequently, reducing the capacitance from 15 to 0 pF results in a reduction of 55.96% in I/O power, 17.89% in static power, 54.64% in total power consumption, and 29.76% in junction temperature. From the experimental results, it is observed that signal power and logic power do not vary with the capacitance. By scaling the output load capacitance, not only is energy efficiency achieved, but thermal management is also improved: a reduction in junction temperature results in lower static power consumption and an increase in device reliability and lifespan. The design is implemented on the Artix-7 FPGA using the Vivado Design Suite and Verilog. Because of its low power consumption, this ROM design can be integrated into a microprocessor setup or assembled as a ROM ASIC.
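The trend reported above is consistent with the standard CMOS dynamic power relation, in which the I/O driver's dynamic power scales roughly linearly with the output load capacitance. The sketch below evaluates that relation for the 30, 15, and 0 pF steps; the activity factor, I/O voltage, and toggle rate are illustrative assumptions, not values from the paper, and the measured totals also include static power, which is why the reported reductions are smaller than ideal linear scaling.

```python
# Dynamic power of a CMOS output driver scales roughly as P = alpha * C * V^2 * f,
# so halving the output load capacitance roughly halves the I/O dynamic power.
alpha, v, f = 0.5, 1.8, 100e6          # activity factor, I/O voltage (V), toggle rate (Hz)

def io_dynamic_power(c_load_farads: float) -> float:
    return alpha * c_load_farads * v ** 2 * f

for c in (30e-12, 15e-12, 0.0):        # the 30 pF -> 15 pF -> 0 pF steps from the paper
    print(f"C = {c * 1e12:4.0f} pF -> P = {io_dynamic_power(c) * 1e3:.2f} mW")
```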

Pankaj Singh, Bishwajeet Pandey, Neema Bhandari, Shilpi Bisht, Neeraj Bisht
Image Correction and Identification of Ishihara Test Images for Color Blind Individual

Color blindness is when someone is unable to see color in a normal way or make out the differences in certain colors. The color-sensitive cells in our eyes react differently to various wavelengths of light giving our brain the information needed to create the ‘perceived’ color in our vision. There is no way to rectify the cells. In other words, color blind people cannot be treated. This paper uses image processing techniques to make Ishihara Test (test for color deficiency) images visible to partially and totally color blind people. Authors have targeted the following types of dichromatic color blindness—deuteranopia, protanopia, tritanopia and also monochromatic color blindness. All of this is achieved by applying various state-of-the-art image processing and mathematical operations in the image pixels to allow the user to ‘perceive’ the color as it was intended. To give a kind of proof of concept of the image processing for complete color blindness, a machine learning model has been implemented and trained on the widely known MNIST dataset.

Himani Bansal, Lalit Bhagat, Satyam Mittal, Ayush Tiwari
Gradient Feature-Based Classification of Patterned Images

Image classification is the task of assigning a class to an image. It has a wide range of applications: image and video retrieval, object tracking, object recognition, Web content analysis, number plate recognition, OCR in banking systems, etc. Color, texture, gradient, shape, and keypoint descriptors are among the various features used for image classification. A patterned image is an image in which a selected pattern is repeated, for example, horizontal stripes, vertical stripes, polka dots, or geometric shapes. The gradient feature plays a vital role in distinguishing different patterns. Therefore, in the proposed approach, gradient features are used for the classification of patterned images like cloth patterns (vertical stripes, horizontal stripes, polka dots, etc.), English characters (capital and small alphabets), numerals (0–9) and geometric shapes (square, triangle, etc.). The different patterns recognized in the present paper show the versatility of the approach; it can be applied to many real-time applications like number plate recognition and cloth pattern recognition and retrieval. The proposed approach achieves accuracies of 95.4%, 93.5%, 91.4% and 92% on the standard Describable Textures dataset (vertical stripes, polka dots), the EnglishImg dataset (small and capital English alphabets), a numerals dataset (0–9) and a geometric shapes (triangle, square) dataset, respectively.
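One common way to turn gradients into a classification feature is a histogram of oriented gradients (HOG); the sketch below trains a linear SVM on HOG features to separate synthetic vertical and horizontal stripe images. The paper's exact gradient feature may differ, and the stripe generator and parameters here are illustrative.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def striped(vertical: bool) -> np.ndarray:
    """Synthetic 64x64 image with alternating 8-pixel stripe bands plus noise."""
    img = np.zeros((64, 64))
    band = (np.arange(64) // 8) % 2 == 0
    if vertical:
        img[:, band] = 1.0
    else:
        img[band, :] = 1.0
    return img + rng.normal(0, 0.05, (64, 64))

images = [striped(v) for v in ([True] * 100 + [False] * 100)]
labels = np.array([1] * 100 + [0] * 100)

# HOG features capture the dominant gradient (stripe) direction in each cell.
X = np.array([hog(im, orientations=8, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
              for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```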

Divya Srivastava, B. Rajitha, Suneeta Agarwal
Corrosion Estimation of Underwater Structures Employing Bag of Features (BoF)

According to the World Corrosion Organization (WCO), the estimated annual cost of damage due to corrosion across the globe is approximately US$2.5 trillion, which corresponds to 3–4% of the GDP of developed countries. Minimizing losses due to corrosion and ensuring a longer life for structures is thus a major concern. This paper presents a technique employing a bag of features (BoF) for underwater structural corrosion recognition. BoF methods are based on an unorganized grouping of image features and are conceptually simpler than various alternatives. The model is trained on three labeled classes—corroded, un-corroded, and damaged—obtained from underwater structures along the Gomti River in Lucknow. A dataset of around 2000 images is used to train the model. The trained prototype BoF learning model is capable of efficiently classifying clean and corroded images and achieves an accuracy of 82.38%, demonstrating the feasibility of this method. The proposed technique and its deployment on handheld and autonomous devices provide an efficient and intelligent method for underwater structure corrosion recognition.
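The generic bag-of-features pipeline the abstract refers to is: extract local descriptors, cluster them into a visual vocabulary, represent each image as a histogram of visual words, and classify the histograms. The sketch below uses ORB descriptors, k-means, and an SVM as stand-ins; the file names are placeholders, and the paper's actual detector, vocabulary size, and classifier may differ.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def orb_descriptors(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.ORB_create(nfeatures=300).detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 32), dtype=np.uint8)

def bof_histogram(desc: np.ndarray, vocab: KMeans) -> np.ndarray:
    """Quantise descriptors against the vocabulary and build a normalised histogram."""
    words = vocab.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)

# Placeholder training lists; in practice these would cover all three classes.
train_paths, train_labels = ["corroded_01.jpg", "clean_01.jpg"], [1, 0]
all_desc = np.vstack([orb_descriptors(p) for p in train_paths]).astype(np.float32)

vocab = KMeans(n_clusters=50, random_state=0).fit(all_desc)      # visual vocabulary
X = np.array([bof_histogram(orb_descriptors(p), vocab) for p in train_paths])
clf = SVC(kernel="rbf").fit(X, train_labels)
```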

Anant Sinha, Sachin Kumar, Pooja Khanna, Pragya
On Performance Enhancement of Molecular Dynamics Simulation Using HPC Systems

The proposed work aims to enhance the performance of a molecular dynamics (MD) simulation code using various high-performance computing (HPC) approaches. The two-dimensional (2D) legacy code is parallelized using the message-passing interface (MPI). When the parallelization strategies are deployed on an HPC platform, performance and scalability improve, with a reduction in the required computational time. The simulation experiments used two different numbers of atoms while keeping the step size, time step, and initial and boundary conditions constant. Various profiling tools have been applied to identify the hot spots that consume most of the execution time in the code. The MD code is optimized employing the following four approaches: (1) force decomposition, (2) force decomposition with data organization, (3) intra- and inter-force decomposition with data organization, and (4) intra- and inter-force decomposition with data organization and grid management. The output of these approaches is tested for the required accuracy by comparing its results with the original standard MPI-parallelized code. The simulation results for these approaches are found satisfactory from the performance aspect. A comparative study is carried out based on various performance metrics like execution time, speedup ratio and efficiency with multiple processors. These approaches, when deployed on various platforms, are found to be better than the standard MPI-parallelized code, except for the data organization approach. When the code is reformed implementing all approaches, the maximum speedup is found to be in the range of 2.5–4.5 times, depending on the number of processors used. Enhancement of code performance by saving computation time helps to solve large-scale problems more efficiently.
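A minimal sketch of the force-decomposition pattern with mpi4py: each rank computes forces for its own round-robin slice of atoms and an Allreduce combines the partial results. The toy inverse-square pair interaction and the problem size are illustrative assumptions and do not reflect the paper's physics or its specific decomposition variants.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_atoms = 1024
positions = np.random.default_rng(0).random((n_atoms, 2))   # 2D positions
comm.Bcast(positions, root=0)                                # identical on every rank

# Force decomposition: each rank handles a round-robin slice of atoms, then the
# partial force arrays are summed on every rank with an Allreduce.
local_forces = np.zeros_like(positions)
for i in range(rank, n_atoms, size):
    diff = positions[i] - positions                          # vectors to all other atoms
    r2 = np.einsum("ij,ij->i", diff, diff) + 1e-12
    inv = np.where(np.arange(n_atoms) == i, 0.0, 1.0 / r2)   # skip self-interaction
    local_forces[i] = (diff * inv[:, None]).sum(axis=0)      # toy 1/r^2 repulsion

forces = np.zeros_like(local_forces)
comm.Allreduce(local_forces, forces, op=MPI.SUM)
if rank == 0:
    print("total force on atom 0:", forces[0])
```

Run, for example, with `mpiexec -n 4 python md_force_decomposition.py` (script name assumed).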

Tejal Rathod, Monika Shah, Niraj Shah, Gaurang Raval, Madhuri Bhavsar, Rajaraman Ganesh
Backmatter
Metadata
Title
Proceedings of Second International Conference on Computing, Communications, and Cyber-Security
Edited by
Dr. Pradeep Kumar Singh
Prof. Dr. Sławomir T. Wierzchoń
Sudeep Tanwar
Dr. Maria Ganzha
Prof. Joel J. P. C. Rodrigues
Copyright year
2021
Publisher
Springer Singapore
Electronic ISBN
978-981-16-0733-2
Print ISBN
978-981-16-0732-5
DOI
https://doi.org/10.1007/978-981-16-0733-2
