
2023 | Book

Information and Communication Technology for Competitive Strategies (ICTCS 2021)

ICT: Applications and Social Interfaces

Editors: Dr. Amit Joshi, Dr. Mufti Mahmud, Dr. Roshan G. Ragel

Publisher: Springer Nature Singapore

Book Series: Lecture Notes in Networks and Systems


About this book

This book contains the best selected research papers presented at ICTCS 2021: Sixth International Conference on Information and Communication Technology for Competitive Strategies, held in Jaipur, Rajasthan, India, during December 17–18, 2021. The book covers state-of-the-art as well as emerging topics pertaining to ICT and effective strategies for its implementation in engineering and managerial applications. The papers focus mainly on ICT for computation, algorithms and data analytics, and IT security. The book is presented in two volumes.

Table of Contents

Frontmatter
Two-terminal Reliability Assessments of a Complex System

Assessing the reliability of a system from its basic elements is one of the most important aspects of reliability analysis. Overall reliability is easy to determine when all components of a system are organized either in series or in parallel. When that is not the case, calculating the overall reliability becomes challenging: in a complex system, the components can be grouped neither in series nor in parallel, so determining its reliability is more complicated. There are two ways of computing network reliability: exact and approximate. The approaches discussed in this paper compute exact network reliability.
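As a minimal illustration only (not the paper's exact algorithm), the basic series/parallel reduction that the abstract contrasts with complex systems can be sketched in Python; the component reliabilities below are hypothetical:

```python
from math import prod

def series_reliability(components):
    # A series system works only if every component works.
    return prod(components)

def parallel_reliability(components):
    # A parallel system fails only if every component fails.
    return 1 - prod(1 - r for r in components)

# A mixed system: two parallel pairs connected in series.
pair1 = parallel_reliability([0.9, 0.9])   # 1 - 0.1 * 0.1 = 0.99
pair2 = parallel_reliability([0.8, 0.8])   # 1 - 0.2 * 0.2 = 0.96
overall = series_reliability([pair1, pair2])
print(round(overall, 4))  # 0.9504
```

A bridge network, where no such reduction exists, is exactly the case where these formulas no longer apply and exact methods such as those surveyed in the paper are needed.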

Ajay Agarwal, Apeksha Aggarwal
AI-Based Interactive Agent for Health Care Using NLP and Deep Learning

Artificial intelligence (AI)-based interactive agents play a vital role in health care. In this paper, we implement an AI-based interactive agent using natural language processing (NLP) and deep learning (DL) that resolves simple queries and provides health care services. This work helps to build a system that provides better health care service for users and increases user interaction, using a chatbot built on a DL and NLP pipeline. We conclude that chatbots built with NLP and neural networks are more efficient than human resources and than predefined chatbots, which are not efficient. Finally, we discuss potential research directions that can improve these capabilities.

U. Hemavathi, Ann C. V. Medona
Simulation of WSN-Based Crowd Monitoring and Alert Generation Architecture

Simulation tools provide a way to simulate a network under different conditions, test the network, and analyze the results produced. One such tool is the network simulator NS2. This paper presents a preliminary performance analysis of a WSN-based crowd monitoring architecture with the help of NS2. The architecture tries to reduce casualties by monitoring individuals at crowd gatherings, keeping track of the crowd in a sector, and dispersing the crowd to neighbouring sectors when the need arises. The architecture is simulated in NS2, and the generated results are analyzed against various criteria such as throughput, end-to-end delay, and energy consumption.

Jenny Kasudiya, Ankit Bhavsar
Real-Time Assessment of Live Feeds in Big Data

Constructing a functional, well-designed, and reliable big data application that caters to a variety of end-user latency requirements is an extremely challenging proposition. Setting aside building applications for the problem at hand, it can be daunting enough just to keep up with the fast pace of technology innovation happening in this space. However, there are certain high-level architectural concepts that help in understanding how diverse types of applications fit into the big data architecture and how several of these technologies are reshaping the existing enterprise software scene. Big data analytics is used to gather valuable findings, identify trends, and detect patterns in the ocean of information, and practitioners say it is vital to assess the prospective business significance that big data software can offer. This paper presents a study of the lambda architecture enterprise design for building a data-handling back-end on the cloud that serves intense, high-throughput, data-dense demand as services. The paper also covers the advantages offered by the lambda architecture and how effectively it can be used across a wide range of use cases in industry. Based on the experience gathered during this research, the paper also discusses the challenges posed by the lambda architecture.
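As an illustrative sketch only (not taken from the paper), the core idea of the lambda architecture, a precomputed batch view merged with a speed-layer view at query time, can be expressed in a few lines of Python; the view names and page counts are hypothetical:

```python
from collections import Counter

# Batch layer: a view precomputed over the master dataset
# (hypothetical page-view counts from the last batch run).
batch_view = Counter({"page_a": 10_000, "page_b": 7_500})

# Speed layer: an incremental view over events that arrived
# after the last batch recomputation.
speed_view = Counter()
for event in ["page_a", "page_a", "page_b", "page_c"]:
    speed_view[event] += 1

# Serving layer: answer queries by merging both views, so results
# are complete (batch) and fresh (speed) at the same time.
def query(page):
    return batch_view[page] + speed_view[page]

print(query("page_a"))  # 10002
```

In a production system the batch view would come from something like Hadoop or Spark and the speed view from a stream processor, but the merge-at-query-time contract is the same.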

Amol Bhagat, Makrand Deshpande
Mobile Application to Digitalize Education Volunteer Process

Sharing knowledge has long been a complicated task, but the emergence of smartphones has made it easier and more flexible. Still, many places do not move with the times, and people there can learn only limited knowledge with a limited workforce. Many educated people have come forward to improve these areas through means such as NGOs and student volunteering, but there is no proper mode of communication between school management and the people who are interested in helping them. A mobile app is one possible solution to this problem: it connects schools, students, and student volunteers, making communication between them simpler. Features like slot booking, nearest-school location details, and a chat service can be incorporated into the app, which can be implemented using the Java programming language, Android Studio, and Firebase. The final result gives these educational places an efficient workforce, introducing newer ways to learn and education for everyone.

Aishwarya Chengalvala, Divya Sai Asanvitha Gundala, Nallamothu Lakshmi Hemanth, Sethu Sandeep Kadiyala, G. Kranthi Kumar
Emotion Recognition in Human Face Through Video Surveillance—A Survey of State-of-the-Art Approaches

Emotion is a state of thoughts and feelings related to events and happenings, and it plays a significant role in all aspects of life. If we can capture the emotion of an individual, many issues can be resolved. Emotion recognition is becoming an interesting field of research due to the huge amount of information available from various communication platforms, and emotion detection will play a vital role as we move toward the digital era in all aspects of life. The capacity of a computer to understand emotion is necessary in many applications, especially emotion detected from video; emotional factors are significant because they give an effective view of customer behaviour. Video surveillance plays an important role in face detection and feature extraction, and helps us understand the emotion being expressed by an individual. Facial expression is the natural signal that conveys the quality of an emotion: the human face reveals the emotion a person is expressing in the course of actions and activities, and offers many clues that help formulate and solve issues and threats in the domain of emotion detection. This scholarly work is a comparative study of emotion recognition and classification from video sequences that exhibits the current trends and challenges in emotion recognition. The intensity and duration of an emotion are the two significant attributes for effective identification of an individual's emotion through video surveillance. This study gives insight and future directions in the domain of emotion recognition technology.

Krishna Kant, D. B. Shah
Character Segmentation from Offline Handwritten Gujarati Script Documents

Modern innovations make great impacts on the human lifestyle and way of working. They boost the efficiency and productivity of people by reducing effort, which helps in handling several tasks at a time. Nowadays, government offices, banks, businesses, and education systems are all influenced by paperless technology. It improves documentation and the secure sharing of information while saving space and resources. Paperless technology relies on Optical Character Recognition (OCR) to convert physical documents into machine-readable documents. OCR consists of two main steps: segmentation and recognition, and the success rate of character recognition depends on the segmentation of the required regions of interest. This paper introduces an algorithm that utilizes projection profile, bounding box, and connected component labeling techniques for the development of a Gujarati handwritten dataset and the segmentation of handwritten text from Gujarati documents into lines, words, and characters.
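As a minimal, hypothetical illustration of one of the techniques named above (not the paper's implementation), a horizontal projection profile can segment a binary image into text lines by finding runs of rows that contain ink:

```python
def horizontal_projection(image):
    # Row-wise ink counts of a binary image (1 = ink, 0 = background).
    return [sum(row) for row in image]

def segment_lines(image):
    # Contiguous runs of non-empty rows are treated as text lines;
    # each line is reported as an inclusive (start_row, end_row) pair.
    profile = horizontal_projection(image)
    lines, start = [], None
    for i, count in enumerate(profile):
        if count > 0 and start is None:
            start = i
        elif count == 0 and start is not None:
            lines.append((start, i - 1))
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines

# A toy 4x4 image: two "text lines" separated by one blank row.
img = [
    [0, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 1, 0, 1],
]
print(segment_lines(img))  # [(0, 1), (3, 3)]
```

The same idea applied column-wise within a line yields word and character candidates, which is where bounding boxes and connected component labeling take over.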

Mit Savani, Dhrumil Vadera, Krishn Limbachiya, Ankit Sharma
A Comparative Study of Stylometric Characteristics in Authorship Attribution

This paper presents a study of various stylometric characteristics and their uses, along with various methods of feature selection, extraction, and classification for authorship attribution (AA). We note the importance of data or corpus size, which plays a heuristic role in choosing a sufficient number of features to correctly recognize the true writer of an unattributed text or document. We also observe that feature types have changed over time and that newly introduced features have made the task easier, while machine learning has become more interactive and easier to use. Several software tools are available for the AA task, such as WEKA and JGAAP.

Urmila Mahor, Aarti Kumar
Is Data Science and Blockchain a Perfect Match?

Blockchain technology can help with IoT, digital transaction records, and supply chain management. Enterprises across the world are focusing more on the research and development of blockchain technology. Blockchain innovation permits end-users to transact with each other without the involvement of a middleman: any asset, from cash to motion pictures, can be moved, stored, transacted, managed, and traded without intermediaries. Data science, as we know it, is the science of extracting valuable insights and information from both structured and unstructured data to solve real-world problems. With many advantages and challenges, blockchain and data science can prove to be a powerful blend for managing data quantity with quality, productively. In this paper, we investigate this new aspect of combining data science and blockchain.

P. Swathi, M. Venkatesan, P. Prabhavathy
Review of Road Pothole Detection Using Machine Learning Techniques

This paper is about detecting the number of potholes on roads using digital image processing and machine learning. There is a dire need to improve road conditions and quality: road conditions in many areas of the country are poor, and bad roads are the reason for many road accidents today. This software helps establish the presence and quantity of potholes on the roads. The data can be sent to road repair and maintenance teams, who can then improve the roads knowing the exact number of potholes present on specific roads and localities. The motivation for this project came from the greater social cause of improving the roads so that every regular day of a person is safer than today. The procedure uses image preprocessing followed by machine learning to obtain the desired results: digital image processing is used to read and pre-process the image, and machine learning is used to train the software to recognize potholes. We identify potholes with the use of a video and a dataset of images. The video is split into frames, each frame is checked for the presence of a pothole, and once potholes are detected we count them across all the frames created by our algorithm. Our aim is to ease the process of removing potholes on roads, which will in turn help reduce the large number of fatalities and accidents.
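As a simplified stand-in for the per-frame counting step (the paper uses a trained ML model on real video frames; here a plain connected-component count runs on a hypothetical thresholded frame mask):

```python
def count_blobs(mask):
    # Count 4-connected regions of candidate pothole pixels (1 = candidate)
    # in a binary mask produced by a hypothetical preprocessing step.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]

    def flood(r, c):
        # Iterative flood fill marking every pixel of one region.
        stack = [(r, c)]
        while stack:
            y, x = stack.pop()
            if 0 <= y < h and 0 <= x < w and mask[y][x] and not seen[y][x]:
                seen[y][x] = True
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]

    blobs = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                flood(r, c)
    return blobs

# One video frame's toy mask: two separate candidate regions.
frame_mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]
print(count_blobs(frame_mask))  # 2
```

Summing this count over every frame extracted from the video gives the kind of per-road pothole tally the abstract describes.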

Ankit Shrivastava, Devesh Kumar Srivastava, Aditi Shukla
Performance Improvement of the Multi-agent System of Knowledge Representation and Processing

The article deals with the organization and development of the Multi-Agent System of Knowledge Representation and Processing (MASKRP). The diagrams of states, transitions and interaction of software agents, the types of queries to the knowledge base and the methods of knowledge processing are described. Ways to improve the performance of the Multi-Agent Solver and the construction of an optimal logical structure of a Distributed Knowledge Base (DKB) are proposed. In the algorithm for the synthesis of the optimal logical structure of the DKB, it is proposed to distinguish two interrelated stages. At the first stage, two problems are solved: the distribution of DKB clusters in the nodes of the computing system and the optimal distribution of data groups of each node by the types of logical records. At the second stage the problem of DKB localization on the nodes of the computer network is solved. As a result of optimization, the distributed knowledge base is decomposed into a number of clusters that have minimal information connectivity with each other. The results demonstrate the efficiency of the presented approach to development of the MASKRP.

Evgeniy I. Zaytsev, Elena V. Nurmatova, Rustam F. Khalabiya, Irina V. Stepanova, Lyudmila V. Bunina
Cyber-Attack in ICT Cloud Computing System

Nowadays, cloud systems are plagued by cyber-attacks, which undermine a great deal of today's societal functions and economic development. To grasp the motives behind cyber-attacks, we look at them as societal events related to social, economic, cultural, and political factors (Kumar and Carley in 2016 IEEE conference on intelligence and security informatics (ISI). IEEE, 2016 [1]). To find factors that encourage unsafe cyber activities, we build a network of aggregate country-to-country cyber-attacks and compare it with other country-to-country networks. In this paper, we present a novel approach to detecting cyber-attacks by exploiting three attacks (Zero, Hybrid, and Fault) which can easily break a cyber system. The hybrid attack combines two attacks: a dictionary attack and a brute-force attack. First, we analyze the system and then launch the attacks. We observe that higher corruption and large Web bandwidth favor attack origination. We also find that countries with higher per-capita GDP and better information and communication technologies (ICT) infrastructure are targeted more often (Kumar and Carley in 2016 IEEE conference on intelligence and security informatics (ISI). IEEE, 2016 [1]).

Pranjal Chowdhury, Sourav Paul, Rahul Rudra, Rajdeep Ghosh
Internet of Things: A Review on Its Applications

The Internet of Things (IoT) is the ambassador of computing technology and the next phase in the evolution of the Internet. It is the interconnection of billions of smart things, objects with sensors, actuators, networking and technology, which work together to make this “network of networks of autonomous objects” a reality by providing smart services to end-users. In this paper, we give an overview of IoT and its history. Secondly, we discuss some applications of IoT, mainly the smart home, smart health care, smart farming, the smart city, and smart industry, as well as blockchain applications in industry. Finally, the issues and challenges are analysed to encourage further investigation into these domains.

C. A. Irfana Parveen, O. Anjali, R. Sunder
Cumulative Neural Network Classification and Recognition Technique for Detecting and Tracking of Small-Size Garbage Materials

A review of current advanced classification and recognition techniques, as well as recent technological advances in machine vision, shows that the application of a cumulative neural network (among the most advanced machine learning algorithms) to waste management is an area of research that remains unexplored. The neural network used here serves as a proof of concept. A low-cost method for identifying and classifying recyclables can increase sorting efficiency, lower the human workload, and help us better understand how such neural networks may change the waste management industry. Using only color images of input waste, the system was able to classify objects by type of material (paper, glass, cardboard, metal, and plastic) with an accuracy of up to 90%. The potential implementation of the recycling algorithm was assessed in terms of economic, social, commercial, and environmental performance, under the concept of integrated and sustainable waste management. When CNN-based systems are compared with existing waste management technologies, they are found to have the potential to modify extensive, semi-reversible manufacturer liability programs and to change the economics that underpin all recycling.

Vishal Pattanashetty, Suneeta Bhudhihal, K. Shamshuddin, Shweta Kore, Shraddha Hiremath
Performance Analysis of OFDM-IM Using DLD

The proposed work presents an advanced signal detection method, which is a considerable challenge to model for OFDM-IM. The complexity at the receiver side increases greatly due to the introduction of Index Modulation (IM), which also requires knowledge of the Channel State Information (CSI); this further increases complexity and results in system overhead. The deep learning-based detector (DLD) improves system performance and avoids this overhead compared to traditional detectors such as Maximum Likelihood (ML), the Greedy Detector (GD), and the Log Likelihood Ratio (LLR). The proposed DLD uses a deep neural network (DNN) with fully connected layers to detect the data bits at the receiver of the OFDM-IM system. First, the DLD is trained offline on data sets of simulated results to improve BER performance; the trained model is then used for online detection of the OFDM-IM signal at the receiver. The results prove that the deep learning-based detector provides adequate BER performance with a lower runtime than traditional detection methods.

Chilupuri Anusha, S. Anuradha
Blockchain Technology—A Brief Survey

Today’s world is all about innovation. At present, no human can lead a normal lifestyle without the application of software technologies in their day-to-day activities. In such a context, security has become a great concern for maintaining people's sensitive records, and blockchain technology provides a solution. A blockchain is a public ledger that is open to all but not controlled by any central authority. It is a technology that enables individuals and businesses to work together with trust and accountability. Cryptographic currencies like Bitcoin are among the best-known blockchain applications. Blockchain technology is considered the driving factor behind the next fundamental IT revolution. Several blockchain implementations are commonly accessible today, each with its particular strength for a particular application domain. This paper gives a brief introduction to the applications of blockchain technology in various domains.

F. Monica Rexy, G. Priya
Topic Modelling-Based Approach for Clustering Legal Documents

The justice system has been institutionalized around the world for a long time, increasing the number of resources available for and in this field. The colossal increase in dependency on the World Wide Web is commensurate with the increase in digitization of documents. Along with this growth has come the need for accelerated knowledge management: automated aid in organizing, analysing, retrieving, and presenting content in a useful and distributed manner. For a fair, cogent, and strong legal case to be built, the individual fighting the case must have access to case documents not only from several years ago but also from a few months ago; any part of any of these cases that received a verdict in previous years could be beneficial to the individual's case. Considering these factors, there is an evident need for a search engine for legal documents that provides users with all the relevant documents they require. Moreover, unlike widely accessible documents on the Internet, where search and categorization services are generally free, the legal profession is still largely a fee-for-service field, which makes quality (in terms of performance metrics such as precision and recall) a key differentiator in the services provided. This paper proposes a unique approach to clustering these documents using the mini batch k-means algorithm on dimensionally reduced sentence embeddings generated with DistilBERT and UMAP. The proposed approach has been compared to state-of-the-art topic modelling and clustering approaches and has outperformed them.
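As an illustrative sketch, the mini batch k-means update rule (Sculley, 2010) can be shown on toy 2-D points standing in for the UMAP-reduced DistilBERT embeddings; in practice one would use scikit-learn's MiniBatchKMeans rather than this hand-rolled version, and the points and initialization below are hypothetical:

```python
import random

def mini_batch_kmeans(points, init_centers, batch_size=4, iters=100, seed=0):
    # Minimal mini batch k-means on 2-D points: on each iteration, assign a
    # small random batch to the nearest centers, then nudge those centers.
    rng = random.Random(seed)
    centers = [list(c) for c in init_centers]
    counts = [0] * len(centers)
    for _ in range(iters):
        for p in rng.sample(points, batch_size):
            # Nearest center by squared Euclidean distance.
            j = min(range(len(centers)),
                    key=lambda c: (centers[c][0] - p[0]) ** 2
                                + (centers[c][1] - p[1]) ** 2)
            # Per-center learning rate 1/count, so steps shrink over time.
            counts[j] += 1
            eta = 1.0 / counts[j]
            centers[j][0] += eta * (p[0] - centers[j][0])
            centers[j][1] += eta * (p[1] - centers[j][1])
    return centers

# Two well-separated toy "document embedding" clusters; the first and last
# points are used as deterministic initial centers for this sketch.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10), (10, 11), (11, 10), (11, 11)]
centers = sorted(mini_batch_kmeans(pts, init_centers=[pts[0], pts[-1]]))
print(centers)
```

The batch-at-a-time update is what makes the algorithm scale to large document collections where full k-means passes would be too slow.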

Aayush Halgekar, Ashish Rao, Dhruvi Khankhoje, Ishaan Khetan, Kiran Bhowmick
Traffic Sign Detection and Recognition

The surge in the number of automobiles on the road calls for automatic systems for driver assistance. These systems also form significant components of self-driving automobiles. Traffic Sign Recognition (TSR) is one such automatic system, providing the situational awareness needed by a self-driving automobile. In this work we detect and identify traffic signs in video sequences captured by an onboard automobile camera. TSR is used to read traffic signs, inform the driver, and enable or prohibit certain actions. Fast, real-time, and robust automatic traffic sign detection and recognition can support and unburden the driver and significantly increase driving safety and comfort. Automatic recognition of traffic signs is also important for automated intelligent driving automobiles and driver assistance systems. This paper presents a study that identifies traffic signs via an OpenCV procedure and converts the detected sign into text and an audio signal. The images are extracted, detected, and recognized by preprocessing through numerous image processing methods, after which the stages for detecting and recognizing traffic sign patterns are carried out. The system is trained and validated to find the best network architecture. For network training and evaluation we generated a dataset consisting of 1012 images in 8 diverse classes. The experimental results demonstrate highly accurate classification of traffic sign patterns with complex background images, as well as the computational cost of the proposed system. However, numerous factors make road sign recognition tricky and problematic, such as lighting changes, occlusion of signs by obstacles, distortion of signs, and motion blur in video images.

Preeti S. Pillai, Bhagyashree Kinnal, Vishal Pattanashetty, Nalini C. Iyer
Lung CT Scans Image Compression Using Innovative Transformation Approach

Doctors face difficulty in the diagnosis of lung cancer due to the complex nature and clinical interrelations of computer-diagnosed scan images. An image is an artifact that depicts visual perception. It contains a large amount of data, hence requires more memory storage and causes inconvenience for transmission over a limited-bandwidth channel. By removing excessive redundant data, image compression reduces the amount of data required to represent an image and reduces the cost of storage and transmission. Before compressing an image, we perform an image transformation that yields more zeros than ones. Image compression minimizes the size in bytes of a graphic file without degrading the quality of the image; the main aim of compression is to reduce redundancy in stored or communicated data, in turn increasing effective data density. In this research work, the transformation and compression are performed in standard C code to obtain the compressed image.

J. Vasanth Wason, A. Nagarajan
Effect of Data Compression on Cipher Text Aiming Secure and Improved Data Storage

Cryptography is a technique for protecting data from unauthorized access using an encryption process. Encryption converts plain text into cipher text, which is in non-readable form. Past studies suggest that the size of the cipher text is the major concern that hinders users from adopting encryption methods to secure their data. This work is an experimental study exploring the effect of data compression on an amino acid form of cipher text using dictionary-based methods as well as entropy coding methods, without compromising security. The compression ratio is measured for different file sizes. The results show that 47% storage savings is achieved using the entropy coding method and 60% using the dictionary-based coding method; owing to this, storage efficiency is also doubled. The advantage of this method is thus demonstrated, providing improved data storage.
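To illustrate how compression ratio and storage savings are measured (the paper's codecs and amino acid encoding are its own; here Python's standard zlib, whose DEFLATE format combines dictionary and entropy coding, serves as a stand-in, and the payload is a made-up repetitive string):

```python
import zlib

# A repetitive amino-acid-like payload; a structured encoding like this
# compresses well, unlike raw high-entropy cipher text.
data = ("MKTAYIAKQR" * 200).encode()

compressed = zlib.compress(data, 9)          # level 9 = best compression
ratio = len(data) / len(compressed)          # original size / compressed size
savings = 1 - len(compressed) / len(data)    # fraction of storage saved
print(f"ratio={ratio:.1f}, savings={savings:.0%}")
```

Reporting savings as a percentage of the original size, as above, matches the 47% and 60% figures quoted in the abstract.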

M. K. Padmapriya, Pamela Vinitha Eric
RSS Using Social Networks User Profile

E-commerce applications have gained importance in recent days due to their worldwide services and user-friendly interfaces. Social networking applications have become the backbone of e-commerce applications and help in their functioning, and an increasing number of e-commerce applications are associated with social networking sites and their user profiles. E-commerce sites use these social networking sites as service providers to customers associated with their social networking profiles; in these online transactions, social networking sites act as third-party service providers to their users. Social networking sites acting as third-party service providers should be careful about user types, user behaviour, and the services they provide, and users should likewise be careful about which services to choose. In these three-tier interactions, trust plays a very important role: trust in any service provider should be evaluated to stay away from online fraudulent activities, and users interacting through these applications should always be able to distinguish trusted services from fraudulent ones. Hence, service selection in social networks is proposed with the help of the Refined Service Selection (RSS) algorithm using social network user profiles. Implementation is done with a real data set, and the accuracy of RSS is found to be higher than that of existing algorithms.

S. M. Manasa, Shreesha, Leena Giri, R. Tanuja, S. H. Manjula, K. R. Venugopal
Learning-Based Framework for Resource Allocation for Varying Traffic

Network function virtualization (NFV) presents a model to remove physical middleboxes and replace them with virtual network functions (VNFs) that are more resilient. To adjust resource allocation in response to the varying demands of traffic, there is a need to instantiate VNFs and balance resource allocation based on demand. Current optimization methods frequently assume that the amount of resources required by every VNF instance is fixed, resulting in either resource wastage or poor quality of service. To resolve this issue, machine learning (ML) models are applied to real-time VNF data containing performance indicators and resource requirements. Evaluation shows that using ML models along with the VNF placement algorithms reduces resource consumption, thereby improving quality of service and reducing delay.

P. Anvita, K. Chaitra, B. Lakshmi, Shraddha, B. Suneeta, P. Vishal
Machine Learning Classifiers for Detecting Credit Card Fraudulent Transactions

Credit card usage has increased significantly as a result of the fast development of e-commerce and the Internet. As a consequence, credit card theft has risen substantially in recent years, and fraud in the financial sector is expected to have far-reaching effects in the near future. In response, numerous scholars are concerned with financial fraud detection and prevention, where accuracy is critical to avoid bothering innocent consumers while detecting fraud. We used hyperparameter optimization to see whether models built with different machine learning approaches are significantly the same or different, and whether resampling strategies improve the suggested models' performance. The hyperparameters are optimized using GridSearchCV; the GridSearchCV and random search methods are used to test the hypotheses on data split into training and test sets. The maximum accuracy of 72.1% was achieved by a decision tree classifier on the imbalanced German credit card dataset, and the maximum accuracy of 98.6% by LDA on the imbalanced European credit card dataset. Additionally, logistic regression and naïve Bayes were tested, and SMOTE was applied.
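The exhaustive sweep that GridSearchCV performs can be sketched in plain Python; the scoring function below is a synthetic stand-in for cross-validated accuracy (peaking at made-up values), and the parameter names merely echo a decision tree's hyperparameters:

```python
from itertools import product

# Synthetic stand-in for cross-validated accuracy; GridSearchCV would
# instead fit and score a real model for every parameter combination.
def cv_score(max_depth, min_samples_leaf):
    return 0.7 - 0.01 * abs(max_depth - 5) - 0.02 * abs(min_samples_leaf - 10)

# Parameter grid: every combination is evaluated exhaustively.
grid = {"max_depth": [3, 5, 8], "min_samples_leaf": [1, 10, 50]}

best_params = max(
    ({"max_depth": d, "min_samples_leaf": m}
     for d, m in product(grid["max_depth"], grid["min_samples_leaf"])),
    key=lambda p: cv_score(**p),
)
print(best_params, round(cv_score(**best_params), 3))
```

Random search differs only in sampling a fixed number of combinations from the grid instead of trying all of them, which is why the two are often compared as in this paper.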

Bharti Chugh, Nitin Malik
Virtual Ornament Room Using Haar Cascade Algorithm During Pandemic

Generally, purchasers wish to try on jewellery and share photos on social media to get recommendations. In the present situation, however, everyone is avoiding unnecessary touching, which is not safe due to COVID-19. To let purchasers keep trying on jewellery, a solution is provided through an application called the virtual ornament room. The proposed scheme is a Web application that identifies where the human face is located in the frame and superimposes the chosen jewellery on the face using the Haar Cascade Algorithm. Using the concepts of augmented reality, the digital objects are estimated and placed onto the frame in real time. The system is implemented using the Flask framework and OpenCV, a Python module, and works with an attached camera, the Internet, and a Web browser. The results show that all the jewellery can easily be selected by customers from their homes during the pandemic.

S. S. Sunaina M, L. S. N. J. Manjusha P, Kishore O
Performance Analysis of an ADM Ring Network for Optical Communication System

In this paper, the performance of an ADM optical ring network is demonstrated. A four-node ring network at 40 Gbps is demonstrated, and it is analyzed that the signal quality rises as the number of nodes is increased. It is found that the signal can be transferred up to a communication distance of 300 km with acceptable BER and Q factor; degradation of the signal occurs beyond 300 km. The received power at various input powers is investigated, and results are also obtained from eye diagrams, from which crosstalk of the signal can be observed.

Amit Grover, Shyam Akashe, Anu Sheetal, Reecha Sharma, Meet Kumari
Biometric-Based Authentication in Online Banking

The use of technology has become an integral part of human life. Most importantly, the introduction of the Internet has made people's lives easier, and due to its cost effectiveness, Internet usage has increased. The Internet has moved people toward the online mode of transaction. Banks recommend that their customers use Internet banking and assure them it is the safest mode of transaction, but it is associated with considerable risk. The continuous rise in online banking brings several security issues and increases the cost of implementing higher-security systems for banks and customers. Present online banking technology works on 2-factor authentication, i.e., it operates at both the transaction level and the authentication level. The main problem with 2-factor authentication is the associated risk of cyberattacks such as SIM swapping fraud, so we propose a biometric authentication technique which makes transactions more secure.

Shivaraj Hublikar, Vishal B. Pattanashetty, Venkatesh Mane, Preeti S. Pillai, Manjunath Lakkannavar, N. S. V. Shet
Various Diabetes Detection Techniques: A Survey

Diabetes mellitus is one of the most serious illnesses, and it affects a large number of people. It may be caused by age, obesity, lack of exercise, genetic predisposition, lifestyle, poor diet, high blood pressure, and other factors. Diabetics are at a higher risk of developing complications such as heart failure, kidney disease, stroke, eye problems, and nerve damage. The standard hospital procedure is to collect the information needed for diabetes diagnosis through a variety of tests and to prescribe appropriate medication based on the results. Type 1 and type 2 diabetes are the most common forms of the condition, but there are also other kinds, such as gestational diabetes, which occurs during pregnancy. The emphasis of this paper is on various prediction techniques that have had a major impact in the field.

Shahee Parveen, Pooja Patre, Jasmine Minj
Utilization of Intellectual Learning Methods in Forest Conservation: Smart Growth and Upcoming Challenges

Nowadays, forest ecology draws on various disciplines, notably machine learning, a crucial branch of artificial intelligence. Here, the widely used machine learning approaches and their applications in forest ecology over the last ten years are reviewed. Although ML techniques help classify, model, and predict in forest ecology studies, the further adoption of ML is limited by the insufficiency of applicable data and the high threshold for applying these methods. On the other hand, the combination and use of different algorithms, as well as richer discussion and collaboration between environmental scientists and ML developers, still present significant challenges for upcoming environmental research. We believe that in the era of big data, where ecologists will soon have access to additional kinds of data such as sound and video, future uses of machine learning in environmental science will become an increasingly capable tool for ecologists, potentially inaugurating new ways of investigation in forest environmental science.

R. Vasanth, A. Pandian
A Review on Deep Learning Techniques for Saliency Detection

Salient object detection (SOD) plays an important role in computer vision and digital image processing, especially in fields such as medicine, ecology, and transportation. At the same time, considerable complexity exists in detecting salient objects due to the effects of light, weather, and image density. This review paper summarizes existing implementations and recent technological developments in SOD, with major attention given to deep learning techniques and edge detection techniques. From this work, it is observed that deep learning combined with convolutional neural networks detects objects accurately in less time, and the review of different edge detection methods shows that incorporating them can detect objects efficiently even in cluttered and occluded regions.

Kokila Paramanandam, R. Kanagavalli
Energy and Memory-Efficient Routing (E&MERP) for Wireless Sensor Networks

Wireless sensor networks (WSNs) are composed of sensor nodes with sensing, computing, and communication capabilities. However, sensors have limited energy and memory capacity for these operations, so using the available energy and memory efficiently is a challenging problem in WSNs. Cluster-based routing protocols are designed to extend network lifetime. In this work, we recommend an advanced combination of clustering and a replacement algorithm to decrease energy consumption, prolong network lifetime, and improve throughput during frequent transmission. The recommended protocol reduces energy depletion by selecting as cluster head (CH) the node with maximum residual energy, thereby preventing energy holes and improving network lifetime, and it manages memory with a replacement algorithm in case of buffer overflow. Simulations show that the recommended protocol achieves a longer network lifetime and higher data delivery than protocols such as LEACH, LEACH-C, SEP, and EEEHR.
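The two ideas in the abstract, electing the maximum-residual-energy node as cluster head and applying a replacement policy on buffer overflow, can be sketched in a few lines. This is a hypothetical illustration (the node dictionary layout and the drop-oldest policy are assumptions, not the paper's exact replacement algorithm):

```python
def elect_cluster_head(nodes):
    """Pick the node with maximum residual energy as cluster head (CH).

    nodes: list of dicts with 'id' and 'energy' (residual energy),
    an assumed representation for this sketch.
    """
    return max(nodes, key=lambda n: n["energy"])

def enqueue_packet(buffer, packet, capacity):
    """Buffer management with a drop-oldest replacement policy on
    overflow (a stand-in for the paper's replacement algorithm)."""
    if len(buffer) >= capacity:
        buffer.pop(0)          # evict the oldest buffered packet
    buffer.append(packet)
    return buffer
```

Re-electing the CH each round as energies deplete is what spreads the load and avoids the energy-hole problem the abstract mentions.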

Karuppiah Lakshmi Prabha, Chenguttuvan Ezhilazhagan
Multi-scale Aerial Object Detection Using Feature Pyramid Networks

Aerial object detection on a UAV or embedded vision platform requires accurate detection of objects with various spatial scales and has numerous applications in surveillance, traffic monitoring, search and rescue, etc. The task of small-object detection becomes harder when using standard convolutional neural network architectures due to the reduction in spatial resolution. This work evaluates the effectiveness of using feature pyramid hierarchies with the Faster R-CNN algorithm for aerial object detection. The VisDrone aerial object detection dataset with ten object classes has been utilized to develop a Faster R-CNN ResNet model with C4 and FPN architectures to compare the performance. Significant improvement in the performance obtained by using feature pyramid networks for all object categories highlights their importance in the multi-scale aerial object detection task.

Dennis George Johnson, Nandan Bhat, K. R. Akshatha, A. K. Karunakar, B. Satish Shenoy
Application-Based Approach to Suggest the Alternatives of Daily Used Products

For decades, nature conservation has been considered a serious topic, and with the support of people of all ages, it can be engaged with in an exciting way. A multipurpose application is created using Flutter and Firebase machine learning vision (ML vision), in which one can use a camera/gallery image to find alternatives to day-to-day items and see the amount of pollution caused by a similar item. The application also suggests an environment-friendly alternative to daily used products. Soon, detecting items and seeking alternatives will become a fun learning task. Users can also share the amount of pollution they have saved after selecting environment-friendly alternatives and can win rewards and social media badges. The application also has a community tab in which users can share their thoughts on nature conservation and groups of people can conduct healthy discussions on conservation topics.

Vedant Dokania, Pratham Barve, Krushil Mehta, Dev Patel, Mayur M. Sevak
Comparative Analysis of Multi-level Inverters with Various PWM Techniques

Multi-level inverters (MLIs) have consistently been popular in inverter arrangements because of their added benefits of low distortion with an improved output waveform, low-value filter components, higher power yield, etc. This paper presents a comparative analysis of a 17-level voltage source inverter with PWM techniques against the conventional cascaded H-bridge MLI topologies. The two inverters are contrasted in terms of THD, output voltage, and currents. The switching instances ensure the minimum number of switching transitions in contrast to the H-bridge MLI and allow a design with a reduced switch count. The proposed 17-level MLI is verified through simulations of the PWM techniques and the cascaded H-bridge model in MATLAB, and the corresponding simulation results are presented.

V. Ramu, P. Satish Kumar, G. N. Srinivas
A Review of Various Line Segmentation Techniques Used in Handwritten Character Recognition

Segmentation is a very critical stage in the character recognition process as the performance of any character recognition system depends heavily on the accuracy of segmentation. Although segmentation is a well-researched area, segmentation of handwritten text is still difficult owing to several factors like skewed and overlapping lines, the presence of touching, broken and degraded characters, and variations in writing styles. Therefore, researchers in this area are working continuously to develop new techniques for the efficient segmentation and recognition of characters. In the character recognition process, segmentation can be implemented at the line, word, and character level. Text line segmentation is the first step in the text/character recognition process. The line segmentation methods used in the character recognition of handwritten documents are presented in this paper. The various levels of segmentation which include line, word, and character segmentation are discussed with a focus on line segmentation.

Solley Joseph, Jossy George
Designing a Digital Shadow for Pasture Management to Mitigate the Impact of Climate Change

Pasture is said to be a natural feed for animals, and pastures are maintained to ensure good feed quality and quantity and thus the best produce from the animals. Traditional ways of managing pastures are costly and labor intensive. The use of technology in farming introduced the concept of smart farming, which made pasture management easier and more cost effective. Smart farming integrates advanced technological methods including the Internet of Things (IoT), big data, and cloud computing, which enable tracking, monitoring, automation, and complex operational analysis. However, even with these techniques, challenges remain in pasture management. Pastures depend on climatic conditions, so climate change has a great impact on their management. With the onset of climate change, it has become hard for livestock farmers to predict weather conditions, making it difficult for decision makers to deal effectively with the effects of climate variability. The digital twin concept has proved to be a strong technique for tackling these challenges owing to its prediction techniques, which include artificial intelligence (AI) and machine learning. This paper analyses the models used in building a digital shadow as the first step in developing a digital twin.

Ntebaleng Junia Lemphane, Rangith Baby Kuriakose, Ben Kotze
Parsing Expression Grammar and Packrat Parsing—A Review

Ford presented Parsing Expression Grammars (PEGs) as an alternative way to specify the rules of a programming language, along with the Packrat parser, which is based on memoization. Ford's approach guarantees that grammars written as PEGs parse in linear time in spite of backtracking. The primary aim of this paper is to present the details of PEGs, followed by the various open challenges, for the reader's better understanding. From the overview presented, it has been observed that PEGs address the issue of undesired ambiguity in grammars for computer-oriented programming languages by not allowing any ambiguity in the rules in the first place. However, the guarantee of linear-time execution comes at the cost of large heap consumption, making it infeasible for large inputs. Optimizing the resources required for memoization may allow us to utilize the benefits offered by PEGs.
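The memoization idea behind packrat parsing can be shown on a tiny PEG, S <- 'a' S / 'a' (one or more 'a's). Caching each (rule, position) result means every position is parsed at most once, giving linear time despite ordered-choice backtracking; the heap cost the abstract mentions is exactly this memo table. A minimal sketch, not Ford's full formalism:

```python
class PackratParser:
    """Packrat-style parser for the PEG  S <- 'a' S / 'a'."""

    def __init__(self, text):
        self.text = text
        self.memo = {}  # (rule, pos) -> end position, or None on failure

    def parse_S(self, pos):
        key = ("S", pos)
        if key in self.memo:              # memo hit: constant-time reuse
            return self.memo[key]
        result = None
        if pos < len(self.text) and self.text[pos] == "a":
            rest = self.parse_S(pos + 1)  # first alternative: 'a' S
            # ordered choice: fall back to the second alternative, 'a'
            result = rest if rest is not None else pos + 1
        self.memo[key] = result
        return result

def matches(text):
    """True if the whole input is one or more 'a's."""
    return PackratParser(text).parse_S(0) == len(text)
```

Note that the memo table stores one entry per (rule, position) pair, which is the source of the linear-time guarantee and of the heap consumption discussed in the paper.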

Nikhil S. Mangrulkar, Kavita R. Singh, Mukesh M. Raghuwanshi
Precision Testing of a Single Depth Camera System for Skeletal Joint Detection and Tracking

Implementing a marker-less, camera-based system to capture skeletal joint motion in a timely manner that is robust to variables such as lighting conditions, occlusion, and changes in the subject's distance from the camera remains a challenge. This paper aims to test the feasibility of such a system using the Intel RealSense D435i depth camera to capture joint coordinates in the camera reference frame. The subject was placed in various lighting conditions to analyze the effect of glare on joint coordinate precision, and a decimation filter and a spatial filter were applied to test which of these filters reduced noise in the image signal most effectively. To examine which setup is the most effective in precisely measuring the coordinates of the neck, elbows, wrists, knees, and ankles, variations were made in the subject's distance from the camera, position in the field of view, and ambient lighting conditions. The empirical cumulative distribution function (ECDF) plots of the true distance obtained from the collected data were used to systematically eliminate undesirable recording conditions. The measured coordinates will be used to calculate gait parameters.

Ketan Anand, Kathiresan Senthil, Nikhil George, Viswanath Talasila
Generation of User Interfaces and Code from User Stories

Coding is more time-consuming and confusing than any other single process in a software development project. This research addresses the above issue and proposes a transition from a computation independent model (CIM) to code, with the aim of generating cross-platform mobile and Web applications with CRUD functionality (create, read, update, and delete). The implementation of the model-driven architecture (MDA) approach resides in the generation of cross-platform applications through model-to-text transformations with ACCELEO. Our automation approach is based on the agile requirements to generate an Ecore file representing the platform-independent model (PIM). Subsequently, the source code and interfaces are automatically built: a CRUD application is generated for each class of the Ecore file. These output applications are produced using the Flutter framework. In our development process, the automation of transformations from one step to another leads to productivity gains and a reduction in overall costs.

Samia Nasiri, Yassine Rhazali, Amina Adadi, Mohamed Lahmer
Sentiment Analysis: Twitter Tweets Classification Using Machine Learning Approaches

Analyzing public data from social media can give fascinating results and insights into public opinion on almost any product, service, or person. Sentiment analysis is a method for analyzing and interpreting Twitter data in order to determine public opinion, and sentiment analysis software measures the perceptions in Twitter tweets programmatically. The results classify users' perspectives, as expressed in their tweets, into positive and negative, and may include the polarity of the tweets. In this paper, we evaluate the performance of various machine learning algorithms to determine a suitable algorithm for classifying Twitter datasets.
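One of the classifiers commonly compared in such evaluations is multinomial Naive Bayes; a pure-Python sketch with Laplace smoothing is below. This is a generic illustration of tweet polarity classification, not the authors' exact setup, and the `train_nb`/`predict_nb` helpers and label names are assumptions for the example.

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """Fit a multinomial Naive Bayes model.

    samples: list of (text, label) pairs, e.g. labels 'pos'/'neg'.
    Returns per-label word counts, label counts, and the vocabulary.
    """
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def predict_nb(model, text):
    """Return the label maximizing log prior + log likelihoods
    (Laplace-smoothed), the standard Naive Bayes decision rule."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total)           # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in text.lower().split():
            lp += math.log((word_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

In a study like this one, the same train/predict interface would be repeated for each candidate algorithm so that accuracy can be compared on a held-out set of tweets.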

Akula V. S. Siva Rama Rao, Sravana Sandhya Kuchu, Daivakrupa Thota, Venkata Sairam Chennam, Haritha Yantrapragada
Practical Fair Queuing Algorithm for Message Queue System

In microservices systems, the message queue is an architecture that provides asynchronous communication between services. The system has to receive requests from many sources to enqueue while computing resources are limited, so the queuing problem involves finding an order in which services are served fairly. Many theoretical studies have been conducted, but no formal studies exist in the field of software development. This paper presents the design of a practical fair queuing algorithm for the message queue of a microservices system. Finally, the work applies the design to a case study.
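The fairness goal can be illustrated with a per-source round-robin scheduler: each producer keeps its own queue, and the dispatcher takes one message from each non-empty queue per round, so a bursty producer cannot starve the others. This is an assumed illustration of the fair-queuing idea, not the paper's exact algorithm:

```python
from collections import deque

def fair_dequeue(queues, rounds):
    """Serve per-source queues round-robin for a number of rounds.

    queues: dict mapping source name -> deque of pending messages.
    Returns the messages in the order they would be dispatched.
    """
    served = []
    names = list(queues)
    for _ in range(rounds):
        for name in names:
            q = queues[name]
            if q:                      # skip sources with nothing pending
                served.append(q.popleft())
    return served
```

Even if source "a" has enqueued many messages, source "b" still gets its message dispatched in the first round rather than waiting behind the backlog.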

Le Hoang Nam, Phan Duy Hung, Bui Trong Vinh, Vu Thu Diep
Indigenous Non-invasive Detection of Malnutrition-Induced Anemia from Patient-Sourced Photos Using Smart Phone App

The proposed online malnutrition-induced anemia detection smart phone app is built to remotely measure and monitor anemia and malnutrition in humans using a non-invasive method. This painless method enables user-friendly measurement of blood parameters such as hemoglobin (Hb), iron, folic acid, and vitamin B12 by embedding intelligent image processing algorithms that process photos of the fingernails captured by the smart phone camera. The app extracts the color and shape of the fingernails and instantly classifies anemic and vitamin B12 deficiencies into onset, intermediate, and chronic stages with specific and accurate measurements. In another dimension, this novel technology removes the challenge involved in the disposal of biomedical waste, thereby offering a contactless measurement system during the COVID-19 pandemic.

K. Sujatha, N. P. G. Bhavani, U. Jayalatsumi, T. Kavitha, R. Krishnakumar, K. Senthil Kumar, A. Ganesan, A. Kalaivani, Rajeswary Hari, F. Antony Xavier Bronson, Sk. Shafiya
Speaker Diarization and BERT-Based Model for Question Set Generation from Video Lectures

The year 2020 and the onset of the pandemic have, by and large, rendered traditional classroom-based learning experiences obsolete. This rapid change in the learning experience has brought with it the opportunity to explore new avenues such as online learning. In this paper, we outline a methodology that aims to aid this paradigm shift by proposing an automated question set generation model based on the video lectures delivered by the teacher. We study the usage of pre-trained Scientific Bidirectional Encoder Representations from Transformers (SCIBERT) checkpoints for question generation on text derived from video lectures. The proposed methodology takes into consideration the presence of multiple speakers in the video lectures and employs speaker diarization, implemented using the mean shift clustering algorithm, to obtain the audio corresponding to the teacher's voice. For answer-agnostic question generation, a pre-trained SCIBERT checkpoint is used to warm-start an encoder-decoder model, which is fine-tuned on the SQuAD dataset. The results show that the model successfully generates questions for a given context derived from the transcript of a diarized video lecture.
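Mean shift, the clustering algorithm named for the diarization step, repeatedly moves each point to the mean of its neighbourhood until it settles on a density mode; points sharing a mode form one cluster (here, one speaker). A 1-D toy sketch follows; real diarization would run this on high-dimensional speaker embeddings, and `mean_shift_1d` is a hypothetical helper for illustration only.

```python
def mean_shift_1d(points, bandwidth, iters=50):
    """Shift each point to the mean of neighbours within `bandwidth`
    until convergence; the resulting modes identify the clusters."""
    modes = []
    for p in points:
        x = p
        for _ in range(iters):
            window = [q for q in points if abs(q - x) <= bandwidth]
            x = sum(window) / len(window)   # move toward local density peak
        modes.append(round(x, 6))           # round so equal modes compare
    return modes
```

Points drawn from two well-separated speakers converge to two distinct modes, so the number of unique modes gives the number of speakers without specifying it in advance.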

Sravanthi Nittala, Pooja Agarwal, R. Vishnu, Sahana Shanbhag
Detecting Depression in Tweets Using Natural Language Processing and Deep Learning

COVID-19 has caused physical, emotional, and psychological distress for people. Due to COVID-19 norms, people were restricted to their homes and could not interact with others, so they turned to social media to express their state of mind. In this paper, we implemented a system using TensorFlow, consisting of a multilayer perceptron (MLP), convolutional neural networks (CNN), and long short-term memory (LSTM), which performs preprocessing and semantic analysis on a dataset we extracted manually with the Twint scraper. The models were used to classify tweets by whether or not they indicate depressive behavior. We experimented with different optimizer algorithms and their hyperparameters for all the models. The highest accuracy was achieved by the MLP using sentence embeddings, which reached 94% over 50 epochs, closely followed by the other two models.

Abhishek Kuber, Soham Kulthe, Pranali Kosamkar
Predicting the Consumer Behaviour Based on Comparative Analysis of Salary and Expenditure Using Data Mining Technique

Data mining, also called knowledge discovery from datasets, is the process of extracting hidden, previously unknown, and potentially useful information from datasets. The extracted information can be analyzed for future planning and improvement. In this paper, the researchers attempt to exhibit the relation between monthly income and expenditure in order to characterize customer behaviour. The investigation uses the data mining tool Weka.

Teena Vats, Kavita Mittal
Improve the Detection of Retinopathy with Roberts Cross Edge Detection

Image processing is a most useful tool for pattern recognition and for comparing two images. In retinopathy diagnosis, the Roberts cross is an image processing filter that extracts vertical and horizontal edges wherever a change is detected between corresponding pixels. It is a gradient kernel in which vertical and horizontal edges are extracted separately, and the magnitude is then computed by combining both components. It is considered one of the best edge detection tools in image processing for obtaining sensitive edges. For the calculations, 150 images were used to generate the results of this system, which achieved 96.66% accuracy with a nominal false acceptance rate. The results were produced with the MATLAB tool.
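The Roberts cross operator itself is just two 2x2 difference kernels whose responses are combined into a gradient magnitude. A minimal NumPy sketch (the paper's pipeline is in MATLAB; this `roberts_cross` helper is an equivalent illustration, not the authors' code):

```python
import numpy as np

def roberts_cross(image):
    """Roberts cross edge magnitude for a 2-D grayscale image.

    gx applies the kernel [[1, 0], [0, -1]] and gy the kernel
    [[0, 1], [-1, 0]]; the magnitude combines both responses.
    """
    img = image.astype(np.float64)
    gx = img[:-1, :-1] - img[1:, 1:]   # difference along one diagonal
    gy = img[:-1, 1:] - img[1:, :-1]   # difference along the other
    return np.sqrt(gx ** 2 + gy ** 2)
```

A uniform region yields zero response everywhere, while any intensity step between neighbouring pixels produces a proportional magnitude, which is why the operator is so sensitive to fine edges.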

Arun Kumar Jhapate, Ruchi Dronawat, Minal Saxena, Rupali Chourey
A Study of Emerging IoT Technologies for Handling Pandemic Challenges

In recent years, the Internet of Things (IoT) has been applied in abundant real-world domains, such as smart transportation and smart business, to make individual life more convenient, and it has been among the most widely used technologies of the previous decade. Deadly diseases have always had severe effects unless they were well controlled. The recent experience with COVID-19 shows the value of a clean and speedy approach to dealing with deadly diseases, avoiding the overwhelming of healthcare structures, and reducing the loss of valuable life. Smart things are connected through wireless or wired communication and perform processing, computing, and monitoring of dissimilar real-time situations; these things are heterogeneous and have low memory and limited processing power. This article presents a summary of such systems and their fields of application, and of how recent technology has helped manage past diseases. For years, scientists, investigators, physicians, and healthcare specialists have been using novel computing methods to resolve the mysteries of disease. The major objective is to study the dissimilar innovation-based methods that support handling deadly disease challenges, along with further appropriate developments that can probably be utilized.

M. Deepika, D. Karthika
Interpretation of Handwritten Documents Using ML Algorithms

Handwritten character recognition is a continuing field of research which covers artificial intelligence, computer vision, neural networks, and pattern recognition. An algorithm that performs handwriting recognition can acquire and detect characteristics from a given handwritten document/image as input and convert them to a machine-readable form. The task is to convert the input text into extracted features with the help of the symbol representation of each letter. The main goal of handwritten text recognition is to identify input characters and text in a scanned image written in cursive, with features extracted from the pixels of each input character. The character dataset contains the 26 letters of the alphabet; the IAM dataset is used for training the characters and for classification and recognition, and the output is generated as human-readable text. In this paper, handwritten documents have been interpreted using machine learning algorithms such as connectionist temporal classification (CTC), long short-term memory networks (LSTMs), and generative adversarial networks (GANs). The results compare feature extraction based on the proposed approach with convolutional neural networks (CNN) and recurrent neural networks (RNN) on real-time datasets, showing better performance when these algorithms are used.
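The CTC decoding step that turns per-frame network outputs into text can be illustrated with the standard greedy collapse rule: merge repeated labels, then drop the blank symbol. A minimal sketch assuming "-" as the blank (the real models in the paper emit per-frame probability distributions; here the best label per frame is taken as given):

```python
def ctc_greedy_decode(frame_labels, blank="-"):
    """Collapse a per-frame best-label sequence into text:
    repeated labels merge into one, and blanks are removed.
    Blanks separate genuine double letters, as in 'll' in 'hello'."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return "".join(out)
```

This is why CTC needs no character-level alignment between the image and the transcript: any frame labelling that collapses to the right string is counted as correct during training.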

Mohd Safiya, Pille Kamakshi, Thota Mahesh Kumar, T. Senthil Murugan
Survey of Streaming Clustering Algorithms in Machine Learning on Big Data Architecture

Machine learning is becoming increasingly popular in a range of fields, and big data helps machine learning algorithms deliver more timely and accurate recommendations than ever before. The phases and subsystems of machine learning on big data point to comparable issues and threats, as well as to associated research in various previously unexplored areas. In recent years, data stream mining has become a popular research topic. The biggest challenge in streaming data mining is extracting important information directly from a vast, persistent, and changeable data stream in just one scan, and clustering is a powerful method for resolving this issue. Financial transactions, electronic communications, and other fields can benefit from data stream clustering. This research examines approaches for clustering enormous data streams, including projected clustering for high-dimensional data, scalability, and distributed computing, as well as the issues of big data and machine learning. The characteristics of learning from streams, such as concept drift, data flow systems, and outlier detection, are also discussed. AutoCloud, which builds on the previously introduced concepts of typicality and eccentricity data analytics, is used to discover deviations and resolve them through its supporting clustering approach; its computational complexity and clustering accuracy are detailed in this article. We present the MLAutoCloud algorithm for the machine learning PySpark framework, to be implemented in the future to tackle the AutoCloud algorithm's problems.

Madhuri Parekh, Madhu Shukla
Social Commerce Platform for Food Enthusiasts with Integrated Recipe Recommendation Using Machine Learning

In the contemporary world, humans are constantly involved in the race of improving their lifestyle by making every possible effort to earn more money. However, it can frequently be observed that in this hustle for money, people often compromise on their health: eating on the go has become a trend that has replaced the tradition of cooking for oneself. Cooking at home ensures the best quality of food and hence good health, whereas the eat-on-the-go culture has led to ever greater consumption of fast food, which deteriorates health and thereby indirectly reduces a person's efficiency at work. Through this paper, we aim to help people around the globe with the problem stated above via a social commerce platform.

Atul Kumar, Bhavak Kotak, Umang Pachaury, Preet Patel, Parth Shah
CoviFacts—A COVID-19 Fake News Detector Using Natural Language Processing

Fake news confronts us on a daily basis in today's fast-paced social media world. While some instances of fake news might seem innocuous, many examples prove to be menacing. Misinformation or disinformation takes the form of weaponized lies which amount to defective information, defamatory allegations, and hoaxes, with the sole motive of engendering emotional instability among the public. One prevalent example today is COVID-19, which has caused an unprecedented paradigm shift in numerous businesses and quotidian activities across the globe, news reporting being a primary one. On average, people spend almost one hour a day reading news from many different sources. Developments in technology have removed the barriers to sharing information, truly making the industry cosmopolitan. Therefore, it is paramount to curb fake news at the source and prevent it from spreading to a larger audience. This paper describes a system with which the user can identify apocryphal news related to COVID-19 and verify its authenticity.

Nandita Kadam, Shreyas More, Vishant Mehta, Sanyam Savla, Krisha Panchamia
Improved LBP Face Recognition Using Image Processing Techniques

The face recognition process is used to distinguish individual’s faces based on their unique facial traits. In face recognition, the detection of faces in real-time videos under varying illumination conditions is one of the challenging tasks. In this study, we are detecting faces using Haar classifiers because of their high detection accuracy and local binary pattern (LBP) classifiers due to their invariant nature under varying illumination conditions. Image processing techniques such as contrast adjustment, bilateral filtering, histogram equalization, image blending, and quantization are applied to improve the detected faces. Also, we have applied quantization on raw face images at various levels to evaluate the feasibility of the proposed method in effectively recognizing the faces in low-quality images. Using local binary pattern histogram (LBPH) recognizer, a face recognition rate of 100% has been achieved when resized raw images and preprocessed images are blended. Also, an equal performance has been achieved when the quality of the images is reduced by applying quantization of 16 levels. Hence, the proposed method has proven its effectiveness in recognizing the faces in low-quality images. The results show that using the preprocessed image, the proposed face recognition method is invariant to varying illumination conditions.
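The LBP descriptor underlying the LBPH recognizer can be computed for a single 3x3 patch in a few lines: threshold the eight neighbours against the centre pixel and read the results as one 8-bit code. A minimal sketch (the `lbp_code` helper name and neighbour ordering are illustrative conventions, not the paper's implementation):

```python
import numpy as np

def lbp_code(patch):
    """Local binary pattern for a 3x3 patch: each neighbour >= centre
    contributes a 1 bit; the 8 bits form the texture code."""
    center = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(order):
        if patch[r, c] >= center:
            code |= 1 << bit
    return code
```

Because the code depends only on comparisons against the centre pixel, adding a constant brightness offset to the whole patch leaves it unchanged, which is the illumination invariance the abstract relies on.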

G. Padmashree, A. K. Karunakar
A Bibliometric Analysis of Green Computing

The objective of this paper is to study the trends in research papers over the last two decades in the area of green computing. The present study aims to map the existing literature in the field of green computing by undertaking a bibliometric review. This work examines research papers extracted from the Scopus database; Scopus is one of the largest databases and is very user-friendly for extracting existing studies undertaken by other researchers. The papers extracted from Scopus span the years 2001 to 2020, and the authors applied only one filter in addition to year, namely language (English). The authors used the VOSviewer software to generate network diagrams. The results show a significant increase in the number of research studies from 2012 onwards. Around 84% of the research studies are published in conference proceedings and journals. Another important finding is that India tops the list of countries by number of articles, with 186 research papers in the area of green computing. The study also gives an insight into co-authorship and keyword analysis. The study is important in that it gives direction to future researchers by summarizing the existing studies in the area of green computing, and it is unique in providing co-authorship and keyword analysis using VOSviewer.

Arti Chandani, Smita Wagholikar, Om Prakash
Fractal Image Coding-Based Image Compression Using Multithreaded Parallelization

Fractal image coding-based image compression is characterized by its high compression ratio, high resolution, and low decompression time. In spite of these advantages, it is not widely adopted because of its high computation time. Attempts to reduce the computation time of fractal image compression (FIC) fall into two categories: heuristics-based search-time reduction and parallelism-based reduction. In this work, we propose a multithreading-based parallelization technique on multi-core processors to minimize the compression time. The compression time of the proposed multithreaded process is tested on images of different resolutions, and it is observed that the proposed solution reduces the compression time by almost 2.51 times compared to the sequential method.
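The parallelization strategy can be sketched with a thread pool that farms the expensive per-block fractal search out across cores. This is a hedged illustration: `encode_block` stands in for the actual domain-block search routine, whose details are not given in the abstract.

```python
from concurrent.futures import ThreadPoolExecutor

def compress_blocks(blocks, encode_block, workers=4):
    """Encode image range blocks in parallel with a thread pool.

    blocks:       iterable of range blocks to encode
    encode_block: per-block encoding function (the costly search step)
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order, so the emitted codebook entries
        # stay aligned with their source blocks
        return list(pool.map(encode_block, blocks))
```

Since each range block is encoded independently, the search is embarrassingly parallel; the roughly 2.5x speedup reported is consistent with dividing the search across the cores of a typical multi-core CPU.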

Ranjita Asati, M. M. Raghuwanshi, Kavita R. Singh
The Analysis of Noto Serif Balinese Font to Support Computer-assisted Transliteration to Balinese Script

This study aims to preserve endangered Balinese local language knowledge through technology by analyzing the Noto Serif Balinese (NSeB) font to support computer-assisted Balinese script transliteration. This research has not been done before and contributes to the future improvement of the font for wide use on computer systems, including smartphones. The analysis is done on Noto Balinese, the first computer-assisted transliteration system for Balinese script using the NSeB font that complies with Balinese Unicode. The testing results show that NSeB font support should be enhanced by providing a wider glyph for a certain appended form (gantungan), providing a higher glyph for tedung, repositioning glyphs for certain clusters of vowel signs and consonant signs to avoid overlapping display, and correcting a certain combination of syllable and long vowel. In the future, the accompanying algorithm of this study should be improved by handling the line-breaking mechanism and by enriching its special-words repository.

Gede Indrawan, Luh Joni Erawati Dewi, I. Gede Aris Gunadi, Ketut Agustini, I. Ketut Paramarta
Email Spam Detection Using Multilayer Perceptron Algorithm in Deep Learning Model

Email spam detection is a filtering process that identifies whether a message is spam or not and removes unsolicited messages from the user's inbox. Certain spam emails contain malware that misuses users' data; hence, spam must be identified and acted upon. Many machine learning algorithms have been proposed to differentiate spam from normal mail. One technique is tokenization of emails by length and frequency, which splits raw emails into tokens (small words). After tokenization, the token counts are used to process the emails, and spam messages are identified in a dataset of spam and ham emails. To extract these features, term frequency–inverse document frequency (TF-IDF) is used to train the model. A multilayer perceptron deep learning algorithm with two layers is then applied: the input is propagated through the hidden layers, which hold an activation function such as sigmoid activation with a regularization function. For better optimization, the model uses the Adam optimizer with gradient descent. Training alternates a forward pass with backpropagation; one full forward and backward pass over the training data is called an epoch, and the epoch count is computed in the model. In a comparison between the multilayer perceptron and machine learning algorithms such as support vector machine (SVM), random forest, and XGBoost, the deep learning algorithm achieves 99% on precision, recall, and F-measure with less computation time. Hence, the results show that the deep learning algorithm performs better than the machine learning algorithms.
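The TF-IDF weighting step described above can be sketched in a few lines. This is a minimal standard-library illustration of the textbook formula (tf x log(N/df)), not the paper's code; real pipelines would also lowercase, strip punctuation, and handle smoothing.

```python
import math
from collections import Counter

def tf_idf(docs):
    # Tokenize raw emails into word tokens, then weight each token by
    # term frequency (count / doc length) times inverse document
    # frequency log(N / number of docs containing the token).
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(tok for doc in tokenized for tok in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({tok: (count / len(doc)) * math.log(n / df[tok])
                        for tok, count in tf.items()})
    return vectors
```

Note that a token appearing in every document (e.g. a common word) gets weight zero, which is exactly why TF-IDF helps separate spam-specific vocabulary from ordinary mail.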

Senthil Murugan Tamilarasan, Muthyala Hithasri, Kamakshi Pille
Robust Feature Extraction and Recognition Model for Automatic Speech Recognition System on News Report Dataset

Information processing has become ubiquitous. Automatic speech recognition (ASR) is the process of deriving a transcription from speech. In recent years, many real-time applications such as home computer systems, mobile telephones, and various public and private telephony services have been deployed with ASR systems. Inspired by commercial speech recognition technologies, the study of ASR systems has generated immense interest among researchers. This paper enhances convolutional neural networks (CNNs) with a robust feature extraction model and an intelligent recognition system. First, a news report dataset is collected from a public repository. The collected dataset is subject to different noises, which are preprocessed by min–max normalization; this technique linearly transforms the data into an understandable form. Then, the best word sequence corresponding to the audio, based on the acoustic and language models, undergoes feature extraction using Mel-frequency cepstral coefficients (MFCCs). The transformed features are fed into convolutional neural networks, whose hidden layers perform a limited number of iterations to obtain a robust recognition system. Experimental results show a better accuracy of 96.17% than the existing ANN.
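The min–max normalization step mentioned above is a simple linear rescaling. As a sketch of the standard formula (not the paper's code):

```python
def min_max_normalize(values, lo=0.0, hi=1.0):
    # Linearly rescale a feature column into [lo, hi], as used here to
    # tame amplitude differences in the noisy audio features before
    # MFCC extraction.
    v_min, v_max = min(values), max(values)
    if v_max == v_min:
        return [lo for _ in values]  # constant column: map everything to lo
    scale = (hi - lo) / (v_max - v_min)
    return [lo + (v - v_min) * scale for v in values]
```

Because the transform is linear, it changes the scale of the features without distorting their relative ordering, which is what makes it safe as a preprocessing step.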

Sunanda Mendiratta, Neelam Turk, Dipali Bansal
An IoT-Based Temperature Measurement Platform for a Real-Time Environment Using LM35

The number assigned to an object to signify its warmth is called its temperature. People tried to quantify and measure differences in warmth, which led to the invention of the temperature notion. When a hot object touches a cold object, heat is transferred until the two reach the same temperature; once the heat transfer is complete, the two objects are said to be in thermal equilibrium. Temperature is thus defined as the quantity of warmth that is the same for two or more objects in thermal equilibrium. In this study, we present a microcontroller system that automatically estimates the temperature of a given area or its surroundings using the LM35 sensing device. The resulting room or ambient temperature readings are then observed and reported.
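The LM35's datasheet scale factor is 10 mV per degree Celsius, so converting a microcontroller's ADC reading to a temperature is one line of arithmetic. The sketch below assumes a hypothetical 10-bit ADC with a 5 V reference (the paper does not specify its ADC configuration):

```python
def lm35_celsius(adc_value, adc_bits=10, v_ref=5.0):
    # The LM35 outputs 10 mV per degree Celsius. First recover the
    # sensed voltage from the ADC count, then divide by 0.010 V/degree.
    voltage = adc_value / (2 ** adc_bits - 1) * v_ref
    return voltage / 0.010
```

For example, an ADC count of 51 on a 10-bit, 5 V converter corresponds to about 0.25 V, i.e. roughly 25 degrees Celsius of ambient temperature.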

Anupam Kumar Sharma, Prashant Singh, Dhyanendra Jain, Anjani Gupta, Prashant Vats
Cross Validated Canonical Variate Analysis for Damage Grading After Math of Earthquake Building

Seismic events are disastrous happenings that can destroy structures and endanger human life. Owing to innumerable sources of environmental uncertainty, estimating earthquake damage to buildings remains a difficult task. A variety of factors influence the severity of earthquake-induced building damage, such as amplitude, proximity to the geographic centre, geological features, and compliance with building reliability standards. Examining the damage rate of concrete structural systems is critical for assessing maintenance needs and limiting the level of damage during the next catastrophic event. With this incentive, this paper discusses the significance of building damage characteristics in predicting building damage grade. The building earthquake damage dataset from the KAGGLE warehouse, with 27 building attributes and 762,106 building damage records, is used in the analysis to determine the grade of damage. The earthquake damage dataset is preprocessed with missing-value estimation, categorical feature encoding, and feature scaling. The dataset with all 26 features is applied to all the classifiers to grade the damage, with and without feature scaling, and the performance is analysed by cross-validating the training and testing data with 80:20, 70:30, and 60:40 splits. The dataset is then reduced to 10 linear discriminant components and again applied to all the classifiers, with and without feature scaling, under the same 80:20, 70:30, and 60:40 splits. The scripts are written in Python and run with Anaconda Navigator. The outcome shows that the random forest classifier exhibits 92% accuracy after feature scaling and 96% accuracy with the 10-component LDA-reduced dataset, both before and after feature scaling.

M. Shyamala Devi, A. Peter Soosai Anandarai, M. Himavanth Sai, S. Suraj, S. Saranraj, M. Guruprasath
Do Age and Education Affect Self-schema? An Exploratory Study Among Individuals

Self-schema refers to the dispositions or opinions about oneself that are usually revealed through self-talk. Schemas are generally formed in childhood: when a child starts looking at the world, his parents first influence his opinions, and their talks and ideas are absorbed by the child in the form of schemas. For example, the characteristics of a good child and a bad child are learnt from parents and the immediate family. While analyzing the labels of self-aspects, it was found that individuals with a compartmentalized organization tend to define their self-aspects in narrow and negative terms (Showers in J Pers Soc Psychol 62:1036, 1992, [1]). This paper is based on research that examined a few parameters of various self-schemas, using a random sample of 528 respondents and a questionnaire instrument for data collection. As self-schema is highly subjective and a vast subject, only a few of its parameters are considered. These parameters are tested against demographic features such as the age and education of the respondents. Multivariate analysis is performed to examine whether there is any interaction between a person's demographic features and his self-schema.

Arti Chandani, N. Srividya, B. Neeraja, Aditya Srinivas
Automatic Waste Segregation System: An IOT Approach

Effective waste management and disposal are major issues in today's world, especially in countries like India, where a majority of individuals do not use separate trash bins for recyclable and non-recyclable waste. This is a major cause of pollution and disease in the country. However, separating waste manually is tedious and inefficient, and improper segregation results in pollution and harms the environment. We have developed an automated trash bin that autonomously classifies waste as biodegradable or non-biodegradable and places it in the respective bin. We achieve this using IoT by passing the image data to the cloud for analytics, returning the classified output to the bin's controller, and triggering it to place the waste in the correct container. We also send the usage data to the cloud to perform analytics about bin usage and deploy it in a Web app to report the live status.

Siddharth Nair, C. K. M. Ganesan, A. Ram Gunasekaran, S. Sabarish, S. Asha
Employer Brand for Right Talents in Indian IT Companies

Employer branding is a human resource concept that has become very important at a time when reliance on intellectual capital is high. The concept started from the ashes of the Industrial Age in the 1980s and made its presence felt in the 1990s. In the post-liberalization era, we have seen brand names being established not for the products they deliver, but as places to work. Employer branding is the process by which an organization places an image of being "a great place to work" in the minds of targeted future employees; "the best place to work" is, in fact, feedback given by employees on what they think of your organization as a place to work. In this research paper, we highlight some of the important points relevant to IT companies as far as the employer brand is concerned. The research has shown that IT companies have a separate budget for employer branding and face little difficulty in attracting the right talent for their organization. These IT companies are well aware of where potential employees spend their time and which factors potential as well as current employees look for in an organization.

Pravin Kumar Bhoyar, Rajiv Divekar
A Short Paper on Web Mining Using Google Techniques

The rapid improvement of information technology over the last few years has resulted in data growth on a gigantic scale. Users create content such as blog posts, tweets, social network interactions, and photos; servers continuously generate activity logs; scientists collect measurement data about the world we live in; and the web, the ultimate repository of data, has grown explosively in terms of scale. Optimizing web activities using different techniques can help bring out insights for a clearer big data picture.

Kriti Gupta, Vishal Shrivastava
Overview of Blockchain Technology and Its Application in Healthcare Sector

A blockchain is a distributed ledger technology (DLT) used to store data securely. In a blockchain, data is stored across a peer-to-peer network. In this digital era, many businesses want to convert their data centers from centralized to decentralized, and blockchain can secure their data through specific algorithms. The blockchain ensures data transparency, data security, and data integrity. Blockchain is best known for cryptocurrencies, but its uses go beyond cryptocurrency. This paper aims to review the architecture of blockchain, its features, and its application in the healthcare sector. The main goal of applying blockchain in the healthcare sector is security and interoperability: patient data is very sensitive, so providing security features is crucial. Due to COVID-19, most medical data is stored in electronic format, and storing this data on a blockchain is an effective way to make it secure and transparent.
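The integrity guarantee the abstract refers to comes from each block committing to the hash of its predecessor, so altering any stored record invalidates every later block. A minimal hash-chain sketch (illustrative only; real healthcare deployments add consensus, access control, and encryption):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block's predecessor

def make_block(data, prev_hash):
    # Each block commits to its payload and to the previous block's
    # hash; tampering with any record breaks the whole chain.
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def build_chain(records):
    chain, prev = [], GENESIS
    for rec in records:
        block = make_block(rec, prev)
        chain.append(block)
        prev = block["hash"]
    return chain

def verify_chain(chain):
    # Recompute every hash and check each block links to the one before.
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or \
           make_block(block["data"], prev)["hash"] != block["hash"]:
            return False
        prev = block["hash"]
    return True
```

Verification recomputes every hash, which is exactly why a modified patient record cannot go unnoticed: the recomputed hash no longer matches the one the next block committed to.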

Neha A. Samsir, Arpit A. Jain
Digital Platforms and Techniques for Marketing in the Era of Information Technology

Digital marketing is the promotion of a product or service through at least one form of electronic media. This form of marketing is distinct from traditional marketing, but it borrows some of its principles. This research article examines the various technologies and platforms used in digital marketing that allow any organization or business to adopt this form of marketing and study what works for them and what does not. The article also explores recent advancements in digital marketing driven by the increase in users and the vast amount of data collected from them. The two main advancements analyzed and discussed in this paper are machine learning (ML) and artificial intelligence (AI) tools.

Roohi Sharma, Amala Siby
Tacit Knowledge Management in Engineering Industries: A Bibliometric Analysis

Using a bibliometric methodology, this paper examines scientific research on the topic of tacit knowledge management in engineering companies. To that end, it analyses 705 publications from the Scopus database using several performance indicators, including total articles, total citations, and citations per paper. This paper also evaluates the most productive and well-cited authors, important subject areas, publication sources, countries, and institutions. The collection includes publications in scholarly journals between 1983 and 2021. The evolution of research on tacit knowledge management is summarized in this publication. Our findings show that the most cited papers are from the United States and Norway, respectively. The most productive year in terms of published articles is 2010, whereas the most successful year in terms of citations is 2002. The findings could aid future research on this subject.

Pawankumar Saini, Pradnya Chitrao
Image Processing Techniques in Plant Disease Diagnosis: Application Trend in Agriculture

In agriculture, plant diseases cause significant economic losses every year and are a major threat to food security. Precise diagnosis of plant diseases is the only way to control infestation at the earliest stage and minimize crop loss. The diagnostic process usually relies on in-field visual identification by agricultural experts with rich experience. However, in many underdeveloped and developing countries, human experts are scarce and expensive. As an alternative, image processing techniques are becoming popular for automated plant disease diagnosis in agriculture, and several such systems have been proposed worldwide during the last decade and a half. Beyond a mere literature survey, this paper presents a statistical study of the application trend of the various image processing techniques used to design plant disease diagnosis systems in agriculture. This study will be very beneficial to aspiring researchers designing such systems in the future.

Debangshu Chakraborty, Indrajit Ghosh
A Bibliometric Analysis of E-Governance and ICT

The objective of this paper is to study the trends in research papers over the last two decades in the area of e-governance and ICT. The present study aims to survey the existing literature in this domain by undertaking a bibliometric review of papers extracted from the Scopus database. Scopus is one of the largest databases and is very user-friendly for extracting the existing studies undertaken by other researchers. The papers extracted from Scopus span 2001 to 2022, and the authors applied only one filter in addition to year: language, restricted to English. The authors used the VOSviewer software to generate network diagrams. The results show a significant increase in the number of research studies from 2005 onwards. Around 47.3% of the studies were published in conference proceedings and around 28.5% as journal articles. Another important finding is that India tops the list of countries by number of articles, with 229 research papers in the area of e-governance and ICT. The study also gives insight into co-authorship and keyword analysis. It is important because it gives direction to future researchers by appraising the existing studies in the area of e-governance and ICT, and it is unique in providing co-authorship and keyword analysis using VOSviewer.

Smita Wagholikar, Arti Chandani
Face Recognition for Surgical and Non-surgical Faces Using Patch-Based Approach

Face Recognition (FR) is the task of identifying individuals from their faces rather than any other body part. In addition to all existing non-surgical challenges, Facial Plastic Surgery (FPS) is one of the most recent challenging issues for FR. Specifically, FPS is the medical process of restoring, reconstructing, or enhancing the appearance of some portion of the face. FPS causes significant difficulties since it permanently alters some facial features, which in turn alters the facial appearance. These variations in facial appearance, whether due to non-surgical or surgical conditions, motivate us to understand the effect of each on FR. This chapter studies face recognition in non-surgical and surgical situations using a patch-based approach, in which facial statistical features are extracted and recognized with a nearest-neighbor method such as a k-NN classifier. The presented study is tested on different facial datasets.
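The patch-based pipeline described above can be sketched in miniature: split the face image into tiles, summarize each tile with a statistical feature (here simply the mean intensity, as an assumed stand-in for the chapter's actual statistics), and classify the resulting vector with k-NN. This is an illustrative sketch, not the chapter's implementation.

```python
import math
from collections import Counter

def patch_features(image_rows, patch=2):
    # Split a 2-D intensity grid into patch x patch tiles and use each
    # tile's mean as one statistical feature.
    feats = []
    for r in range(0, len(image_rows), patch):
        for c in range(0, len(image_rows[0]), patch):
            tile = [image_rows[i][j]
                    for i in range(r, min(r + patch, len(image_rows)))
                    for j in range(c, min(c + patch, len(image_rows[0])))]
            feats.append(sum(tile) / len(tile))
    return feats

def knn_label(query, gallery, k=1):
    # gallery: list of (feature_vector, label); classify by majority
    # vote among the k closest vectors (Euclidean distance).
    ranked = sorted(gallery, key=lambda item: math.dist(query, item[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

The appeal of a local-patch representation for the surgical case is that surgery typically alters only some regions of the face, so unaltered patches can still vote for the correct identity.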

Kavita R. Singh, Roshni S. Khedgaonkar
RATSEL: A Game-Based Evaluating Tool for the Dyslexic Using Augmented Reality Gamification

Dyslexia is a learning disorder that disrupts a person's ability to understand and manipulate language while reading or writing. Children between the ages of 5 and 12 show signs of the disorder as they begin school, where the focus is heavily on learning. The regular methodology of teaching and evaluation is not always supportive of kids with dyslexia; these kids need the support of the environment and society to cope with the challenges and difficulties they face. ICT provides valuable support for kids with dyslexia through visual and auditory learning processes. In this paper, we propose RATSEL, an Augmented Reality (AR)-enabled game-based evaluation tool for enhancing the existing school examination methodology to sustainably support kids with dyslexia. The RATSEL application holds many subjective games that provide visual and auditory assistance in exercises and reading activities, making them easier to grasp and understand. The main objective of RATSEL is to give dyslexic kids a game-based environment for evaluation using augmented reality technology, transforming the evaluation process in a more encouraging and hopeful manner.

D. Sorna Shanthi, Priya Vijay, S. Sam blesswin, S. Sahithya, R. Sreevarshini
Design of Mini-Games for Assessing Computational Thinking

Computational Thinking (CT) is a process that allows us to solve problems efficiently and effectively using a computer. Many educational institutions around the world have adopted CT into their curricula because it helps foster the core competencies of the twenty-first century. The methods for imparting and improving CT include, but are not limited to, interventions, scaffolding, games, AR/VR, robotics, and unplugged activities. Games and gamification are interesting domains for educational researchers. Gamified approaches, including serious and mini-games, provide a platform that enables educators to impart, foster, and assess CT skills while maintaining the interest, motivation, and fun of games. This research paper proposes the design of an interactive tool for assessing CT. The proposed tool allows the assessment of the cornerstones of CT in a fun and motivating environment that alleviates the stress and boredom that may occur during traditional multiple-choice assessment.

V. V. Vinu Varghese, V. G. Renumol
Vehicle Cluster Development

As technology in the automobile industry has evolved rapidly in recent times, there is a need to develop an affordable and durable digital dashboard. This paper presents work on a project that aims to develop a digital dashboard for automobiles using the model-in-loop, software-in-loop, and hardware-in-loop methodology. In an automobile, the vehicle cluster or dashboard is the panel of gauges featuring a speedometer, a tachometer, a fuel indicator, etc. Dashboards are of two types: analog dials and digital dials. An instrument panel cluster (IPC), or vehicle dashboard, is one of the few devices in an automobile that communicates with the driver, and since it is one of the few modules the driver can see, it is an important means for the driver to get relevant information about the status of the vehicle. This paper describes the design and implementation of a digital dashboard for an automobile using MATLAB's Simulink for simulations, MATLAB's App Designer for designing a GUI, the onboard diagnostics port (OBD-II) present inside a car, and a graphic programmable LCD. The LCD is programmed to exhibit speed, engine RPM, and other parameters that need to be displayed to the driver while driving. The final device is low cost and ready to be adapted to other vehicles with little physical or software modification.

Sanskruti Raut, Shravani Naware, Vanita Tank
Technology for Creation of False Information Fields for UAV Protection

The issue of ensuring the safety of UAVs is quite acute in modern times. UAVs can be used to implement critical processes such as relaying communications, searching for missing persons, reconnaissance operations, guarding facilities, and much more. At the same time, a UAV often operates outside the controlled area and is physically unprotected. It is difficult to protect UAVs and the communication channels between them, since the wireless channels they use are physically exposed. There are many options for physical protection that involve imposing additional noise on a channel; even so, an attacker can attempt an information attack or, to start with, simply detect the UAV. In our research, we propose a way to hide UAVs by creating false information fields. We then test our method by analyzing the radio spectrum and comparing our fake fields with legitimate ones. The results show the effectiveness of the developed software, which creates fake access points that an intruder can detect, thereby hiding the real transmission.

Elena Basan, Nikita Proshkin
Implementation of CSRF and XSS Using ZAP for Vulnerability Testing on Web Applications

The security of Web applications is one noteworthy component that is often overlooked in the creation of Web apps. Web application security is required to secure websites and online services against different security threats. The vulnerabilities of Web applications are for the most part the result of a lack of input/output sanitization, which is frequently exploited either to manipulate source code or to gain unauthorized access. An attacker can exploit vulnerabilities in an application's code. The security of Web applications is a central component of any Web-based commerce and deals specifically with the security surrounding websites, Web applications, and Web services such as APIs. This paper gives a testing approach for the vulnerability assessment of Web applications to address the extent of these security issues. We illustrate vulnerability assessment tests on Web applications, showing how, with a combination of tools, vulnerability testing for Web applications can be enhanced.

Farzana Sultana, Md. Mynul Islam, Md. Tanbir Hasan, Md. Ismail Jabiullah
Smart Classroom: A Step Toward Digitization

The word SMART has already been applied on a large scale in different fields, from the smart home to smart industry. The smart classroom, one of the prominent technologies of the Internet of Things era, will make classrooms more intelligent, interconnected, and remotely controllable. The existing manual attendance system consumes a tremendous amount of time for both faculty and students, and proxy attendance is also a huge challenge. There is a need to digitize course materials to allow access anytime from anywhere. Considerable energy can also be saved by switching off the smart board when not in use. Our proposed system aims to advance education systems with a face detection and recognition-based attendance system, speech-to-text transcription for digital course material, and energy savings by automatically switching off the smart board when not in use.

Shivani Agrawal, Abhinandan Chandhok, S. Maheswari, P. Sasikumar
RADAR and Camera Sensor Data Fusion

As demand for vehicle automation has expanded in the last few years, it has become imperative to recognize the position and speed of surrounding vehicles more precisely. Object detection is recognized as an important feature of the advanced driving assistance system (ADAS); it ensures the safety of vehicles and prevents accidents caused by human negligence. Object detection with sensor data fusion has proved to be very effective. Obstacles can be detected and labeled with the help of RADAR, LIDAR, and cameras. Every sensor has advantages and limitations, and the limitations of one sensor can be overcome by another; sensors such as LIDAR, RADAR, and cameras are used together to obtain optimal results, contributing to better object detection in autonomous systems. This paper describes the fusion of data acquired by two sensors: a RADAR (AWR1642BOOST) and a two-dimensional camera (LOGITECH C170). RADAR achieves better results than the camera in distance calculation, whereas the camera achieves better angular results than RADAR. Similarly, RADAR works efficiently in poor weather and lighting conditions, where the camera may not provide accurate results. The data acquired by both sensors are fused to obtain better object detection and ensure accurate calculation of the parameters of the detected object. Region-of-interest detection and Haar cascade algorithms are implemented in real time with satisfactory results.
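The complementary-strengths idea above (range from RADAR, bearing from the camera) can be illustrated with a tiny geometric sketch. This is not the paper's fusion algorithm, just the basic polar-to-Cartesian combination such a fusion ultimately relies on; axes are assumed as x forward, y to the left.

```python
import math

def fuse_position(radar_range_m, camera_bearing_deg):
    # Take the range from the RADAR (its strength) and the bearing
    # angle from the camera (its strength), and combine them into a
    # Cartesian position in the vehicle frame: x ahead, y to the left.
    theta = math.radians(camera_bearing_deg)
    return (radar_range_m * math.cos(theta),
            radar_range_m * math.sin(theta))
```

A fuller pipeline would first associate the RADAR return with the camera detection (e.g. via the region of interest) and weight each measurement by its sensor's uncertainty, but the core of the fusion is exactly this combination of one sensor's strong axis with the other's.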

Venkatesh Mane, Ashwin R. Kubasadgoudar, P. Nikita, Nalini C. Iyer
Backmatter
Metadata
Title
Information and Communication Technology for Competitive Strategies (ICTCS 2021)
Editors
Dr. Amit Joshi
Dr. Mufti Mahmud
Dr. Roshan G. Ragel
Copyright Year
2023
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-19-0095-2
Print ISBN
978-981-19-0094-5
DOI
https://doi.org/10.1007/978-981-19-0095-2