International Conference on Innovative Computing and Communications
Proceedings of ICICC 2022, Volume 3
- 2023
- Book
- Editors: Deepak Gupta, Ashish Khanna, Aboul Ella Hassanien, Sameer Anand, Ajay Jaiswal
- Book Series: Lecture Notes in Networks and Systems
- Publisher: Springer Nature Singapore
About this book
This book includes high-quality research papers presented at the Fifth International Conference on Innovative Computing and Communication (ICICC 2022), held at the Shaheed Sukhdev College of Business Studies, University of Delhi, Delhi, India, on February 19–20, 2022. Introducing the innovative work of scientists, professors, research scholars, students, and industry experts in the field of computing and communication, the book promotes the transformation of fundamental research into institutional and industrialized research and the conversion of applied exploration into real-time applications.
Table of Contents
Multilingual Emotion Analysis from Speech
Poonam Rani, Astha Tripathi, Mohd Shoaib, Sourabh Yadav, Mohit Yadav

The chapter delves into the critical area of speech emotion recognition (SER), highlighting the need for multilingual systems to accurately detect emotions such as anger, anxiety, and happiness from speech signals. It reviews existing models and methods, including bi-directional LSTM networks and ensemble approaches. The authors introduce a novel multichannel CNN architecture that significantly enhances the accuracy and efficiency of emotion detection across multiple languages. The proposed model utilizes features such as Mel Frequency Cepstral Coefficients and embeddings from the RoBERTa language model, processed through a multi-layer perceptron with SoftMax activation. The chapter also discusses the managerial and social implications of effective emotion recognition, such as improving psychotherapy consultations and personalized recommendation systems. The study concludes with a detailed evaluation of the proposed model, demonstrating its superior performance and potential for real-world applications.
The chapter summaries in this table of contents were generated with the help of AI.
Abstract: Emotion is an important aspect of a person’s life, and in contemporary times speech emotion recognition is an integral part of building human–machine interaction applications. Like humans, machines need to understand emotions while interacting with humans, because human emotions form the core of conversation and interaction. The primary purpose of human–computer interaction (HCI) is to provide customized solutions to different problems, create aesthetic designs, and improve online learning; for effective user interaction, recognizing the emotional state and needs of the user is very important. So, emotions have become an important part of HCI. The past body of work has focused on using RNNs, LSTMs, and ensemble CNN-RNNs to recognize emotions in a single language; these very large sequential models have long training times and high model complexity. This paper reviews how SER plays a vital role in the healthcare sector and how the efficiency and accuracy of multilingual speech emotion recognition systems can be increased by using only easily parallelizable convolution layers, which reduce the number of parameters and thus the model complexity.
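The summary above mentions an MLP head with SoftMax activation over emotion classes. As a rough illustration of that final classification step only (the emotion labels and logits below are invented for the example, not taken from the paper):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

# Hypothetical emotion classes and MLP output logits for one utterance.
EMOTIONS = ["anger", "anxiety", "happiness", "neutral"]
logits = np.array([1.2, 0.3, 2.5, 0.1])

probs = softmax(logits)                      # class probabilities, sum to 1
predicted = EMOTIONS[int(np.argmax(probs))]  # most probable emotion
```

The upstream feature extraction (MFCCs, RoBERTa embeddings) and the CNN layers are omitted; this only shows how logits become a probability distribution over emotions.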
Analysis on Detection of Brain Tumor Using CS and NB Classifier
Damandeep Kaur, Surender Singh, Kavita

The chapter presents a novel approach for detecting brain tumors using a combination of Contrast Limited Adaptive Histogram Equalization (CLAHE), Independent Component Analysis (ICA), and a cuckoo search-based optimization technique. The method is evaluated using the DICOM dataset, which is widely used in medical imaging. The study compares the performance of the proposed method with other classifiers such as KNN, ANFIS, and back propagation, demonstrating superior accuracy and efficiency. The chapter also discusses the preprocessing steps, feature extraction, and classification process, providing a detailed analysis of the results and future research directions.
Abstract: Brain tumor segmentation and classification play a vital role in tumor diagnosis using various image processing techniques, as a brain tumor is a critical and life-threatening condition that is spreading worldwide. Early detection of brain tumors can therefore improve patient survival, and computer-aided methods play a major role here with better accuracy. The process includes steps like preprocessing, segmentation, feature extraction, and classification with the help of MRI images. Multiple methods have already been implemented, each with some limitations. In order to overcome the limitations of the current automation process, a model is proposed which detects brain tumors using the following approaches: ICA is chosen for feature extraction, cuckoo search is used as the optimization technique, and classification is done with Naïve Bayes.
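The classification stage described above uses Naïve Bayes on extracted features. A minimal Gaussian Naive Bayes sketch is shown below; the two-feature toy data stands in for ICA-derived features and is not the authors' dataset or implementation:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means/variances plus priors."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mean = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log P(c) + sum over features of log N(x_f; mean_cf, var_cf)
        log_lik = -0.5 * (np.log(2 * np.pi * self.var)[None] +
                          (X[:, None, :] - self.mean[None]) ** 2 / self.var[None]).sum(axis=2)
        return self.classes[np.argmax(np.log(self.prior)[None] + log_lik, axis=1)]

# Toy 2-feature data; label 1 plays the role of "tumor present".
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]])
y = np.array([0, 0, 1, 1])
model = GaussianNB().fit(X, y)
pred = model.predict(np.array([[0.15, 0.15], [0.95, 0.95]]))
```

The cuckoo search optimization step (e.g., for feature or parameter selection) is independent of this classifier and is omitted here.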
Full Connectivity Driven K-LEACH Algorithm for Efficient Data Forwarding in Wireless Sensor Networks
Ahmed Ashraf Afify, Catherine Nayer Tadros, Korhan Cengiz, Bassem Mokhtar

The chapter discusses the challenges of energy consumption in Wireless Sensor Networks (WSN) and introduces the Full Connectivity Driven K-LEACH (FCDK-LEACH) algorithm as a solution. The FCDK-LEACH algorithm builds upon the K-LEACH protocol, enhancing its efficiency by using a new cluster-head selection criterion and an energy consumption model. The algorithm aims to maintain network connectivity for as long as possible by preserving cluster-heads until the last active node dies out. Simulation results demonstrate that the FCDK-LEACH algorithm outperforms the classical K-LEACH in terms of network lifetime, making it a promising approach for applications requiring long operational periods, such as healthcare monitoring.
Abstract: Because the Internet is now used in every part of our lives, our environment has been transformed into a digital society in which everything can be accessed from anywhere. This is the main concept of the Internet of Things (IoT), which consists of intelligent devices connected together without location limitations. These devices can be sensors and actuators used in environmental monitoring, home automation, disaster management, and more. A Wireless Sensor Network (WSN), considered a subset of the IoT environment, consists of hundreds of nodes spread across an area to monitor different physical objects; it suffers from high energy consumption of nodes, which affects network lifetime. Different routing protocols are used to cope with this challenge, and the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol is the most commonly used one. LEACH is a cluster-based microsensor network protocol that offers energy-efficient and scalable routing for sensor nodes. In this paper, we investigate and present a modified algorithm that uses LEACH in conjunction with a k-means clustering approach to achieve a Full Connectivity Driven K-LEACH algorithm (FCDK-LEACH). In the CH selection, the k-means algorithm aids in decreasing energy usage and therefore extending network lifetime. The CH is chosen based on the remaining energy level and the CH’s position relative to the sensor nodes. The evaluation results show that our modified k-means-based hierarchical clustering enhances network lifetime.
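The abstract describes clustering nodes with k-means and choosing a cluster-head (CH) by residual energy. A small sketch of that idea follows; the node positions, energies, and the exact CH criterion are illustrative stand-ins, not the paper's FCDK-LEACH implementation:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over 2-D node coordinates; returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(axis=2), axis=1)
        centers = np.array([points[labels == c].mean(axis=0) for c in range(k)])
    return labels

def choose_cluster_heads(points, energy, k):
    """Per cluster, pick the node with the highest residual energy as CH."""
    labels = kmeans(points, k)
    heads = {}
    for c in range(k):
        members = np.flatnonzero(labels == c)
        heads[c] = int(members[np.argmax(energy[members])])
    return labels, heads

# Six nodes in two well-separated groups, with per-node residual energy.
pts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])
energy = np.array([0.9, 0.5, 0.4, 0.3, 0.95, 0.2])
labels, heads = choose_cluster_heads(pts, energy, 2)
```

FCDK-LEACH additionally weighs the CH's position relative to its cluster members; combining distance-to-centroid with residual energy in the `argmax` would be the natural extension.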
Detection of Potential Vulnerable Patients Using Oximeter
Navjyot Kaur, Rajiv Kumar

The chapter discusses the innovative use of mobile phones equipped with sensors for detecting vulnerable patients through oximetry. It introduces Mobile Crowd Sensing (MCS) as a powerful tool for healthcare monitoring, detailing its benefits over traditional sensing networks. The text explores the various scales of MCS, from individual to community sensing, and highlights its applications in environmental, infrastructure, and health monitoring. Challenges such as energy consumption, data costs, and incentive schemes are addressed, along with a proposed method for automated data collection and analysis. The chapter concludes with a look at the future scope of this technology in healthcare, making it a compelling read for professionals interested in the intersection of healthcare and technology.
Abstract: Agriculture, education, and health systems have all progressed in the last decade. In times of pandemic crises like COVID-19, IoT and sensors play a critical role in the medical industry. Sensor- and IoT-based healthcare gadgets have emerged as saviors for humanity in the face of resource shortages. The pulse oximeter is one such instrument that has been utilized widely during pandemics. For a long time, pulse oximeters have been used to measure crucial body functions such as saturation of peripheral oxygen (SpO2) and pulse rate, and they have been utilized to detect vital signs in patients in order to diagnose cardiac trouble early. During the current pandemic, however, oximeters have been widely utilized to detect SpO2 levels, as COVID-19 silently destroys the lungs, causing pneumonia and lowering oxygen levels to dangerously low values. In this study, we propose a strategy for detecting potentially vulnerable individuals by classifying them using data obtained from pulse oximeters. In the proposed approach, volunteers record their vitals and share them with administrators on a regular basis.
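The classification of readings into risk groups could be as simple as threshold rules on SpO2 and pulse rate. The sketch below is purely illustrative; the thresholds and category names are invented for the example and are not clinical guidance or the authors' rules:

```python
def triage(spo2, pulse):
    """Hypothetical triage rule on oximeter vitals (illustrative thresholds)."""
    if spo2 < 90:
        return "critical"
    if spo2 < 94 or pulse > 120 or pulse < 50:
        return "vulnerable"
    return "normal"

# Volunteer-reported readings: (person id, SpO2 %, pulse bpm).
readings = [("A", 98, 72), ("B", 92, 80), ("C", 88, 110)]
flags = {pid: triage(s, p) for pid, s, p in readings}
```

In a crowd-sensing deployment, such a rule would run server-side over the periodically shared vitals, flagging records for administrator review rather than making any medical decision.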
A Novel Review on Healthcare Data Encryption Techniques
Gaurav Narula, Bhanuj Gandhi, Hitakshi Sharma, Shreya Gupta, Dharmender Saini, Preeti Nagrath

The chapter 'A Novel Review on Healthcare Data Encryption Techniques' delves into the critical role of encryption in safeguarding sensitive healthcare data. It begins by emphasizing the necessity of encryption for protecting patient information and complying with regulations like HIPAA. The authors explore various encryption algorithms, including AES, DES, 3DES, and Blowfish, assessing their security and performance. The review also highlights the challenges posed by high-profile data breaches and the need for advanced encryption methods. Additionally, the chapter discusses innovative approaches such as image cryptography and chaotic structures in public-key cryptography. A comparative analysis of these algorithms is presented, identifying gaps in current methods and suggesting areas for improvement. The chapter concludes by underscoring the urgent need for encryption algorithms that are both fast and secure, especially in the face of emerging quantum computing threats. This comprehensive review offers valuable insights for professionals seeking to enhance data security in the healthcare sector.
Abstract: Sharing personal digital health information is an emerging idea for exchanging health data for research and other functions. Confidentiality (except for legitimate users) and access auditability are strong security requirements for health data. This paper examines those requirements and proposes a review for healthcare companies as a way to help them securely store and share the patient data they host. It should also allow only legitimate users to access the portions of the data they are permitted to see. The focus is on the specific security issues of the encryption techniques currently used in health care and on how encryption can help address healthcare regulatory requirements. The paper provides an overview of encryption and decryption procedures, highlighting their security foundations, areas of application, and strengths and limitations. Finally, the study pinpoints the existing gap based on the findings of the analysis, with a focus on the algorithms most acceptable for commercial use, given current cryptography trends that are moving closer to quantum computing. The study then turns to the genuine need for an algorithm that offers no trade-off between encryption and decryption speeds, has low computation overhead, and is resilient to quantum attacks.
Profile-Based Calibration for AR/VR Glass
S. Vijayalakshmi, K. R. Kavitha, S. M. Subhash, D. Sujith Kumar, S. V. Sharveshvarr, P. Bharathi

This chapter delves into the innovative design of AR/VR glasses that incorporate profile-based calibration for an enhanced user experience. The glasses utilize Android or Fuchsia operating systems to ensure ease of use and security. Key features include automatic map updates via Wi-Fi, customizable settings through a smartphone app, and precise navigation using augmented reality. The power adjustment of the glasses is automated by fetching user data from cloud accounts or Bluetooth, ensuring comfortable and accurate vision. The chapter also discusses the use of dedicated hardware for efficient task processing and linear actuators for dynamic power adjustments. Additionally, it explores potential future enhancements such as health and fitness tracking, emergency SOS features, and improved battery technology. The development process is meticulously outlined, from user registration and face recognition to the final power adjustment setup. The chapter concludes with user testing results and suggestions for future improvements, making it a valuable resource for professionals interested in the intersection of technology and eyewear.
Abstract: The purpose of this paper is to develop an innovative and affordable product that fits naturally into people’s day-to-day lives. The proposed glasses benefit users in areas ranging from eye care to advanced technological features, and they make the day easier by displaying important day-to-day information as pop-ups and notifications in an engaging way. The Android glasses act as an independent device and can be used to perform various tasks like an Android smartphone. A voice command and a physical button can be used to interact with the glasses. Various tasks can be performed, from navigation and making phone calls to setting reminders and alarms and viewing weather information and news events. Augmented reality is used in applications such as maps and surroundings, helping the user understand the environment. The glasses can also automatically adjust their optical power according to the user’s profile. This ultimately yields a clear, high-definition image of the surroundings and the objects in view, and it helps with reading activities, where the user can also read small text.
Performance Analysis of Data Sharing Using Blockchain Technology in IoT Security Issues
R. Ganesh Babu, S. Yuvaraj, M. Muthu Manjula, S. Kaviyapriya, R. Harini

The chapter begins by introducing the Internet of Things (IoT) and its potential security issues due to the vast amounts of data generated. It then delves into the application of blockchain technology in addressing these security concerns, emphasizing the benefits of decentralization, anonymity, and privacy protection. However, the chapter also acknowledges the challenges of integrating blockchain with IoT, such as the resource constraints of IoT devices and the inefficiency of blockchain scaling. The main contribution of the chapter is the proposal of a new cryptocurrency engineering solution for IoT that retains the advantages of blockchain while overcoming these challenges. The proposed solution is illustrated through a smart home example and is designed to be application-agnostic, suitable for various IoT use cases. The chapter also includes a literature survey of existing IoT security measures and a performance analysis of the proposed blockchain algorithm. Throughout the chapter, the authors highlight the need for secure and efficient data sharing in IoT and demonstrate how their proposed solution can address this need.
Abstract: The Internet of Things (IoT) is advancing rapidly in research and industry, but it still has privacy and security flaws. In principle, common security and privacy approaches are ill-suited to IoT because of its decentralized topology and the resource constraints of a significant portion of its devices. Blockchain (BC), which underpins the cryptocurrency Bitcoin, is being used to provide security and privacy in distributed networks with IoT-like topologies. However, BCs are computationally expensive and carry real-world transaction overheads as well as delays, making them unsuitable for IoT devices. This paper recommends a secure, lightweight BC-based design that eliminates much of the BC overhead while retaining a significant portion of its security benefits and offering higher efficiency. The described methodology is based on a smart home implementation as a representative case study for a broader range of IoT devices. The proposed design is hierarchical and incorporates smart homes, an overlay network, and cloud storage coordinating BC data exchanges to provide privacy and security. We use different types of BCs depending on where in the network hierarchy a transaction occurs, and we use distributed trust procedures instead of a purely peer-to-peer topology. Qualitative evaluations of the design under standard threat models justify its ability to provide IoT device security and privacy.
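The core tamper-evidence property that blockchain brings to IoT data sharing comes from hash-chaining records. The toy sketch below illustrates only that idea with stdlib hashing; it has no consensus, mining, or networking, and the block fields are invented for the example:

```python
import hashlib
import json

def block_hash(data, prev_hash):
    """Deterministic SHA-256 over the block contents and the previous hash."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash, "hash": block_hash(data, prev_hash)}

def valid_chain(chain):
    """Recompute every hash and check each block links to its predecessor."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b["data"], b["prev_hash"]):
            return False
        if i and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Two IoT "transactions" chained together (a toy smart-home example).
genesis = make_block({"device": "thermostat", "reading": 21.5}, "0" * 64)
nxt = make_block({"device": "lock", "event": "open"}, genesis["hash"])
chain = [genesis, nxt]
```

Altering any stored reading changes its recomputed hash and breaks validation, which is the property the chapter's design preserves while stripping out the expensive proof-of-work machinery.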
GreenFarm: An IoT-Based Sustainable Agriculture with Automated Lighting System
Diganta Dey, Najmus Sakib Sizan, Md. Solaiman Mia

The chapter 'GreenFarm: An IoT-Based Sustainable Agriculture with Automated Lighting System' delves into the application of IoT in modern farming practices to address the growing demand for food due to population increase. It presents a model named GreenFarm, which consists of two main sections: Sensor-Based Farming and Automatic Lighting System. The Sensor-Based Farming section utilizes various sensors to monitor soil moisture, temperature, humidity, and detect external movements and rainfall. The Automatic Lighting System ensures optimal lighting conditions for crops, crucial for their growth. The chapter also discusses the implementation of these systems, highlighting their efficiency and cost-effectiveness compared to existing models. By integrating solar panels and automated controls, GreenFarm offers a sustainable and eco-friendly solution to the challenges faced by the agriculture industry.
Abstract: The world population in 2021 was approximately 7.9 billion and is expected to reach about 10 billion by 2050. Accordingly, the demand for food and pure water is expected to roughly double. On the other hand, the free land available for agriculture is decreasing day by day, so managing such a huge amount of food, which is a basic right, is a very hard challenge for everyone. To solve such problems, we have to turn to modern technological systems, and nowadays the Internet of Things (IoT) is an optimal way to address such challenges. In this paper, we propose a model by which a farmer can control the lighting system, the water pump, soil condition, and crop condition with the help of IoT. By implementing such a model, the farmer will be able to run an automatic lighting system and an automatic water irrigation system, keep out external objects, save electric power, and analyze real-time data collected from different types of sensors over Wi-Fi. All the hardware of the proposed model is directly connected to a NodeMCU ESP8266. The whole system is powered by a solar panel, which reduces cost, saves electricity, and makes the system eco-friendly and cost-effective. Using the proposed model, farmers can also monitor weather conditions, which has a strong impact on agriculture. Given current demand, the proposed model offers a good platform for meeting these needs in the near future.
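The control logic the abstract describes (pump, lighting, intrusion alarm driven by sensor readings) reduces to threshold rules on the microcontroller. A hedged sketch follows; the thresholds, field names, and rule set are illustrative assumptions, not the GreenFarm firmware:

```python
def control(moisture_pct, light_lux, motion, raining,
            moisture_min=35, light_min=200):
    """Map sensor readings to actuator states (illustrative thresholds).

    moisture_pct: soil moisture as a percentage
    light_lux:    ambient light level
    motion:       True if the PIR sensor detected an external object
    raining:      True if the rain sensor is wet
    """
    return {
        "pump_on": moisture_pct < moisture_min and not raining,  # skip irrigation in rain
        "lights_on": light_lux < light_min,                      # automatic lighting
        "alarm_on": motion,                                      # intrusion alert
    }

state = control(moisture_pct=28, light_lux=120, motion=False, raining=False)
```

On a NodeMCU ESP8266 the same logic would typically be written in Arduino C++ or MicroPython, with each dictionary key mapped to a GPIO pin driving a relay.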
A Survey of Different Supervised Learning-Based Classification Models for Student’s Academic Performance Prediction
Sandeep Kumar, Ritu Sachdeva

This chapter delves into the critical area of educational data mining, specifically focusing on the prediction of student academic performance using supervised learning-based classification models. It begins by introducing the importance of student performance prediction (SPP) in various educational scenarios, such as learner data analysis and curriculum planning. The chapter then surveys several existing techniques of SPP, including data fusion approaches, neural networks, and machine learning models. It discusses the applications of these models in enhancing student outcomes, aiding instructors in course design, and providing valuable insights to educational administrators. The comparative analysis of different SPP models highlights their strengths and weaknesses, with a particular emphasis on the superior performance of certain methods like naïve Bayes. Additionally, the chapter explores the theoretical underpinnings and future directions of these models, suggesting potential advancements in feature extraction and prediction accuracy. By offering a comprehensive overview of the current state and future prospects of SPP models, this chapter serves as an invaluable resource for professionals seeking to leverage data-driven approaches in educational settings.
Abstract: To deliver high-quality learning, the need to evaluate students’ academic achievement has become increasingly essential, both to safeguard integrity and to help learners achieve excellent academic results. One of the critical challenges is the lack of an accurate and efficient estimation method. Predictive analytics (PA) can help organizations make more intuitive and intelligent decisions. The purpose of this paper is to evaluate existing educational student performance analytics studies that focus on forecasting learners’ educational excellence. Earlier academics have presented several strategies for developing an optimal process framework, employing various academic statistics, methodologies, methods, and platforms. Numerous learning tasks, like classification, prediction, and cluster analysis, are associated with the predictive analysis used in estimating students’ achievement. A student performance prediction model has various advantages and applications: it can help instructors in curriculum design and improvement, provide recommendations to students, and offer feedback to educators. In this paper, several methods of student performance prediction (SPP) are compared with the help of different performance parameters such as accuracy, specificity, and sensitivity.
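The comparison parameters named at the end (accuracy, specificity, sensitivity) are standard functions of the binary confusion matrix. For concreteness, a small helper with invented counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # fraction correct overall
        "sensitivity": tp / (tp + fn),                # true positive rate
        "specificity": tn / (tn + fp),                # true negative rate
    }

# Hypothetical counts for one SPP model on 100 students
# (positive class = "at-risk student correctly flagged").
m = classification_metrics(tp=40, fp=10, tn=45, fn=5)
```

Reporting all three together matters in SPP surveys because class imbalance (few failing students) lets a trivial model score high accuracy while having poor sensitivity.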
An Exploration of Machine Learning and Deep Learning Techniques for Offensive Text Detection in Social Media—A Systematic Review
Geetanjali Sharma, Gursimran Singh Brar, Pahuldeep Singh, Nitish Gupta, Nidhi Kalra, Anshu Parashar

This chapter systematically reviews machine learning and deep learning techniques for detecting offensive text in social media, addressing the rising issue of abusive language online. It examines various machine learning models, such as Naive Bayes, SVM, and decision trees, and explores deep learning approaches like CNN and RNN. The chapter also delves into available datasets and highlights the challenges and future prospects in this domain. By offering an in-depth analysis of existing methods and a critical evaluation of their effectiveness, this chapter aims to guide researchers and practitioners in developing more accurate and efficient offensive text detection systems.
Abstract: The increasing popularity of social media platforms such as Facebook, Twitter, and WhatsApp has also created the potential to spread hatred or to cause harassment or inconvenience through offensive and abusive text on these platforms. Offensive language has been identified as a significant problem for the safety of both social platforms and their users. The circulation of offensive or abusive language in an online community undermines its reputation, scares away users, and directly affects their mental well-being. Offensive or abusive text affects not only users but also stakeholders such as governments, autonomous organizations, and the social media platforms themselves, which spend long hours every day removing such content manually. There is therefore a need to automatically detect offensive and abusive text in users’ posts, messages, comments, blogs, etc. Various machine learning and deep learning approaches exist in the literature to identify such abusive text. We have followed a systematic review process in which we explore the approaches adopted by researchers to detect offensive or abusive speech in users’ textual posts, messages, comments, and blogs. This systematic review will help strengthen the design and implementation of new, efficient approaches for the automatic detection and removal of abusive or offensive text. This deep exploration of existing techniques will further benefit people, society, governments, and social platforms in curbing the spread of hatefulness and harassment through social media.
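Before the ML/DL models the review surveys, a common reference baseline is a simple lexicon matcher. The sketch below is only that baseline (with a deliberately tame placeholder lexicon), not any of the reviewed approaches, which learn from labeled data instead:

```python
# Placeholder lexicon; real systems use curated abusive-term lists.
OFFENSIVE_LEXICON = {"idiot", "stupid", "hate"}

def flag_offensive(text, lexicon=OFFENSIVE_LEXICON):
    """Flag a post if any token matches the lexicon (naive word-match baseline)."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return bool(tokens & lexicon)
```

Such baselines miss obfuscated spellings and context-dependent abuse, which is precisely the gap the surveyed Naive Bayes, SVM, CNN, and RNN classifiers aim to close.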
Voice Synthesizer for Partially Paralyzed Patients
Eldho Paul, K. Ritheesh Kumar, K. U. Prethi

The chapter delves into the creation of a voice synthesizer model designed to retain the original voice of partially paralyzed patients. It addresses the complexities of vocal cord paralysis and the need for efficient voice synthesis. The model employs an orator-selective embedding network to capture speaker-specific traits, followed by training a high-quality Text-to-Speech (TTS) model on a smaller dataset. This approach decouples the networks, allowing each to be trained independently, reducing the need for extensive multi-speaker datasets. The model is evaluated on datasets like VCTK and LibriSpeech, demonstrating high naturalness scores. The proposed method shows promise in generating natural and personalized voices for both seen and unseen speakers, making it a significant advancement in the field of voice synthesis for medical applications.
Abstract: In this paper, we present a voice synthesizer built especially for people who are partially paralyzed; our aim is to retain the original voice of the targeted speaker. The model consists of an orator encoder, a synthesizer, a spectrogram generator, and a vocoder. The orator encoder is a model trained on the voices of a large number of speakers, including noisy and untranscribed speech; it is used to generate a stable, fixed-dimensional embedding vector from only a few seconds of a targeted speaker’s source speech. The synthesizer, based on Tacotron 2, conditions the generated Mel spectrogram on the speaker embedding. An auto-regressive vocoder network is used to produce the waveform samples from the Mel spectrogram. Our model is very useful for retaining the original voice of speakers who have become partially paralyzed, because they are unable to raise the volume of their voices, and it is able to produce the natural voice of a speaker unseen during training. The proposal is compared with several state-of-the-art methods and achieves a mean opinion score of 94%.
An Ensemble BERT CHEM DDI for Prediction of Side Effects in Drug–Drug Interactions
Alpha Vijayan, B. S. Chandrasekar

The chapter introduces an innovative ensemble BERT model, BERT CHEM DDI, designed to predict side effects in drug–drug interactions. It utilizes pre-trained BERT models and XGBoost algorithms to classify drug sequences and their interactions. The model outperforms existing methods by an average of 3% across various metrics, showcasing its superior accuracy and reliability. The research involves extensive data extraction from PubChem and meticulous pre-processing techniques, including tokenization and padding. Hyperparameter tuning through grid search cross-validation further enhances the model's performance. The chapter concludes by highlighting the importance of addressing drug–drug interactions in healthcare and suggests future directions for further improvement.
Abstract: Adult primary health care is highly dependent on the management of the medications prescribed to patients. Ineffective administration of drugs can cause serious side effects and lead to death, so early identification of drug interactions helps to manage drugs effectively. We propose a framework called Ensemble BIO BERT CHEM DDI, which aims to identify potential patterns in the molecular structure of drugs. The 2013 DDI interaction dataset is considered for analysis at the molecular level. Historical analysis of similar studies has shown this to be a time-consuming, high-complexity problem to solve; a solution that can accurately predict drug–drug interactions can save millions of lives. A total of 730 drug library documents were processed to understand the relationships between drug interactions. The Ensemble BERT CHEM DDI model framework is designed to predict five different categories (negative, effect, query, severe effect, and metabolism). The performance of the model is better than that reported in the previous literature: the F1 score is increased by 3%, the model accuracy is improved by 6%, and the ROC AUC is increased by 4% compared to previously published work.
Hybrid Approach for Path Discovery in VANETs
Sharad Chauhan, Gurpreet Singh

This chapter delves into the intricacies of Vehicular Ad Hoc Networks (VANETs), highlighting their self-organized and autonomous nature. It discusses the critical role of VANETs in ensuring road safety through wireless communication and real-time information sharing among vehicles. The chapter provides a comprehensive overview of the various routing protocols used in VANETs, categorizing them into proactive, reactive, and position-based routing. It also introduces innovative routing algorithms such as the Motion Vector Routing Algorithm (MOVE) and Vehicle-Assisted Data Delivery (VADD) protocol. The proposed methodologies focus on optimizing route lifetime and stability by considering the direction of vehicle movement and the duration of node presence in the network. The chapter concludes with a detailed simulation and results section, demonstrating the improved performance of the proposed methods in terms of packet loss and throughput. The research offers valuable insights into enhancing the efficiency and reliability of VANETs, making it a must-read for professionals in the field.
Abstract: VANETs and MANETs are considered networks without any prior configuration and are autonomous in nature. Vehicular ad hoc networks (VANETs) offer a large number of new applications that do not require much infrastructure, and a routing protocol is required for many of these applications. Vehicle-to-vehicle and vehicle-to-infrastructure communication are the two types of communication possible in VANETs; they show how communication can be performed between vehicles directly or with the help of infrastructure units present at the roadside. In node-to-node communication, path establishment is considered a major issue. The proposed work uses a multicast routing technique for path establishment from source to destination. This research concludes that the performance of VANETs is improved by optimizing the route used for sending packets between nodes. The NS2 simulation environment has been used, and results are shown in the form of network delay for path discovery.
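The chapter's route-stability idea (prefer relay vehicles moving in the direction of the destination, so links live longer) can be sketched as a next-hop heuristic. The field names, geometry, and selection rule below are illustrative assumptions, not the authors' protocol or the NS2 setup:

```python
def pick_next_hop(current, dest, neighbors):
    """Prefer neighbors whose velocity points toward the destination,
    then pick the one closest to it (a direction-aware stability heuristic)."""
    dx, dy = dest[0] - current[0], dest[1] - current[1]

    def progress(n):
        # positive if the neighbor's velocity has a component toward dest
        return n["vel"][0] * dx + n["vel"][1] * dy

    candidates = [n for n in neighbors if progress(n) > 0]
    if not candidates:
        return None  # fall back to store-and-forward in a real protocol
    return min(candidates,
               key=lambda n: (n["pos"][0] - dest[0]) ** 2 +
                             (n["pos"][1] - dest[1]) ** 2)["id"]

# Three neighbor vehicles on a road toward dest = (100, 0).
neighbors = [
    {"id": 1, "pos": (10, 0), "vel": (5, 0)},    # moving toward dest
    {"id": 2, "pos": (20, 0), "vel": (-5, 0)},   # moving away, excluded
    {"id": 3, "pos": (5, 5), "vel": (2, 0)},     # toward dest, but farther
]
hop = pick_next_hop((0, 0), (100, 0), neighbors)
```

Protocols like MOVE and VADD refine this with predicted link duration and road-map knowledge rather than raw Euclidean distance.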
Voice Emotion Detection: Acoustic Features Extraction Using Multi-layer Perceptron Classifier Algorithm
Nikhil Sai Jaddu, S. R. S. Shashank, A. Suresh
The chapter focuses on voice emotion detection through acoustic feature extraction using a Multi-layer Perceptron (MLP) classifier algorithm. It introduces the challenges and importance of recognizing emotions in speech signals for human-machine interaction. The proposed method improves upon existing systems by enhancing data accuracy and stability in emotion classification. The MLP algorithm is highlighted for its effectiveness in transforming input dimensions into desired outputs, making it suitable for speech emotion recognition. The chapter also discusses the implementation process, including data preprocessing, feature selection, and classification. Experimental results demonstrate the high accuracy of the MLP classifier in recognizing speaker-dependent and speaker-independent emotions. The conclusion emphasizes the potential of MLP in speech recognition and emotion representation, setting a benchmark for future research in the field.
Abstract: Voice emotion detection is the process of predicting human emotions from the voice, along with the accuracy of that prediction, enabling better human–computer interaction. Emotions are idiosyncratic and difficult to infer from voice alone, so predicting a person's emotions is hard, but speech emotion recognition makes it possible; this is the same cue that animals such as dogs, elephants, and horses use to understand human emotions. Several attributes, such as tone, pitch, expression, and action, are considered when identifying emotions through language. Labeled samples are used to train a classifier to perform speech emotion recognition. This study uses the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, from which the most important characteristics, such as Mel frequency cepstral coefficients (MFCCs), are extracted. -
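The abstract names MFCC extraction as the key feature step but gives no implementation. As a rough sketch of the standard pipeline (frame, window, power spectrum, mel filterbank, log, DCT) — the frame sizes and coefficient counts below are common defaults, not values from the paper — one could write:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Minimal MFCC: frame -> window -> power spectrum -> mel filterbank -> log -> DCT."""
    # Frame the signal and apply a Hann window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular mel filterbank between 0 Hz and Nyquist.
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = mel_to_hz(np.linspace(0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_mel = np.log(power @ fbank.T + 1e-10)
    # Type-II DCT decorrelates the filterbank energies; keep the first n_mfcc.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1)) / (2 * n_mels))
    return log_mel @ dct.T

# One second of a 440 Hz tone as a toy input.
t = np.arange(16000) / 16000
feats = mfcc(np.sin(2 * np.pi * 440 * t))
print(feats.shape)
```

The resulting per-frame coefficient vectors are what an MLP classifier such as the one described in the chapter would consume.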
Link and Coverage Analysis of Millimetre (mm) Wave Propagation for 5G Networks Using Ray Tracing
Animesh Tripathi, Pradeep Kumar Tiwari, Shiv Prakash, Gaurav Srivastava, Narendra K. Shukla
The chapter delves into the critical role of millimeter-wave (mm-wave) propagation in meeting the escalating data demands of 5G networks. It highlights the advantages of using the 32 GHz band, such as increased spectral efficiency and stronger directivity. The authors employ ray tracing methods to simulate and analyze the channel characteristics, providing a comprehensive study of received power, angles of departure and arrival, and path range. The research focuses on a dense urban environment, using the University of Allahabad's science faculty as a case study. The findings reveal the limitations of mm-wave signals in penetrating buildings and the importance of beam steering to enhance received power. The chapter concludes with a discussion on the future scope of 5G and 6G networks, emphasizing the need for further research on ray tracing accuracy and the modeling of smaller objects.
Abstract: Millimetre-wave (mm-wave) technology is a leading candidate for fifth-generation (5G) networks worldwide because its higher frequency bands offer a wider spectrum. Mm-wave communications are now seen as a viable 5G technology: they can improve speeds and relieve congestion where present cellular networks fall short. Link and coverage analysis plays a vital role before any network is deployed, since the spectrum and its behaviour should be understood for every scenario. Deployment of 5G networks has started in many countries and could begin soon in India, so link and coverage analysis is necessary beforehand to achieve better network coverage in every area. Several implementation challenges remain, including a multi-phase characterization of the mm-wave channel that captures trustworthy, high-speed channel interfaces and system designs. In this paper, we analyse links in non-line-of-sight conditions and coverage using the ray tracing method with different numbers of reflections and launched rays on the science faculty campus of the University of Allahabad (A Central University). -
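The abstract motivates link analysis before deployment. The line-of-sight baseline for any such analysis is free-space path loss, which at mm-wave frequencies is already severe at short range. A small sketch at 32 GHz (the band highlighted in the chapter summary; the distances below are arbitrary examples) illustrates why coverage per cell shrinks:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for d in (10, 100, 500):
    print(f"{d:4d} m at 32 GHz: {fspl_db(d, 32e9):.1f} dB")
```

Ray tracing then adds reflection, diffraction, and blockage losses on top of this baseline, which is why the paper's non-line-of-sight links depend so strongly on the number of reflections modelled.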
Student Attendance Monitoring System Using Facial Recognition
Reshma B. Wankhade, S. W. Mohod, R. R. Keole, T. R. Mahore, Sagar Dhanraj Pande
The chapter delves into the development of a student attendance monitoring system using facial recognition technology. It begins by discussing the evolution and applications of facial recognition, from detecting frauds to aiding law enforcement. The traditional methods of attendance marking are critiqued for their inefficiency and the prevalence of proxy attendance. The chapter then introduces a novel secured framework for attendance monitoring, utilizing facial landmark algorithms for detection and verification. The proposed system automates attendance during class hours, exams, and other activities, significantly reducing manual effort. The implementation details, system architecture, and results analysis are thoroughly discussed, showcasing the system's efficiency and potential for future enhancements with advanced algorithms like YOLO.
Abstract: With the advancement of technology, related frauds and malpractices have been on the rise, and technology has also been a savior in many cases. Facial recognition can be considered such a savior against numerous malpractices and fraudulent activities. Beyond fraud prevention and detection, facial recognition and automated face detection tools play an important role in attendance management systems, detection of criminals, and more. Document image analysis is used in detecting frauds, but the proposed model relies on images or video. This paper explains the implementation of facial recognition techniques along with their features and applications, and shows how facial recognition technology is being introduced and applied across numerous aspects of life. It also highlights the drawbacks and limitations of facial recognition technologies and presents methods and ideas by which the technology's performance can be enhanced and its limitations overcome. A novel framework for monitoring student attendance has been implemented: a web application based on the Django framework was designed for easily monitoring and maintaining student attendance using the facial landmark algorithm. -
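The abstract describes verifying students against enrolled faces. The chapter's Django and landmark details are not reproduced here, but the matching step in such systems typically reduces to comparing face embeddings by distance. A minimal sketch under that assumption (the function, student ids, threshold, and 128-dimensional embeddings are all illustrative, not from the paper):

```python
import numpy as np

def mark_attendance(probe, enrolled, threshold=0.6):
    """Return the best-matching student id, or None if no face is close enough.

    probe: face embedding of the captured frame (1-D array).
    enrolled: dict mapping student id -> enrolled embedding.
    Uses Euclidean distance, as common face-embedding pipelines do.
    """
    best_id, best_dist = None, float("inf")
    for sid, emb in enrolled.items():
        d = float(np.linalg.norm(probe - emb))
        if d < best_dist:
            best_id, best_dist = sid, d
    return best_id if best_dist <= threshold else None

rng = np.random.default_rng(0)
db = {"s001": rng.normal(size=128), "s002": rng.normal(size=128)}
probe = db["s001"] + rng.normal(scale=0.01, size=128)  # same face, slight noise
print(mark_attendance(probe, db))
```

The threshold trades off false accepts (proxy attendance) against false rejects, which is exactly the limitation the paper discusses.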
Credit Card Fraud Detection Using Various Machine Learning and Deep Learning Approaches
Ashvini S. Gorte, S. W. Mohod, R. R. Keole, T. R. Mahore, Sagar Pande
This chapter delves into the critical issue of credit card fraud, which has become increasingly prevalent with the widespread use of credit cards. Traditional methods for detecting fraud have proven inadequate, leading to the development of advanced machine learning and deep learning approaches. The chapter examines various algorithms such as Naive Bayes, Support Vector Machines (SVM), Artificial Neural Networks, Decision Trees, Random Forest, Deep Neural Networks, and Logistic Regression, evaluating their accuracy and efficiency in detecting fraudulent activities. A hybrid model combining multiple algorithms is proposed to enhance detection capabilities. The chapter also discusses the importance of dataset selection and preprocessing, as well as the challenges in training models to achieve high accuracy. The results and analysis section highlights the superior performance of deep neural networks compared to other algorithms. The conclusion outlines the future scope of these systems, suggesting the use of optimization techniques and advanced deep learning algorithms for further improvement.
Abstract: It is evident that technology has evolved beyond expectations and reached new heights in a short span of time, bringing many changes to our lives; one such change is the replacement of traditional payment methods with the credit card system. Credit card use increases most during online shopping, and with the huge worldwide demand for credit cards, credit card fraud cases are also increasing rapidly. In this paper, four machine learning algorithms, namely decision tree, random forest, logistic regression, and Naïve Bayes, have been used to train the models. Deep neural networks have also been implemented for model training and give more promising results than the machine learning algorithms. The accuracy of each algorithm used in the implementation of credit card fraud detection has been compared and analyzed. -
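One of the four algorithms the abstract lists, logistic regression, can be sketched end to end on synthetic data to show the shape of such an experiment. This toy sketch uses fabricated, clearly synthetic transactions (two features, 5% fraud rate) rather than the paper's dataset, and plain gradient descent rather than any particular library:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic, imbalanced "transactions": 2 features, roughly 5% fraud.
n = 2000
y = (rng.random(n) < 0.05).astype(float)
X = rng.normal(size=(n, 2)) + 3.0 * y[:, None]  # fraud shifted in feature space

# Logistic regression trained by batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * float(np.mean(p - y))

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(float)
accuracy = float(np.mean(pred == y))
recall = float(np.sum((pred == 1) & (y == 1)) / max(np.sum(y == 1), 1))
print(f"accuracy={accuracy:.3f} fraud recall={recall:.3f}")
```

Note that on imbalanced fraud data, accuracy alone is misleading (predicting "no fraud" everywhere already scores ~95% here), which is why recall on the fraud class is reported alongside it.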
Forest Fire Detection and Prevention System
K. R. Kavitha, S. Vijayalakshmi, B. Murali Babu, D. Rini Roshan, K. Kalaivani
The chapter delves into the critical issue of forest fire detection and prevention, emphasizing the need for advanced systems in modern industrialized societies. It introduces a sophisticated Forest Fire Detection and Prevention System that integrates multiple sensors, including temperature, humidity, LDR, ultrasonic, and accelerometer sensors, with Arduino UNO. The system is designed to detect abnormal conditions such as increased temperature, smoke, or unusual moisture levels, triggering immediate alerts via GSM modules. The chapter highlights the use of machine learning techniques to enhance detection accuracy and prevent false positives. Additionally, it presents a detailed block diagram and working principle of the system, showcasing its potential to reduce catastrophic fire events and protect natural resources.
Abstract: Forest fires have worsened in recent times, destroying many lives, homes, and millions of trees and plants. In India alone, they cause a loss of about ₹440 crore every year; taking all the world's forests together, the loss cannot be predicted. Fires can be detected and anticipated using an Arduino UNO board with a GSM module. In this project, sensors are interfaced with the Arduino UNO to detect forest fire. A liquid crystal display (LCD) and the GSM module are interfaced with the Arduino UNO; the values taken from the sensors are shown on the LCD as the current prediction value. With the help of the GSM module, a mobile phone receives the current fire prediction.
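The system's decision step — raise an alert when sensor readings cross abnormal thresholds — runs on an Arduino, but the logic can be sketched in plain Python. The thresholds and rule below are illustrative assumptions, not values from the paper:

```python
def fire_alert(temp_c, humidity_pct, smoke_ppm,
               temp_max=55.0, humidity_min=20.0, smoke_max=300.0):
    """Flag a possible fire when the air is hot AND dry, or when smoke
    exceeds its limit. Thresholds here are placeholder assumptions."""
    hot_and_dry = temp_c > temp_max and humidity_pct < humidity_min
    smoky = smoke_ppm > smoke_max
    if hot_and_dry or smoky:
        return "ALERT: possible forest fire"  # on hardware: show on LCD, send SMS via GSM
    return "normal"

print(fire_alert(62.0, 12.0, 150.0))  # hot and dry
print(fire_alert(30.0, 55.0, 80.0))   # typical conditions
```

Requiring both high temperature and low humidity (rather than either alone) is one simple way to cut false positives, which the chapter addresses more thoroughly with machine learning.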
- Title
- International Conference on Innovative Computing and Communications
- Editors
-
Deepak Gupta
Ashish Khanna
Aboul Ella Hassanien
Sameer Anand
Ajay Jaiswal
- Copyright Year
- 2023
- Publisher
- Springer Nature Singapore
- Electronic ISBN
- 978-981-19-3679-1
- Print ISBN
- 978-981-19-3678-4
- DOI
- https://doi.org/10.1007/978-981-19-3679-1