Proceedings of the 7th International Conference on Communications and Cyber Physical Engineering
ICCCE 2024, 28–29 February, Hyderabad, India
- Year: 2026
- Type: Book
- Editors: Amit Kumar, Stefan Mozar
- Book Series: Lecture Notes in Electrical Engineering
- Publisher: Springer Nature Singapore
About this book
This book includes peer-reviewed high-quality articles presented at the 7th International Conference on Communications and Cyber-Physical Engineering (ICCCE 2024), held on July 19 and 20, 2024, at G Narayanamma Institute of Technology & Science, Hyderabad, India. ICCCE is one of the most prestigious conferences in the field of networking and communication technology, offering in-depth coverage of the latest developments in voice, data, image, and multimedia. Discussing advances in voice and data communication engineering, cyber-physical systems, network science, communication software, image and multimedia processing research and applications, and related communication technologies, it includes contributions from both academia and industry. The book is a valuable resource for scientists, research scholars, and PG students formulating their research ideas and looking for future directions in these areas. It further serves as a reference work for understanding the latest engineering and technologies used by practicing engineers in the field of communication engineering.
Table of Contents
-
Navigating Complexity: The Role of Tableau in Driving Data-Driven Decision Making
Kesavulu Poola, Pavakumari, J. Anil Kumar, Akkyam Vani

This chapter delves into the pivotal role of Tableau in driving data-driven decision-making, emphasizing its user-friendly interface and powerful visualization capabilities. It explores how Tableau enables organizations to leverage big data for competitive advantage, with a focus on its benefits such as quick data processing, interactive dashboards, and storytelling features. The text also examines the impact of Tableau on organizational performance, highlighting improvements in data accuracy, decision-making speed, and overall business outcomes. Through case studies and practical examples, it demonstrates how Tableau has democratized data analytics, making it accessible to a broader audience. Additionally, the chapter provides a detailed methodology for using Tableau, including steps for importing and cleaning data, and creating various types of visualizations. It also presents an in-depth analysis of data science salaries across different countries, roles, and expertise levels, offering valuable insights into the data science job market. The conclusion underscores Tableau's transformative potential in fostering innovation and agility in today's competitive landscape.
This summary of the content was generated with the help of AI.
Abstract: This paper delves into the transformative role of Tableau as a business intelligence tool in the data-analysis landscape, showcasing its profound impact on extracting insights and driving informed decision-making. Using real-time data, we created an interactive dashboard for optimal decision making and storytelling. Through a comprehensive exploration of Tableau's intuitive interface, powerful visualizations, and advanced analytical capabilities, the paper elucidates how organizations can harness the platform, seamlessly integrating diverse data sources and offering dynamic dashboards, to reveal the full value of their data. This abstract underscores Tableau's significance in empowering users across industries to navigate the intricacies of data analysis, fostering innovation, agility, and competitive advantage in an increasingly data-driven world.
IoT Based Solar Powered Multipurpose Autonomous Robot for Effective Farming: A Prototype
Morampudi Rajitha, Ch. Shravani, P. Rajesh Kumar, A. Keerthi, M. Navya, D. Meghana

This chapter delves into the development and application of an IoT-based, solar-powered multipurpose autonomous robot designed to enhance farming efficiency. The robot, controlled via an Arduino ATmega328 and powered by solar energy, is equipped to perform essential agricultural tasks such as seeding, plowing, and watering. The text provides a detailed overview of the robot's components, including the solar panel, DC motors, WiFi module, and L293D motor driver, and explains how these elements work together to automate farming processes. It also discusses the challenges faced by modern agriculture, such as climate change, water scarcity, and soil degradation, and how this robot can help address these issues. The chapter includes practical results from prototype testing, demonstrating the robot's ability to accurately sow seeds and manage water distribution. The conclusion highlights the potential of this technology to reduce labor and time requirements, ultimately improving productivity and sustainability in agriculture.
Abstract: Across India, 70% of the population depends on agriculture for their livelihood. Numerous operations are performed in the agricultural field, including plowing, mowing grass, and sowing seeds. Sustainable and effective farming methods have been made possible by integrating robotics and renewable energy sources into agriculture. With its sophisticated sensors, actuators, and autonomous navigation features, the proposed robot can carry out a number of chores on its own, including mowing the lawn, planting seeds, watering crops, and plowing the soil. The robot is powered by a solar panel and controlled over the Internet of Things, which provides the signals for the necessary mechanisms and movements. By combining renewable energy, advanced motor technology, an Arduino controller, and the Internet of Things, our solar-powered autonomous multipurpose agriculture robot will increase farming productivity while reducing environmental damage.
Revolutionizing Healthcare Logistics: Elevating Drug Accountability Through GIS-Enabled Tracing
Padmavati E. Gundgurti, Poornima Gottimukkala, Nitisha Thallapally, Vaanisha Praveen Mahendrakar

This chapter delves into the critical challenges faced by pharmaceutical supply chains, including the proliferation of counterfeit drugs and the lack of transparency. It highlights the limitations of conventional tracing methods and introduces Geographic Information Systems (GIS) and blockchain technology as innovative solutions. The text explores the benefits of integrating GIS-enabled tracing with blockchain, such as real-time visibility, immutable records, and enhanced regulatory compliance. It also discusses the technical complexities and challenges associated with this integration. The chapter presents a proposed system architecture using Ethereum blockchain and smart contracts, detailing the roles of various stakeholders and the flow of drug supply chain processes. Additionally, it showcases results from a simulation of vehicle tracking using GIS technology. The conclusion emphasizes the need for further research to enhance the efficacy of the proposed approach, underscoring its potential to combat counterfeit drugs and improve transparency in pharmaceutical supply chains.
Abstract: Traditional methods for tracking pharmaceuticals are unreliable, leaving the door open for counterfeits and theft. GIS and blockchain offer a powerful solution. In this paper, we aim to leverage the Ethereum blockchain and smart contracts to enhance traceability and transparency in the pharmaceutical supply chain, addressing counterfeit drugs. By registering stakeholders and providing verification, ownership transfers are immutably recorded on the blockchain. Integrating Geographic Information System (GIS) technology further enhances spatial data analysis, aiding data capture, storage, and decision-making. This approach reduces human intervention, ensures stakeholder accountability, and fosters trust, ultimately ensuring consumers receive authentic medicines.
Enhancing Image Captioning Using Deep Learning
Sireesha Vikkurty, Nagaratna P. Hegde, Abhijith Koppula, Yagnan Reddy Nimma

This chapter delves into the intersection of computer vision and natural language processing, focusing on the advancements in image captioning through deep learning techniques. The study introduces an attention-based Encoder-Decoder architecture that combines convolutional features from pre-trained ImageNet models with object features from the Flickr8K dataset. The research methodology involves a detailed process of image feature extraction, text preprocessing, and model training using CNN and LSTM networks. The proposed architecture is evaluated using the BLEU score, with ResNet outperforming VGG16 and Xception in generating accurate and contextually appropriate captions. The chapter also discusses future work, including the potential for fine-tuning CNN models and incorporating an attention module to further enhance performance. This comprehensive exploration provides valuable insights into the current state and future directions of image captioning technology.
Abstract: In image captioning, the challenge lies in creating coherent textual descriptions for images. Traditionally, image descriptions have been limited to single-word annotations based on object detection. Recent progress mainly relies on deep learning, notably Encoder-Decoder models that integrate Convolutional Neural Networks for feature extraction. However, there has been limited exploration of integrating object detection features. An AI model employing a regenerative neural architecture seamlessly merges computer vision and NLP to autonomously generate descriptive phrases for images. By utilizing Convolutional Neural Networks and Recurrent Neural Networks, the model accurately and fluently describes images, showcasing linguistic proficiency and comprehension of visual content. Evaluations across diverse datasets consistently validate the model's efficacy in delivering precise and articulate image descriptions.
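The BLEU score used to compare the caption models above is, at its core, clipped n-gram precision between a generated caption and a reference. As a hedged illustration only (not the authors' evaluation code), the unigram case (BLEU-1, brevity penalty omitted) can be sketched in pure Python:

```python
from collections import Counter

def bleu1(candidate, reference):
    """Clipped unigram precision (BLEU-1, brevity penalty omitted)."""
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    # each candidate word is credited at most as often as it occurs in the reference
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand)

score = bleu1("a dog runs on the grass", "the dog runs across the grass")
# 4 of the 6 candidate words are matched in the reference
```

Full BLEU additionally averages higher-order n-gram precisions and applies a brevity penalty; libraries such as NLTK provide the complete metric.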
Enhanced Road Safety Through Video-Based Helmet Violation Detection
Susmith Reddy Duggimpudi, Rohan Reddy Solipuram, Mukesh Kottur, Saroja Kumar Rout, Nilamadhab Mishra, S. Eswar Reddy

This chapter explores the implementation of a video-based helmet violation detection system to improve road safety, particularly in regions with high motorcycle traffic. The system utilizes Faster R-CNN, a deep learning model, to accurately detect helmet violations in real-time surveillance footage. The chapter delves into the methodology, including data preprocessing, model training, and performance evaluation, demonstrating the system's effectiveness in challenging environments. It also compares various computational vision systems, emphasizing the superiority of the proposed solution. The results indicate promising accuracy and reliability, underscoring the potential of automated helmet detection in reducing motorcycle accidents and promoting compliance with safety regulations.
Abstract: In recent times, computer technology has been increasingly applied to recognize motorcycle helmets in real-time surveillance footage. Deep learning techniques, particularly for object detection and classification, have gained significant popularity for this purpose. However, several challenges, such as low resolution, poor lighting, unfavorable weather conditions, and occlusion, continue to impact the accuracy of existing models in detecting motorcycle helmets. To overcome these limitations, a novel approach utilizing the Faster R-CNN model has been proposed. Here, the input image is used to train the Region Proposal Network (RPN), and the Faster R-CNN model is then trained with the RPN weights. Live surveillance footage has been used to identify motorcycle helmets using this method, and the results have been promising, with a 95% accuracy rate. As a result, real-time helmet detection for motorcyclists is made possible by employing deep learning approaches, and the suggested strategy is an effective way to overcome the shortcomings of existing models.
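A core step inside any Faster R-CNN pipeline, including the one described above, is pruning overlapping region proposals with non-maximum suppression (NMS) over intersection-over-union (IoU). The sketch below is a generic, minimal illustration of greedy NMS; the paper's actual thresholds and implementation are not specified here:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

In practice a framework such as torchvision supplies both the Faster R-CNN model and a batched NMS routine; the pure-Python version above only shows the logic.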
GreenWatch: Revolutionizing Farming with Plant Disease Detection by CNN
Aadithya Vikram Budarapu, John Hyde Gaddam, Ram Boyedi, Thoshan Kumar Muthyala, Sunanda Yadla

This chapter explores the critical role of early disease detection in improving agricultural productivity, particularly in India where farming is a significant economic sector. It delves into various deep learning models, including InceptionV3, ResNet50, and ResNet152v2, which are trained on extensive plant image datasets to identify diseases with high accuracy. The ensemble model, combining these architectures, achieves an impressive accuracy of 98.29%, revolutionizing crop protection. The project also emphasizes the importance of user-friendly technology, bridging the gap between complex deep learning and everyday plant care. By uploading an image, users receive diagnoses and treatment recommendations, empowering them to care for their plants effectively. The chapter discusses the strategic intent behind the technology choices and their collective impact on accessibility and user empowerment, ultimately contributing to sustainable agricultural practices and global food security.
Abstract: The project addresses a critical challenge in Indian agriculture: plant disease detection. It goes beyond simply identifying diseases by proposing a system that categorizes them by severity using statistical farming data. This additional information can be crucial for farmers in determining the most appropriate course of action. The project explores Convolutional Neural Networks (CNNs) – specifically ResNet50, ResNet-152v2, and InceptionV3 – as a deep learning approach for disease identification. By analyzing large datasets of plant images, the project aims to develop a highly effective disease detection system. This system has the potential to revolutionize agricultural practices in India by offering real-time disease detection and surpassing the capabilities of existing methods. The possibility of integrating the system with sensors further enhances its potential for comprehensive farm monitoring.
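The ensemble of ResNet50, ResNet-152v2, and InceptionV3 described above must combine the three models' outputs in some way; the abstract does not state the combination rule, so the sketch below assumes simple per-class probability averaging, one common choice:

```python
def ensemble_predict(*model_probs):
    """Average per-class probabilities from several models, return the argmax class.

    Each argument is one model's probability vector for a single image.
    """
    n_models = len(model_probs)
    avg = [sum(ps) / n_models for ps in zip(*model_probs)]
    return max(range(len(avg)), key=avg.__getitem__)

# hypothetical softmax outputs from three CNNs over three disease classes
label = ensemble_predict([0.7, 0.2, 0.1], [0.4, 0.5, 0.1], [0.6, 0.3, 0.1])
```

Other schemes (majority voting, weighted averaging by validation accuracy) are equally plausible; only the authors' paper states which was used.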
Modelling of Li-Ion Battery Pack and Simulation of BMS
M. Chakravarthy, Dheeraj Gurijala, Bhargavi Nagavarapu, Sannihith Peruka

This chapter delves into the modeling of Li-ion battery packs using Thevenin's equivalent circuit model, which effectively captures the dynamic and static characteristics of batteries. The study utilizes a high-capacity Lithium-ion battery pack with specific nominal capacity, voltage, and current ratings, critical for accurate performance and safety assessments. The pulse discharge method is employed to extract key parameters such as internal ohmic resistance and polarization resistance and capacitance, essential for battery simulation. The chapter also focuses on the simulation of Battery Management Systems (BMS), highlighting its core functions: status monitoring, state estimation, and optimization management. A detailed control circuit using universal logic gates is developed to manage charge and discharge processes, ensuring safe and efficient battery operation. The relationship between state of charge (SoC) and open circuit voltage (OCV) is explored, with various techniques applied for accurate SoC estimation. The chapter concludes with the successful implementation of the Thevenin model and the BMS control circuit, providing a versatile and cost-effective solution for battery management.
Abstract: In recent years, environmental pollution and energy issues have gained increasing attention. Li-ion batteries have attracted particular interest due to their high energy density, safety, and low pollution. Building an accurate battery model is therefore crucial, and the battery management system (BMS) is essential for the battery's safe operation. Accurate state of charge (SoC) estimation and protection of the battery are the fundamental functions of a BMS, and they significantly improve battery life and enhance performance. The decision to use the first-order Thevenin equivalent circuit model for the cell is based on balancing complexity with accuracy. Parameters such as open-circuit voltage (OCV), internal ohmic resistance, polarized internal resistance, and capacitance are crucial for characterizing cell behaviour; they are determined using the pulse discharge method and complete the modelling of the battery. Managing both charging and discharging processes is one of the fundamental operations of a BMS; this function is carried out using a control circuit consisting mainly of universal logic gates, which is simple to develop and implement. In this paper, we focus on the extraction of parameters using the pulse discharge method for a battery pack, and on discharge and charge management using a control circuit in the BMS.
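The first-order Thevenin model described above places an ohmic resistance R0 in series with an Rp-Cp polarization branch, so the terminal voltage is v_t = OCV - i*R0 - v_p with dv_p/dt = i/Cp - v_p/(Rp*Cp). A minimal forward-Euler sketch of the constant-current discharge response (illustrative parameter values, not the paper's measured ones):

```python
def thevenin_discharge(ocv, r0, rp, cp, current, t_end, dt=0.1):
    """Terminal voltage of a first-order Thevenin cell under constant discharge.

    v_t = ocv - current*r0 - v_p, with dv_p/dt = current/cp - v_p/(rp*cp).
    OCV is held fixed here (short pulse; SoC change neglected).
    """
    v_p, t, out = 0.0, 0.0, []
    while t <= t_end:
        out.append(ocv - current * r0 - v_p)
        v_p += dt * (current / cp - v_p / (rp * cp))  # Euler step
        t += dt
    return out

# hypothetical cell: 3.7 V OCV, 50 mOhm ohmic, 20 mOhm / 1000 F polarization, 2 A pulse
v = thevenin_discharge(ocv=3.7, r0=0.05, rp=0.02, cp=1000.0, current=2.0, t_end=60.0)
```

The instantaneous drop at the start of the pulse gives R0, while the slow exponential settling toward i*Rp (time constant Rp*Cp) gives the polarization parameters; this is exactly what the pulse discharge method fits.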
Performance Analysis of Effective Battery Management System for Electric Vehicles
M. T. L. Gayatri, Odela Sathwika

This chapter delves into the performance analysis of an effective battery management system for electric vehicles, focusing on the integration of supercapacitors to enhance system efficiency and durability. The text explores the dynamic power management system, highlighting the role of supercapacitors in handling high charging and discharging current peaks, thereby extending the battery pack's longevity. It also discusses the use of a fuzzy logic controller for regulating DC link voltage and the implementation of a bidirectional DC-DC converter for optimal power flow. The chapter presents simulation results that demonstrate the system's performance under various conditions, including irradiance and temperature variations. Additionally, it covers the control algorithms used for maximum power point tracking (MPPT) and the overall system output. The conclusion emphasizes the necessity of incorporating supercapacitors in electric vehicle battery designs to meet peak power demands and improve system durability.
Abstract: An energy storage system (ESS) is used to store the extra power generated during periods of high irradiance and to provide a steady supply of power to meet the load demand during periods of low irradiance. Traditionally, energy storage systems were made up of battery banks that could store and continuously supply power to the load. Although batteries' high energy density makes them ideal as consistent power sources, delivering significant current surges shortens their lifespan. Combining batteries with high-power-density storage such as supercapacitors addresses this: the electric vehicle's battery provides continuous energy while the supercapacitor supplies instantaneous load peaks. A stand-alone photovoltaic battery with a supercapacitor is proposed for electric vehicles in this work, and an energy management scheme is suggested to regulate energy supply and storage across the system.
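The paper regulates the battery/supercapacitor split with a fuzzy logic controller; as a much simpler illustration of the same idea, a first-order low-pass filter can route the slow component of load demand to the battery and the fast residual peaks to the supercapacitor. This sketch is a generic teaching example, not the authors' controller:

```python
def split_power(demand, dt=1.0, tau=30.0):
    """Filter-based power split: battery tracks low-pass-filtered demand,
    the supercapacitor absorbs the fast residual.

    demand: load power samples [W]; tau: filter time constant [s].
    Returns (battery_power, supercap_power) per sample.
    """
    batt_p, out = 0.0, []
    for p in demand:
        batt_p += dt / tau * (p - batt_p)   # first-order low-pass filter
        out.append((batt_p, p - batt_p))    # residual goes to the supercap
    return out
```

On a step in demand the supercapacitor initially carries almost the whole surge, then hands it over to the battery over a few time constants, which is the behaviour the abstract attributes to the hybrid ESS.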
Revolutionizing Rice Farming: A Hybrid Deep Learning Approach for Automated Detection of Plant Diseases from Leaf Images
B. Sarada, Siva Sankar Namani, K. Thirupathi Rao

This chapter explores the implementation of a hybrid deep learning system for the automated detection of rice plant diseases from leaf images. The study focuses on three major diseases: bacterial blight, brown spot, and leaf blast, which significantly impact rice yields. The system employs deep learning models such as DarkNet19, MobileNetV2, and ResNet18 for feature extraction, followed by traditional machine learning classifiers like SVM, KNN, and Ensemble methods for disease classification. The research highlights the importance of early disease detection in mitigating yield losses and improving crop management. The proposed method achieves high classification accuracy, demonstrating its potential as a valuable tool for farmers and agriculturalists. The study also discusses future enhancements, including real-time disease identification and classification, to further improve the system's practicality and effectiveness.
Abstract: Rice cultivation is widespread globally, particularly in Asian countries, serving as a staple food for a significant portion of the world's population. However, agricultural challenges such as rice diseases have plagued farmers for generations. Identifying these illnesses is essential, as severe infections may result in crop failures. Automated disease monitoring through rice plant leaf photos is essential for shifting from traditional, experience-based decision-making to data-driven approaches in agriculture. In this context, Artificial Intelligence (AI) has emerged as a promising instrument across multiple scientific fields, including plant pathology. This research presents a hybrid deep learning system developed for the automated identification of rice plant diseases with a curated collection of leaf images. The design utilizes MobileNetV2, a validated model, to extract profound information from input photos. These attributes are subsequently employed as inputs for various machine learning classifiers utilizing distinct kernel functions, implementing a rigorous 10-fold validation approach. The findings illustrate the efficacy of the suggested hybrid system, with an exceptional classification accuracy of 98.6%, a specificity of 98.85%, and a sensitivity of 97.25% utilizing a medium-sized neural network. The system not only outperforms traditional methods but also showcases computational efficiency and speed. Furthermore, the system is primed for testing with larger and more diverse datasets, paving the way for enhanced disease diagnosis in rice plants.
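The 10-fold validation mentioned above partitions the dataset into ten disjoint train/test splits, training on nine folds and testing on the tenth, rotating through all ten. A minimal index-splitting sketch (a hypothetical helper for illustration; libraries such as scikit-learn provide this as `KFold`):

```python
def kfold_indices(n, k=10):
    """Yield (train_indices, test_indices) for k-fold cross-validation.

    The last fold absorbs any remainder when n is not divisible by k.
    """
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[(k - 1) * fold:]
        test_set = set(test)
        train = [j for j in idx if j not in test_set]
        yield train, test
```

The extracted MobileNetV2 feature vectors would be indexed by these splits, and the reported accuracy/specificity/sensitivity averaged over the ten test folds.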
DDoS Detection Using ML Algorithm
Ashok Kumar Nanda, S. Harshavardhan Reddy, S. Rahul, V. Karthikeshwar

This chapter delves into the critical role of machine learning in detecting and mitigating Distributed Denial-of-Service (DDoS) attacks, a growing threat in the digital landscape. It explores various machine learning algorithms, including Random Forest, XGBoost, and K-Nearest Neighbors, and their effectiveness in identifying and classifying network traffic as benign or malicious. The text provides a detailed workflow of the machine learning process, from data preprocessing to model validation, highlighting the importance of feature engineering and data splitting. It also discusses the performance metrics used to evaluate the models, such as accuracy, precision, recall, and the F1 score. The chapter concludes with a comparison of the algorithms, showing that Random Forest achieved the highest accuracy of 98%, followed by Decision Tree at 96% and K-Nearest Neighbors at 95%. This comprehensive analysis underscores the potential of machine learning in enhancing network security and protecting against evolving cyber threats.
Abstract: The surge in connected devices has led to an increase in distributed denial-of-service (DDoS) attacks, posing a threat to proper network functionality. Machine learning (ML) techniques have shown promise in detecting and mitigating DDoS attacks by analyzing traffic patterns for anomalies. This paper presents an ML-based approach for detecting DDoS attacks, utilizing feature extraction to isolate relevant characteristics from network traffic data. The algorithm incorporates random forest, decision tree, and k-nearest neighbors classifiers to distinguish between normal and anomalous traffic. Our proposed method stands as a reliable and effective means to identify DDoS attacks, enhancing overall security and preventing misuse for malicious activities. By showcasing the potential of machine learning, this approach addresses the pressing issue of cyber-attacks; we argue that a proactive ML solution is essential for more accurate and efficient mitigation that guarantees continuous network services.
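The accuracy, precision, recall, and F1 metrics used to compare the classifiers all derive from confusion-matrix counts over binary attack/benign labels. A small self-contained sketch (generic metric code, not tied to the paper's dataset):

```python
def prf1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = attack, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

In a full pipeline these would be computed on a held-out test split of the traffic features, e.g. via scikit-learn's `classification_report`, for each of the random forest, decision tree, and KNN models.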
A Review on Intelligent Exam Monitoring System Using Deep Learning
T. Neha, T. Vennela, N. Ravithreni, A. Ch. S. Kanakadurga, T. Sneha

This chapter delves into the critical issue of exam integrity and the role of deep learning in combating cheating. It explores various deep learning models and systems designed to detect and prevent cheating behaviors, such as copying, using unauthorized aids, and unauthorized communication. The review highlights the limitations of traditional monitoring methods and the advantages of deep learning techniques, which can analyze large datasets with high speed and accuracy. Key topics include the use of computer vision and deep learning for suspicious activity detection, the integration of facial recognition and eye tracking for automated proctoring, and the development of hybrid deep learning techniques for comprehensive cheating detection. The chapter also discusses the challenges and future directions in this field, emphasizing the need for continuous improvement to ensure academic integrity. The conclusion underscores the potential of deep learning to revolutionize exam monitoring and maintain the integrity of offline exams.
Abstract: This paper analyzes the common issue of exam cheating in offline classroom settings and explores how deep learning approaches can offer viable solutions. Although exam integrity is essential to fair evaluation and merit-based practices, conventional monitoring techniques often fail to spot complex forms of cheating, especially in the context of the COVID-19 pandemic, which increased opportunities for academic dishonesty as a result of the transition to remote learning. Deep learning techniques can quickly and accurately analyze large amounts of data by utilizing artificial intelligence and machine learning, making it easier to create effective monitoring systems that can identify different kinds of dishonest activity. The aim of this study is to show how deep learning techniques may effectively minimize cheating and maintain the integrity of offline exams.
Revolutionizing Web Security: The Efficiency of a Reusable CAPTCHA Security Engine
Yamsani Vaishnavi, Touria Tanazzum, Hari Bharadwaj, S. Ranjithreddy, Kottu Santosh Kumar, Saroja Kumar Rout

This chapter delves into the critical role of CAPTCHA technology in distinguishing human users from automated bots, focusing on its application in enhancing online security. It explores various CAPTCHA methods, including text-based and image-based CAPTCHAs, and their respective AI challenges. The text provides a detailed overview of the implementation process, highlighting the integration of CAPTCHA within a job application web application built with Flask. It also discusses the ethical considerations and privacy implications of CAPTCHA development. The chapter concludes with a discussion on the effectiveness of traditional CAPTCHA methods and the potential of future vision-based CAPTCHAs, offering insights into the ongoing efforts to balance security, usability, and privacy in online platforms.
Abstract: Using CAPTCHA helps shield computers from malevolent bots. The vulnerability of modern text-based CAPTCHAs has been exposed by the emergence of deep learning techniques. Since image-based CAPTCHAs require a lot of labour to create, image-based visual reasoning is emerging as a new direction in this progression. The Visual Turing Test (VTT) CAPTCHA was recently released by Tencent and appears to be the first deployment of the visual reasoning approach. Other CAPTCHA service providers (such as Dingxiang, NetEase, and Geetest) have also begun to create visual reasoning frameworks of their own to combat bots. It therefore makes sense to pose a fundamental question: are CAPTCHAs that rely on visual reasoning as safe as their designers believe? In this paper, we introduce the first method for solving visual reasoning-based CAPTCHAs. On the VTT CAPTCHA, our modular strategy and holistic approach yielded overall success rates of 88.0% and 67.3%, respectively. The research shows that visual reasoning CAPTCHAs are not as secure as previously believed; attempts to apply difficult, original AI tasks to CAPTCHAs have not been successful thus far. Based on our attacks, we also provide recommendations for more secure yet visually engaging CAPTCHAs.
An Empirical Analysis of Indian Banks’ Spatial Efficiency Based on Performance and Production Approaches
Putha Viswanatha Kumar, Vulichi Pavankumari, Venkataramana Musala, S. Damodharan

This chapter presents an empirical analysis of the spatial and intertemporal efficiency of Indian banks from 2005 to 2021. The study employs traditional and window-based Data Envelopment Analysis (DEA) methods to evaluate the efficiency of public, private, and foreign banks. Key topics include the use of six different efficiency types: Deposit Mobilization Efficiency (DME), Fund Conversion Efficiency (FCE), Off-Balance Sheet Activities Efficiency (OBAE), Cost Revenue Management Efficiency (CRME), Production Approach Efficiency (PAE), and Intermediate Approach Efficiency (IAE). The analysis reveals that public sector banks (PSBs) generally exhibit higher overall efficiency compared to private sector banks (PVTs) and foreign banks (FBs), with notable variations in specific efficiency types. The study also highlights the impact of ownership structure on bank efficiency, with PSBs showing more consistent performance due to unified government policies. Additionally, the chapter provides detailed descriptive statistics and heatmaps for each bank type, offering a comprehensive overview of their efficiency scores. The results suggest that while some banks perform above average in certain efficiency types, there is still room for improvement, particularly in areas like production approach efficiency and cost revenue management. Overall, the study offers valuable insights into the efficiency dynamics of the Indian banking sector, with implications for policymakers, researchers, and industry leaders.
Abstract: The efficiency of business and income in the Indian banking sector over the 17-year period from 2005 to 2021 is examined in this article. Non-parametric approaches are used to estimate the efficiency of banks, including data envelopment analysis (DEA), criterion DEA, policy-based statistical methods, and appropriate data analysis models. Data for the study were derived from the Database on Indian Economy (DBIE), the Reserve Bank of India, and the Ministry of Statistics and Programme Implementation (MOSPI). We examined the income portfolios of Indian commercial banks with a view to improving their performance; banks must pay special attention to factors affecting the efficiency with which bank funds are utilized. We also analyzed the impact of COVID-19 (in its first and second waves) and conclude that it slowed the growth of the Indian banking sector in 2019; some of the banks in the study had been relatively efficient, with moderately high provisions for loans and credit expansion, resulting in lower profitability for them.
Convolutional Neural Networks (CNN): Handwritten Digit Recognition
Aluka Madhavi, Samala Nandini, Potlakayala Deepthi, Manchala Bhavani, Kasapaka RubenRaju, Bomma Reddy Sindhuja

This chapter delves into the world of Convolutional Neural Networks (CNNs) and their exceptional ability to recognize handwritten digits. It begins by highlighting the importance of this task in various applications, such as digitizing historical data and automating postal mailings. The chapter then explores the architecture of CNNs, including their convolutional, pooling, and fully connected layers, and how these components work together to extract and classify features from raw pixel data. Preprocessing techniques, such as normalization, noise reduction, and contrast enhancement, are discussed to improve the quality of input data and enhance model performance. The chapter also compares CNNs with other machine learning algorithms, demonstrating their superior accuracy and efficiency. Furthermore, it reviews related work in the field, providing insights into various methodologies and their findings. The chapter concludes with a look at future research avenues, including novel network architectures, improved noise resistance, and enhanced model interpretability. By reading this chapter, professionals will gain a comprehensive understanding of CNNs for handwritten digit recognition and practical insights into implementing and improving these systems.
This summary of the content was generated with the help of AI.
Abstract: Classical handwriting recognition algorithms relied heavily on historical data and individual traits, which makes training an OCR system challenging. Deep learning-based handwriting recognition research has advanced significantly in recent years. As the volume of handwritten data grows and computing power becomes more readily available, further research and higher recognition accuracy are required. Convolutional neural networks (CNNs) are the most effective technology for handwriting recognition problems because they automatically extract information from handwritten letters and words. The proposed research concentrates on convolutional neural network-based handwritten digit recognition. We examine stride size, receptive field, kernel size, padding, and dilation. Our second aim is to determine how SGD optimisation strategies improve handwritten digit recognition accuracy. Ensemble designs may improve network recognition accuracy, but ensemble topologies make testing more difficult and raise computational costs, so we employ a pure CNN architecture to achieve the same accuracy. The goal of this research is to develop a convolutional neural network (CNN) architecture that reduces operating complexity and cost while enhancing accuracy over ensemble techniques. -
Sentiment Analysis and Classification Using Convolutional Neural Networks
Bomma Reddy Sindhuja, Aluka Madhavi, Samala Nandini, Potlakayala Deepthi, Manchala Bhavani, Kasapaka RubenRaju
This chapter delves into the application of Convolutional Neural Networks (CNNs) for sentiment analysis and classification, a technique that has shown promising results in understanding public opinion, consumer feedback, and social media trends. The text covers the preprocessing steps essential for effective sentiment analysis, including text tokenization, word embeddings, and data augmentation. It also explores advanced techniques such as pre-trained word embeddings, attention mechanisms, and ensemble methods to enhance model performance. The chapter compares CNNs with other deep learning models like Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks, highlighting the strengths and weaknesses of each. Additionally, it discusses the potential for future improvements, such as integrating multi-modal information and leveraging transfer learning techniques. The text concludes that CNNs, originally developed for image recognition, have been successfully adapted to process textual data, capturing both local and global semantic information for accurate sentiment classification.AI Generated
Abstract: Sentiment analysis, a branch of natural language processing, aims to discern the emotional tone behind a piece of text. In recent years, Convolutional Neural Networks (CNNs) have emerged as powerful tools for sentiment analysis due to their ability to capture local dependencies in text data. This paper presents a comprehensive study on sentiment analysis using CNNs, focusing on classification tasks. We explore various architectures, preprocessing techniques, and hyperparameter tuning strategies to optimize performance. Our experiments demonstrate the efficacy of CNNs in accurately classifying sentiment in text data across different domains and languages. Additionally, we compare CNN-based approaches with traditional methods and discuss their advantages and limitations. Overall, this work contributes to the advancement of sentiment analysis techniques, particularly in leveraging deep learning models like CNNs for robust and accurate sentiment classification. -
Deep Learning Based MPPT Control for 200 Watts PV System in Electrical Vehicles
M. T. L. Gayatri, G. Laxmi Prasanna
This chapter delves into the integration of deep learning techniques to enhance the performance of photovoltaic (PV) systems in electric vehicles. It focuses on a deep learning-based Maximum Power Point Tracking (MPPT) control system designed for a 200-watt PV system. The chapter covers the architecture of the neural network, which includes input, hidden, and output layers, and utilizes backpropagation for training. It also discusses the simulation and results of the MPPT integration, highlighting its ability to optimize power generation and efficiency under dynamic environmental conditions. The chapter concludes with a detailed analysis of the system's performance, demonstrating its effectiveness in improving energy harvesting and overall system reliability.AI Generated
Abstract: The research aims to optimize and enhance the efficiency of the MPPT process using deep learning approaches. It focuses on the implementation of an MPPT system based on an Artificial Neural Network (ANN) for photovoltaics integrated into electric vehicles. The ANN algorithms track the accurate maximum power point under varying environmental conditions. The adoption of alternative energy sources in electric vehicles with advanced technology gives optimal performance and, in addition, reduces greenhouse emissions. Electric vehicles consume energy in watt-hours/mile or watt-hours/km, and their power consumption depends on the vehicle's size, weight, driving conditions, efficiency, and speed. The deep learning controller utilizes inputs such as voltage and current, with optional additional inputs of temperature and solar irradiance, to predict the optimal power output. Electric bicycles, electric scooters, electric ride-on toys, electric skateboards, and electric wheelchairs consume up to 200 watts. The study highlights that the adaptability of the deep-learning-based MPPT system in electric vehicles enhances energy utilization and system performance. The paper presents the deep learning training process, architecture, and simulation results of the 200-watt MPPT process using 2 inputs, 4 hidden layers, and 1 output. -
High Gain DC-DC Converter with Dual Input for Low Power Applications
B. Harshini, B. Naga Swetha, R. Geshma Kumari, K. Sravani, M. Naga Jyothi, O. Sobhana
This chapter delves into the design and implementation of a high-gain DC-DC converter with dual input, specifically tailored for low power applications. The focus areas include the problem identification of low voltage outputs from renewable energy sources and the need for efficient conversion to higher voltages. The proposed solution involves a novel converter design that integrates a two-stage booster with a switched inductor construction, resulting in a high voltage gain and minimal switching losses. The chapter also explores the use of a voltage multiplier circuit and a dual input two-switch boost converter, comparing their performance with the proposed high-gain DC-DC converter. The results demonstrate significant improvements in voltage gain and efficiency, making the proposed converter a promising solution for energy storage and renewable energy applications. The detailed analysis and comparative study provide a comprehensive understanding of the advantages and potential applications of the proposed design.AI Generated
Abstract: The power electronic converter plays a vital role in today's efficient use of renewable energy sources. In addition, a high-gain boost converter is suitable for promising applications such as electric vehicle technology, communications, fuel cells, and multiple DC UPS. In this regard, a cost-effective Dual High Gain DC-DC Converter (DHGDCC) with good efficiency is proposed. The proposed DHGDCC has a smaller number of switches, which is an advantage over the voltage multiplier topology. To illustrate, the DHGDCC is compared with a Voltage Multiplier Circuit (VMC) in the MATLAB environment, and the converter is realized with two and three MOSFET switches. The results show that the DHGDCC with a dual input voltage of 24 V achieves a gain of approximately 5, and the proposed topology ensures a high efficiency of up to 84% compared to the other implemented topologies, namely the voltage multiplier circuit and the two-switch boost converter. -
Nonlinear Companding Scheme for PAPR Reduction of OFDM Signals
T. Y. Melligeri, Rajkumar L. Biradar
This chapter delves into the challenges posed by high PAPR in OFDM signals and introduces a novel non-linear companding scheme to address this issue. The text explores the existing PAPR reduction methods, highlighting their limitations, and presents a detailed formulation of the proposed system. Simulation results demonstrate the superior performance of the new technique in terms of BER and PAPR, even without de-companding at the receiver. The chapter also discusses the practical implementation of the proposed method using a graphical user interface in MATLAB, showcasing its application in image and audio transmission. The findings suggest that the proposed companding system can deliver satisfactory BER performance without the need for de-companding, making it a promising solution for enhancing the efficiency and reliability of OFDM-based communication systems.AI Generated
Abstract: The Peak-to-Average Power Ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) systems can be decreased simply and effectively using a companding transform. This study puts forth a novel nonlinear companding approach for OFDM systems that lowers the PAPR and improves the BER. By carefully selecting the transform parameters, the proposed approach primarily focuses on maintaining a steady average power while compressing large signals, and it can provide strong BER performance without de-companding at the receiver. The simulation findings show that, in terms of BER performance, PAPR reduction, and spectral side lobes, the suggested solution outperforms the other competing schemes.
- Title
- Proceedings of the 7th International Conference on Communications and Cyber Physical Engineering
- Editors
-
Amit Kumar
Stefan Mozar
- Copyright Year
- 2026
- Publisher
- Springer Nature Singapore
- Electronic ISBN
- 978-981-95-0269-1
- Print ISBN
- 978-981-95-0268-4
- DOI
- https://doi.org/10.1007/978-981-95-0269-1
PDF files of this book have been created in accordance with the PDF/UA-1 standard to enhance accessibility, including screen reader support, described non-text content (images, graphs), bookmarks for easy navigation, keyboard-friendly links and forms, and searchable, selectable text. We recognize the importance of accessibility, and we welcome queries about accessibility for any of our products. If you have a question or an access need, please get in touch with us at accessibilitysupport@springernature.com.