
About this Book

This book presents selected papers from the International Conference on Paradigms of Computing, Communication and Data Sciences (PCCDS 2020), organized by the National Institute of Technology, Kurukshetra, India, during 1–3 May 2020. It discusses high-quality, cutting-edge research in advanced computing, communications and data science techniques. The book is a collection of recent research articles on computation algorithms, communication and data sciences, intertwined with one another for efficiency.

Table of Contents

Frontmatter

Computing

Frontmatter

Chapter 1. Real-Time Implementation of Enhanced Energy-Based Detection Technique

The paper describes the development of an efficient energy detection technique and its real-time implementation using the Wireless Open-Access Research Platform (WARP). An improved energy detection technique named the multiple threshold energy detector (MTED) is developed. The performance of the conventional energy detector is enhanced by comparing the measured values in local sensing against multiple threshold values. Energy detection has the advantage of detecting unknown signals. Alongside this advantage, the challenges faced during the hardware implementation of the proposed technique are also addressed. Cooperation-based sensing is implemented to increase detection accuracy, with cooperation achieved using a majority-rule-based decision fusion scheme. The performance of the implemented energy detection technique is analyzed with respect to the detection probabilities.

Vatsala Sharma, Sunil Joshi

Chapter 2. Pipeline Burst Detection and Its Localization Using Pressure Transient Analysis

Bursts causing leakages in water pipelines are the main source of water losses. Burst events in the water distribution system must be detected and localized to reduce both water losses and repair costs. Wireless sensor technology provides real-time monitoring of water pipeline infrastructure and related parameters such as pressure and flow rate. By analyzing these parameters, system abnormalities such as pipeline bursts can be identified. The presented study uses transient analysis of the pressure signal to detect burst events in the pipeline network. Cumulative sum and wavelet analyses are utilized for detection. Localization of burst events is performed by analyzing the difference in arrival times of the negative pressure wavefront at different pressure sensing points. A nodal matrix analysis is proposed to minimize the burst localization error. The proposed algorithm is applied to a PVC water pipeline test bed. The results show that it successfully detects pipeline bursts of 2–3 l/s with a localization error of only 3 m, which is better than earlier algorithms.

Aditya Gupta, K. D. Kulat
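The cumulative-sum step of such a detection pipeline can be sketched as follows; `baseline`, `drift`, and `threshold` are illustrative parameters, not the paper's calibrated values:

```python
def cusum_detect(pressure, baseline, drift=0.05, threshold=1.5):
    """Return the sample index where a sustained negative pressure
    deviation (a burst signature) exceeds the threshold, else None."""
    s = 0.0
    for i, p in enumerate(pressure):
        # accumulate only downward deviations beyond the drift allowance
        s = min(0.0, s + (p - baseline) + drift)
        if s < -threshold:
            return i
    return None
```

A steady signal never trips the detector, while a pressure drop is flagged within a few samples; the arrival-time difference of such flags at two sensors then gives the localization input.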

Chapter 3. Improving Machine Translation Using Parts-Of-Speech Tags and Dependency Parsing

Language is the most important tool for communication, which in turn is necessary for growth. With the diverse languages present in India, translation becomes an important tool. Current state-of-the-art models are mostly trained for foreign or well-known languages. These models require large training datasets, making them difficult to apply to Indian dialects for which sufficient literature is not available. Moreover, these models cannot capture the rich morphology of Indian languages, resulting in reduced accuracy. We aim to increase accuracy using syntactic features while also reducing the required dataset size. We used two syntactic features: parts-of-speech tags and dependency parsing. We achieve an increased BLEU score and generate target sentences of higher quality.

Apoorva Jha, Philemon Daniel

Chapter 4. Optimal Strategy for Obtaining Excellent Energy Storage Density in Polymer Nanocomposite Materials

Storage of electrical energy is a topic of major concern nowadays due to the increased demand for electrical energy in day-to-day life, so there is a need to develop high-performance energy storage devices. Enhancing the energy density of the dielectric capacitor is a major area of research. This paper deals with the aspects and factors that affect the energy density of the material used as the dielectric in dielectric capacitors. Keeping environmental safety in mind, a multilayer structure of biodegradable polymer nanocomposite materials is concluded to be the best method to enhance the energy density of the dielectric material.

Daljeet Kaur, Tripti Sharma, Charu Madhu

Chapter 5. A Study of Aging-Related Bugs Prediction in Software System

Software aging refers to progressively degraded performance and an increased failure rate in long-running software systems. Typically, aging-related bugs (ARBs), activated by the runtime accumulation of error conditions, cause the software aging problem. ARBs are difficult to detect during software testing; therefore, early identification of these bugs can help in building robust software systems. However, one major issue in ARB prediction is the skewness of the dataset (the class imbalance problem), which may bias the learning of classification algorithms and thus lead to a higher misclassification rate. This paper investigates the effect of instance filtering (resampling) and standardization techniques combined with various classification algorithms when predicting aging-related bugs in software systems. The experimental study uses four classification algorithms, namely logistic regression, support vector classifiers (SVC), random forest, and an artificial neural network (ANN) with a softmax function, on three datasets available in an open-source software repository. The results show improved performance of the classification algorithms with a reduced misclassification rate.

Satyendra Singh Chouhan, Santosh Singh Rathore, Ritesh Choudhary
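The instance-filtering (resampling) idea the study evaluates can be illustrated with simple random oversampling of the minority class; this is a generic sketch, not the authors' exact pipeline:

```python
import random

def random_oversample(X, y, seed=0):
    """Balance a binary dataset by duplicating minority-class rows —
    a basic resampling step applied before training a classifier."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    # draw minority indices with replacement until the classes balance
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    idx = list(range(len(y))) + extra
    return [X[i] for i in idx], [y[i] for i in idx]
```

After resampling, both classes contribute equally to the loss, which is what mitigates the bias toward the majority (non-ARB) class.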

Chapter 6. Mutual Authentication of IoT Devices Using Kronecker Product on Secure Vault

The Internet of things enables various Internet-enabled wireless devices to connect and communicate with each other, making it possible to collect and store information on the cloud or on any server. This paper presents a novel mutual authentication technique that uses a secure vault to store and generate session keys for a mutual-authentication-based scheme. The secure vault is a matrix of integer values used to create the session keys, which in turn serve as authentication keys. Initially, the secure vault matrix is stored on the server and part of it is stored on the IoT devices; the vault's contents then change over time to provide a secure mechanism against side-channel as well as dictionary attacks. Our algorithm uses the Kronecker product to establish the keys and helps reduce the storage and communication costs for resource-constrained devices. We reduce the storage cost of the authentication scheme to O(√N), and a device incurs no communication cost during the computation phase.

Shubham Agrawal, Priyanka Ahlawat
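The Kronecker product at the heart of the key-establishment step is a standard matrix operation; a pure-Python sketch (illustrative only, not the paper's vault protocol):

```python
def kronecker(A, B):
    """Kronecker product of two integer matrices: each entry of A is
    scaled by the whole of B, producing an (ra*rb) x (ca*cb) matrix."""
    ra, ca = len(A), len(A[0])
    rb, cb = len(B), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]
```

Because a small stored fragment expands into a much larger matrix, a device can derive long session-key material from O(√N) stored values, which is the storage saving the abstract refers to.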

Chapter 7. Secure and Decentralized Crowdfunding Mechanism Based on Blockchain Technology

Crowdfunding is a mechanism of raising capital from investors for funding new business ventures. To facilitate crowdfunding, different crowdfunding platforms are available, such as Kickstarter and Indiegogo. Crowdfunding platforms provide a convenient way of raising funds for startups from investors. The major downside of conventional crowdfunding platforms is that they require users to pay a percentage of the funds raised to the platform as a fee (e.g., Kickstarter charges 5% of the total funds raised). In addition to the platform fee, users must pay transaction fees to payment processors. In this paper, we propose a secure and decentralized crowdfunding mechanism based on blockchain technology. The proposed mechanism eliminates the need for conventional crowdfunding platforms that charge entrepreneurs platform fees. It allows entrepreneurs to make use of the total funds raised from investors and provides an immutable ledger of transactions between investors and entrepreneurs.

Swati Kumari, Keyur Parmar
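The escrow logic such a contract would encode can be mimicked in a toy Python class; the class name, the all-or-nothing rule, and the zero-fee release are illustrative assumptions of this sketch, not the authors' contract code:

```python
class Campaign:
    """Toy escrow: pledges are held, and the full amount (no platform
    fee deducted) is released only if the funding goal is met."""
    def __init__(self, goal):
        self.goal = goal
        self.pledges = {}

    def pledge(self, who, amount):
        self.pledges[who] = self.pledges.get(who, 0) + amount

    def finalize(self):
        # release everything on success; 0 signals refunds on failure
        total = sum(self.pledges.values())
        return total if total >= self.goal else 0
```

On a blockchain, this logic runs as a smart contract, so no intermediary can withhold a cut of the funds and the pledge history is immutable.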

Chapter 8. Efficient Use of Randomisation Algorithms for Probability Prediction in Baccarat Using: Monte Carlo and Las Vegas Method

Randomised algorithms use a degree of randomness in their logic. Such algorithms provide a solution which may not always be optimal, but is often produced faster than a brute-force process. This paper explores the two classes of randomised algorithms: Las Vegas and Monte Carlo. A Las Vegas algorithm always produces the correct result, but its running time is random. A Monte Carlo algorithm has a deterministic running time and produces an answer that is correct with probability ≥ 1/3. In this paper, we develop algorithms that predict the winning probabilities of the betting options in the casino table card game of Baccarat. Both classes of algorithms have been implemented in two ways each, varying in space and time complexity. We also propose and implement a novel approach to reduce the time complexity of a typical Las Vegas algorithm through multithreading. A comparative study of the algorithms has also been done.

Avani Jindal, Janhvi Joshi, Nikhil Sajwan, Naman Adlakha, Sandeep Pratap Singh
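A Monte Carlo estimator in this spirit can be sketched for a simplified round: two cards per hand, no third-card rules, and an infinite shoe are all simplifying assumptions of this sketch, so the estimates differ from true Baccarat odds:

```python
import random

def estimate_baccarat_probs(trials=50_000, seed=1):
    """Monte Carlo estimate of player/banker/tie probabilities for a
    simplified two-card Baccarat round drawn from an infinite shoe."""
    rng = random.Random(seed)
    # ranks 1..13; 10/J/Q/K count as 0, hand value is the sum mod 10
    card = lambda: min(rng.randint(1, 13), 10) % 10
    wins = {"player": 0, "banker": 0, "tie": 0}
    for _ in range(trials):
        p = (card() + card()) % 10
        b = (card() + card()) % 10
        wins["player" if p > b else "banker" if b > p else "tie"] += 1
    return {k: v / trials for k, v in wins.items()}
```

More trials tighten the estimate but lengthen the (deterministic) running time, which is exactly the Monte Carlo trade-off the paper contrasts with the Las Vegas variant.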

Chapter 9. Comparative Analysis of Educational Job Performance Parameters for Organizational Success: A Review

Job performance in educational institutions is a major parameter in deciding their success. Numerous parameters, such as teaching methods, family background, student interest, and student–teacher interaction, support the decision-making process in organizational success. Job performance is mainly related to the performance of students and educationists; thus, there is a need to monitor the parameters associated with both. This paper provides a comparative analysis of the tools, techniques, parameters, and algorithms, along with the different challenges, associated with monitoring the performance of students and educationists. Various educational organizations apply data mining tools to analyze this performance, and many versatile data mining algorithms are available for the purpose, so it becomes important to select the appropriate algorithm for each situation. The literature has so far focused on student performance when analyzing organizational success. In this paper, both entities, i.e., student and educationist, are considered. The work highlights the possible benefits to students, educationists, and management.

Sapna Arora, Manisha Agarwal, Shweta Mongia

Chapter 10. Digital Anthropometry for Health Screening from an Image Using FETTLE App

This paper presents a novel approach to quantifying human anthropometric information, estimating attributes such as height, weight, forearm, wrist, waist, mid-upper arm circumference (MUAC), knee, feet, head length, left foot, right ear, cheek width, middle finger, and the cubit (from the elbow to the tip of the middle finger) from an image using augmented reality and transfer learning. FETTLE also demonstrates the practical use of body measurements to assess nutritional status, an immediately applicable technique for assessing children's development patterns during the early years of life. The FETTLE app can be used as a screening device to identify individuals at risk of undernutrition, followed by a more elaborate investigation using other techniques. FETTLE also aims to act as a forensic anthropometry tool for justice systems to use in conjunction with other forensic sciences, aiding criminal justice systems in finding the truth. The need for such anthropometric databases is becoming increasingly important and grows in parallel with the goal of achieving efficient system designs.

Roselin Preethi, J. Chandra Priya

Chapter 11. Analysis and Redesign of Digital Circuits to Support Green Computing Through Approximation

The massive power consumption of computing devices places a heavy burden on the power grid, and from production to disposal these devices pose several disadvantages to the environment. Green computing refers to the research and development of computing devices that do not adversely impact the environment. This paper contributes an analysis of digital circuits and proposes a technique to redesign them with the intent of supporting green computing. We use a gate-level netlist to carry out the study. The objective is to remove components (gates) from the primary circuit and propose a modified circuit that consumes less power as well as area. In the electronics industry, such modified circuits are referred to as approximate circuits. The only drawback of this kind of circuit is that it does not produce the exact result; rather, it produces an approximate result (also known as a good-enough result). Because of the error-resilient nature of many applications, these approximate results are quite useful. We propose a methodology to redesign the circuit and present a case study on an ISCAS'85 benchmark circuit to validate it. The experimental results show that our technique can reduce energy consumption by 25–40%.

Sisir Kumar Jena, Saurabh Kumar Srivastava, Arshad Husain
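The gate-removal idea can be mimicked arithmetically: an approximate adder that ignores the lowest bits, as if the gates computing them were removed. This is an illustrative stand-in for the paper's netlist-level technique, not its actual method:

```python
def approx_add(a, b, cut=2):
    """Approximate adder: drop the lowest `cut` bits of each operand
    (their gates are 'removed'), trading exactness for power/area.
    The absolute error is bounded by 2 * (2**cut - 1)."""
    mask = ~((1 << cut) - 1)
    return (a & mask) + (b & mask)
```

The bounded, predictable error is what makes such circuits acceptable in error-resilient applications like image processing, where the least-significant bits barely affect the output.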

Chapter 12. Similarity-Based Data-Fusion Schemes for Missing Data Imputation in Univariate Time Series Data

Handling missing values in time series data plays a key role in prediction and forecasting, as complete and clean historical data helps in achieving higher accuracy. Numerous research works address multivariate time series imputation, but imputation in univariate time series data is rarely considered due to the unavailability of other correlated variables (attributes). Moreover, existing algorithms do not perform well when most of the tuples are clustered, owing to a lack of neighbors during imputation. This paper proposes an iterative imputation algorithm that clusters univariate time series data, considering the trend, seasonality, cyclical and residual features of the data. The proposed method uses a similarity-based nearest neighbor imputation approach on each cluster to fill missing values. It is evaluated on publicly available datasets from the Data Market and UCI repositories by randomly simulating missing patterns throughout the data series. The outcome is evaluated with metrics such as MSE, MAE and RMSE, and is also validated through prediction accuracy and the Concordance Correlation Coefficient (CCC) statistical test. Experimental results indicate that the proposed imputation method produces values closer to the original time series, resulting in lower error rates than other existing imputation methods.

S. Nickolas, K. Shobha
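A toy version of similarity-based nearest-neighbour imputation is sketched below; the similarity heuristic used here (closeness to the last observed value on the left) is an assumption of this sketch, not the paper's cluster-and-feature-based ranking:

```python
def knn_impute(series, k=2):
    """Fill each None with the mean of the k known values most similar
    to the gap's last observed left neighbour (crude similarity rank)."""
    out = list(series)
    known = [i for i, v in enumerate(series) if v is not None]
    for i, v in enumerate(series):
        if v is not None:
            continue
        # the last observed value to the left acts as the similarity anchor
        left = next((out[j] for j in range(i - 1, -1, -1)
                     if out[j] is not None), series[known[0]])
        ranked = sorted(known, key=lambda j: abs(series[j] - left))
        out[i] = sum(series[j] for j in ranked[:k]) / k
    return out
```

The paper's contribution is essentially to make the neighbour pool meaningful by first clustering on trend/seasonality features, so that "similar" points come from comparable regimes of the series.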

Chapter 13. Computational Study on Electronic Properties of Pd and Ni Doped Graphene

To study the changes in properties on doping, a DFT study has been carried out for palladium-doped and nickel-doped graphene surfaces. The binding energies of these surfaces have been calculated. Each dopant modifies the bonding arrangement inside the graphene sheet, resulting in property changes. The electronic properties show significant changes after substitution of the dopant atom. The charge transferred has been quantified using Mulliken charge analysis and natural bond orbital (NBO) analysis. The properties were found to improve after doping.

Mehak Singla, Neena Jaggi

Chapter 14. Design of an Automatic Reader for the Visually Impaired Using Raspberry Pi

Reading is one of the most challenging problems for visually impaired people and makes life more troublesome for them. Though braille technologies exist for blind persons to read text, they are time-consuming, and some newer reading technologies have also appeared. Here we propose a prototype focused on the development of an automatic image-to-speech device for visually impaired persons to help with daily tasks. The prototype is built using a Raspberry Pi 4 Model B. The main purpose of this automatic reader is to allow a blind user to "read" text (a paper or any book). Optical character recognition (OCR), text-to-speech (TTS) synthesis and some image processing algorithms are integrated on the Raspberry Pi 4 (a credit-card-sized mini computer). Tesseract OCR and the eSpeak text-to-speech synthesizer are used. The model enables the user to take a picture with a Raspberry Pi camera and hear the text through any speaker, with the whole process controlled via push buttons. The proposed reader can read texts written in English, Bengali and Hindi.

Nabendu Bhui, Dusayanta Prasad, Avishek Sinha, Pratyay Kuila
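The capture→OCR→speak pipeline can be sketched as glue code with the OCR and TTS engines injected as callables, so that Tesseract and eSpeak (or test stubs) can fill the roles on the Pi; the function below is hypothetical, not the authors' implementation:

```python
def read_aloud(image, ocr, tts, langs="eng+ben+hin"):
    """Run OCR on the captured image and speak the result.
    `ocr(image, langs)` returns recognized text; `tts(text)` speaks it.
    The language string mirrors Tesseract's '+'-joined language codes."""
    text = ocr(image, langs)
    if not text.strip():
        return "no text found"
    tts(text)
    return text
```

In the real device, `ocr` would wrap a Tesseract call and `tts` an eSpeak invocation, with a push button triggering the whole chain after the camera capture.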

Chapter 15. Power Maximization Under Partial Shading Conditions Using Advanced Sudoku Configuration

Shading affects solar photovoltaic panel output. Partial shading occurs due to dust, leaves, clouds, branches of trees, and buildings; it is the main reason behind hotspot formation, mismatch, and power loss. Due to variations in irradiance and temperature, photovoltaic (PV) efficiency and performance decrease. The pattern of shading is the main cause of power loss during partial shading. To reduce the impact of shading, a reconfiguration is proposed and compared with the traditional total cross-tied (TCT) configuration. In this paper, an Advanced Sudoku reconfiguration pattern for a 9 × 9 PV array is proposed. Five shading patterns are assessed to compare both reconfiguration techniques. The PV characteristics are evaluated in MATLAB/Simulink, and the results show that the proposed approach yields better results than existing ones.

Gunjan Bharti, Venkata Madhava Ram Tatabhatla, Tirupathiraju Kanumuri

Chapter 16. Investigations on Performance Indices Based Controller Design for AVR System Using HHO Algorithm

In this paper, various standard performance indices are minimized for a Proportional Integral Derivative (PID) controller-based Automatic Voltage Regulator (AVR) system using the Harris Hawks Optimization (HHO) algorithm. This paper also proposes the best performance index for the HHO-PID-based AVR system. Efficient minimization of performance indices yields better AVR system controller parameters. Using the obtained controller parameters, we found step responses, compared the values of $t_r$, $t_s$, $M_p$ and $E_{ss}$, and then proposed the most efficiently minimized performance index. The HHO-PID-based AVR system is also compared with a GA-PID-based AVR system to demonstrate the superiority of the HHO algorithm. The HHO-PID-based AVR system with the proposed best performance index showed great improvement in step response.

R. Puneeth Reddy, J. Ravi Kumar

Chapter 17. Ethereum 2.0 Blockchain in Healthcare and Healthcare Based Internet-of-Things Devices

Healthcare encompasses multiple domains of engineering and technology for its core functioning. Electronic health records are used to store the medical history of patients and make it readily available and consumable for the user. This data is also prone to cyber-attacks and malicious activities. A proof-of-stake implementation of blockchain is proposed to design a decentralized healthcare management system that provides interoperability between hospitals while being safe and secure through 256-bit encryption. This provides sufficient decentralization with faster, secure data handling and ready access to information for all concerned users in a private blockchain network for a healthcare system.

Vaibhav Sagar, Praveen Kaushik

Chapter 18. IoT-Based Solution to Frequent Tripping of Main Blower of Blast Furnace Through Vibration Analysis

The blower is a very critical machine in the iron-making process, as it provides the cold blast at the desired flow and pressure for the hot stoves of a blast furnace. Therefore, all blowers in any plant should be kept under a predictive schedule of vibration monitoring, and routine check-ups of blast furnace blowers for preventive maintenance should be carried out at regular intervals. If a check-up reveals a sudden increase in vibration level or abnormal sounds from the motor bearings, necessary maintenance can be carried out to avoid motor failure. An IoT-based fault-analysis sensor system can be installed near the motor for vibration analysis, sending timely alert messages to the maintenance team before a complete shutdown of the system. The causes of such faults are common yet complex; timely detection and transmission of alerts through an IoT-based system help in taking corrective actions immediately. After deployment, the blower did not trip again and the reliability of the system improved.

Kshitij Shinghal, Rajul Misra, Amit Saxena

Chapter 19. A Survey on Hybrid Models Used for Hydrological Time-Series Forecasting

Accurate forecasting of any time series has always been a top goal, especially in the field of hydrology, as it provides a scientific basis for water resource management, flood control and weather forecasting, and plays a chief role in financial and sustainable development. However, forecasting hydrological time series with standalone conventional statistical models remains difficult because of the time-varying and non-linear attributes of the series. In light of this, researchers have applied machine learning methods, especially neural networks (NNs), alongside traditional forecasting methods, giving rise to a hybrid modeling approach in which the former mines the non-linear relationships in the time series and the latter handles the linear part, each complementing the other. In this paper, we present a review of such hybrid models, which incorporate the advantages of both conventional and machine learning methods for better real-time forecasting of hydrological time series.

Shivashish Thakur, Manish Pandey
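The linear-plus-non-linear decomposition behind these hybrids can be shown in toy form; the naive drift stands in for the linear (statistical) component and the residual correction for the non-linear (learned) component, both illustrative stand-ins for the ARIMA/NN pairings the survey covers:

```python
def hybrid_forecast(series, window=3):
    """One-step hybrid forecast: a linear part (naive drift) plus a
    non-linear correction (mean of recent residuals of the linear part)."""
    drift = (series[-1] - series[0]) / (len(series) - 1)
    linear = series[-1] + drift
    # residuals of the linear model; a real hybrid feeds these to an NN
    residuals = [series[i + 1] - (series[i] + drift)
                 for i in range(len(series) - 1)]
    correction = sum(residuals[-window:]) / min(window, len(residuals))
    return linear + correction
```

The point of the decomposition is that each component models only what it is good at: a perfectly linear series leaves zero residuals, so the correction term vanishes.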

Chapter 20. Does Single-Session, High-Frequency Binaural Beats Effect Executive Functioning in Healthy Adults? An ERP Study

Executive functioning encompasses core cognitive qualities like taking the appropriate time to think before acting on a decision, efficiently dealing with novel and unanticipated challenges, resisting temptations and staying focused. Response inhibition, conflict control, working memory, and cognitive flexibility are some of the sub-cognitive processes that have a determining effect on the executive functioning of an individual. There has been a considerable amount of research on different methods/interventions through which executive functioning can be enhanced over time, but evidence on the effect of high-frequency binaural beats on executive functioning is lacking and much needed in the literature. The objective of the present study was to evaluate the effect of single-session, high-frequency binaural beats on executive functioning in healthy adults. Recruited subjects were divided into two groups: experimental and control. Participants in the experimental group were subjected to beta-frequency binaural beats (25 Hz) for 20 min. Both groups then performed a computerized version of the Stroop task. Behavioral measures (accuracy, reaction time) and event-related potential (ERP) features like P300 and N400 were extracted and considered for analysis. Data analysis revealed that participants in the experimental group recorded higher accuracy and slower reaction times in the Stroop task compared to the control group. The behavioral results were also complemented by higher averaged P300 amplitudes and lower N400 amplitudes recorded for the experimental group compared to the control group. We intend to use these results as substantial evidence that high-frequency binaural beats are an effective means of enhancing executive functioning in healthy adults, to be confirmed in sham-controlled, longitudinal studies.

Ritika Mahajan, Ronnie V. Daniel, Akash K. Rao, Vishal Pandey, Rishi Pal Chauhan, Sushil Chandra

Chapter 21. Optimized Data Hiding for the Image Steganography Using HVS Characteristics

Steganography algorithms are used to secure sensitive data on the Internet. They hide the secret data in a cover image, making it imperceptible to an attacker. The least significant bit (LSB) method is the most preferred data hiding technique in steganography. In this technique, the least significant bit of each cover image pixel is replaced with a data bit. In the literature, data hiding is performed without considering human visual system (HVS) characteristics, which degrades the visual quality of the image. In this paper, optimized data hiding is performed while considering HVS characteristics. A colour image contains three planes: red, green and blue. The green plane is the most sensitive to the human eye, and blue the least; thus, in the proposed technique, the green plane is used as a reference plane for data hiding in the red and blue planes. In the optimized technique, the secret data bits are matched with the cover pixel bits. If the bits match, the corresponding optimal index is determined; otherwise, data hiding is done in the LSB bits of the cover pixel. After that, the optimal indexes are hidden in the cover image using a 2-bit LSB technique. The proposed technique also exploits smooth and edge region characteristics of the image before data hiding: there is high correlation between consecutive pixels in smooth regions and minimal correlation on edges. Thus, we match the secret data bits with the cover pixel bits in smooth regions and hide the optimal indexes on edges. Experiments were performed on standard images from the USC-SIPI image database. The results show that the proposed technique provides better visual quality than existing techniques.

Sahil Gupta, Naresh Kumar Garg
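The baseline LSB scheme the paper optimizes is easy to state in code; the sketch below works on a flat list of grayscale byte values and is illustrative only:

```python
def lsb_embed(pixels, bits):
    """Replace the least-significant bit of each cover byte with a
    message bit; changing only bit 0 keeps the change imperceptible."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def lsb_extract(pixels, n):
    """Read back the first n embedded bits from the stego bytes."""
    return [p & 1 for p in pixels[:n]]
```

The paper's optimization layers on top of this: where the cover bits already match the secret bits (common in smooth regions), no substitution is needed and only a short index is stored instead.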

Chapter 22. Impact of Imperfect CSI on the Performance of Inhomogeneous Underwater VLC System

The inhomogeneous environment due to variations in temperature and salinity causes turbulence in the ocean, and the strength of the turbulence changes with ocean depth. In this paper, we study the performance of an underwater visible light communication (UWVLC) system at different depths. We derive novel closed-form analytical expressions for the outage probability and average symbol error probability under the practical case of imperfect channel state information (CSI). We conduct a numerical investigation to evaluate the performance of the presented UWVLC system for pulse amplitude modulation and rectangular quadrature amplitude modulation schemes. It is shown that the system performance degrades with imperfection in CSI and with increasing depth. Numerical results are compared with computer simulations to validate the accuracy of the theoretical analysis.

Rachna Sharma, Yogesh N. Trivedi

Chapter 23. Pre-configured (p)-Cycle Protection for Non-hamiltonian Networks

The protection of wavelength division multiplexed (WDM) networks can be done very efficiently by pre-configured protection cycles (p-cycles). The individual wavelengths carry the working traffic and get protection via p-cycles, which requires 100% wavelength-conversion capability, and hence wavelength converters, at every node. We investigate p-cycle protection for non-Hamiltonian networks without the need for wavelength converters. It is found that such networks can be protected by multiple p-cycles on separate fibers. In case of failure, switching can be done on whole fibers without concern for individual wavelengths. With our approach, the costlier wavelength converters are replaced by additional fibers, and the cost of this fiber is much less than the cost of the converters. Moreover, converters degrade signal performance and add complexity to the network, both of which are fully avoided with our approach.

Vidhi Gupta, Rachna Asthana, Yatindra Nath Singh

Chapter 24. A Novel Approach to Multi-authority Attribute-Based Encryption Using Quadratic Residues with Tree Access Policy

Attribute-based encryption has been a trend in the security field due to its higher efficiency and its compatibility with cloud computing, mobile ad hoc systems and recent computing advancements like fog computing, social networking, IoT, etc. Attribute-based encryption (ABE) can be defined as a type of public key encryption in which keys are directly linked to the attributes of the user. These attributes can be anything: the place of access, the user's email address, the user's position, or the type of subscription the user holds. This limits user access, but it reduces excess overhead and also secures the system more efficiently. Sahai and Waters highlighted the vulnerabilities of single-authority ABE schemes and proposed a multi-authority ABE scheme; since then, several useful techniques have been given to cope with the limitations of existing schemes. There are three major constructions for ABE, i.e., bilinear pairings, quadratic residues and lattices, each with certain advantages and limitations. One such avenue for implementing ABE is quadratic residues (QR). The proposed scheme extends this implementation to a multi-authority attribute-based encryption scheme. A tree-based access approach is utilized to diminish the reliance on the central authority. Thus, the proposed scheme combines three major models of cryptography, namely multi-authority, tree-based policy and quadratic residues, to reduce the limitations of existing models.

Anshita Gupta, Abhimanyu Kumar
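The quadratic-residue primitive such schemes build on is Euler's criterion; a minimal membership check for an odd prime modulus (illustrative of the primitive, not of the paper's full construction):

```python
def is_quadratic_residue(a, p):
    """Euler's criterion: for an odd prime p and gcd(a, p) == 1,
    a is a quadratic residue mod p iff a^((p-1)/2) ≡ 1 (mod p)."""
    return pow(a, (p - 1) // 2, p) == 1
```

QR-based cryptosystems rely on this test being easy when the modulus is prime but hard for a composite modulus of unknown factorization, which is where the security comes from.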

Chapter 25. An Improvement in Dense Field Copy-Move Image Forgery Detection

Advancements in image-editing software, together with enhanced multimedia storage and transmission systems, have increased the number of counterfeit images being shared. These can have serious consequences, so it is important to detect tampering. Copy-move image forgery, a simple yet quite elusive method, is one of the most frequently used forgery techniques. In this type of forgery, a small region of the source image is copied and pasted elsewhere within the same image. Many algorithms have been proposed to detect the copy-move attack, but they are computationally complex, making them unfit for real-time systems; detection accuracy also still needs to improve before such systems can be relied on. In this paper, an effort has been made to enhance the accuracy of five existing dense-field detection techniques through experimental analysis of the minimum duplication size and the linear fitting error used in the post-processing stage. Following this analysis, a combination of the morphological operations dilation and closing and the sharpening technique unsharp masking is proposed for the pre-processing stage to improve results. The results have been tested and compared for accuracy, precision, recall, and F1 score on the CoMoFoD dataset for each technique.

Harsimran Kaur, Sunil Agrawal, Anaahat Dhindsa

Chapter 26. Scheduling-Based Energy-Efficient Water Quality Monitoring System for Aquaculture

Aquaculture is one of the fastest-growing food sectors, with great significance for the economy. Production is directly dependent on the quality of the water used, so monitoring water quality parameters helps to reduce costs as well as to improve yield. Efficient energy management is crucial for a water quality monitoring system for aquaculture, especially since long-term monitoring is required in remote environments. The monitoring system can be sustained by harvesting renewable energy from sources such as solar and wind; still, without efficient energy management, it is not possible to run the system for a long time. This work aims to improve the energy efficiency of the water quality monitoring system for aquaculture by implementing sleep and wake-up scheduling. We present a simple sleep and wake-up scheduling technique for the sensor nodes in the water quality monitoring system. The developed system comprises hardware as well as software parts that collect, store, and communicate the data. The experimental results show that energy can be saved and battery life prolonged using this technique.
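A back-of-the-envelope sketch of why sleep/wake scheduling prolongs battery life; the currents, capacity, and duty cycle below are illustrative assumptions, not figures from the chapter:

```python
def avg_current_mA(active_mA, sleep_mA, duty_cycle):
    """Average draw for a node awake a fraction `duty_cycle` of the time."""
    return active_mA * duty_cycle + sleep_mA * (1.0 - duty_cycle)

def battery_life_hours(capacity_mAh, active_mA, sleep_mA, duty_cycle):
    return capacity_mAh / avg_current_mA(active_mA, sleep_mA, duty_cycle)

# Assumed example: 2000 mAh battery, 40 mA active, 0.05 mA asleep.
always_on = battery_life_hours(2000, 40, 0.05, 1.0)   # 50 h
scheduled = battery_life_hours(2000, 40, 0.05, 0.05)  # roughly 20x longer
```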

Rasheed Abdul Haq, V. P. Harigovindan

Chapter 27. A Study of Code Clone Detection Techniques in Software Systems

Code clones are defined as identical or similar program structures created by copy-pasting or modifying existing code. They are introduced into software systems due to a lack of programming skills, time restrictions, and various other constraints on the system as well as on the programming languages. Detecting code clones has many benefits in software development, such as decreased maintenance cost, improved software quality, and easier accommodation of new changes. In this paper, we explore different code clone detection techniques and examine their pros and cons. The study assists in understanding the clone detection process and in choosing appropriate techniques for detecting the types of clones whose detection can help in the refactoring and maintenance processes.
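A minimal sketch of one family of techniques the survey covers, token-based detection: normalising identifiers and literals makes Type-2 clones (copies with renamed variables) compare equal. Real tools add suffix trees or fingerprinting on top; this shows only the core idea:

```python
import tokenize, io

def normalize(code: str) -> tuple:
    """Token stream with identifiers and literals abstracted away."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type == tokenize.NAME:
            out.append("ID")            # abstract away identifier names
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            out.append("LIT")           # abstract away literal values
        elif tok.type == tokenize.OP:
            out.append(tok.string)      # operators must match exactly
    return tuple(out)

a = "total = price * qty + 1"
b = "s = x * y + 42"    # Type-2 clone of `a`: renamed, different literal
c = "s = x * y - 42"    # not a clone: different operator
```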

Utkarsh Singh, Kuldeep Kumar, Deepak Kumar Gupta

Chapter 28. An Improved Approach to Secure Digital Audio Using Hybrid Decomposition Technique

Unlicensed access to digital audio is very frequent today, and copyright and ownership issues are common. Obsolete schemes are failing to preserve the ownership of digital audio. A watermark validates the true ownership of an audio file, but embedding data into an audio signal is not always tolerated without perceptible distortion. In this work, a robust method is proposed based on a hybrid decomposition technique in which the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) are applied to perform watermarking in audio signals. A watermark image of size 16 by 16 pixels is embedded into digital audio with a sampling rate of 44.1 kHz. The watermark image first undergoes a cyclic encoding process in which the watermark bits are encoded using redundant bits; the bits are then further scrambled using Arnold's cat map. After this robust encryption, the encrypted watermark bits are embedded into the host audio using the hybrid decomposition. The reverse process is applied to extract the watermark bits. The watermarked audio is tested under various signal processing attacks, and the quality of the extracted watermark image is checked using standard parameters. The quality assessment of the watermark image is found to be satisfactory, which establishes the robustness of the scheme.
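A sketch of the Arnold's cat map scrambling step mentioned above, applied to the coordinates of an N×N watermark; the grid size and round count are illustrative, not the chapter's parameters:

```python
def cat_map(x, y, n):
    """One forward iteration of Arnold's cat map on an n x n grid."""
    return (x + y) % n, (x + 2 * y) % n

def inverse_cat_map(x, y, n):
    """Exact inverse (the map's matrix [[1,1],[1,2]] has determinant 1)."""
    return (2 * x - y) % n, (y - x) % n

def scramble(pixels, n, rounds):
    """Apply the cat map `rounds` times to a {(x, y): value} image."""
    out = dict(pixels)
    for _ in range(rounds):
        out = {cat_map(x, y, n): v for (x, y), v in out.items()}
    return out

n = 16
img = {(x, y): x * n + y for x in range(n) for y in range(n)}
shuffled = scramble(img, n, rounds=5)
```

Because the map is a bijection on the grid, applying the inverse map the same number of times restores the original watermark exactly.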

Ankit Kumar, Shyam Singh Rajput, Vrijendra Singh

Chapter 29. Study on the Negative Transconductance in a GaN/AlGaN-Based HEMT

The presented work reinvestigates the origin of the negative transconductance at higher gate voltages in a GaN/AlGaN-based high-electron-mobility transistor (HEMT). The analysis is done using TCAD simulations, where the simulated GaN/AlGaN-based HEMT is calibrated to mimic the transfer characteristics of an experimentally verified device. As these HEMTs are also highly susceptible to negative transconductance at lower gate voltages, careful considerations are taken to limit the possibility of obtaining negative transconductance at lower gate voltages. The study makes use of the 3D conduction band behavior of the AlGaN and GaN layers, the electron concentration, and the electric field behavior of the GaN layer to justify the existence of this negative transconductance and, alongside, to validate the legitimacy of the pre-existing theories on the subject.

Sujit Kumar Singh, Awnish Kumar Tripathi, Gaurav Saini

Chapter 30. Hybrid Anti-phishing Approach for Detecting Phishing Webpage Hosted on Hijacked Server and Zero-Day Phishing Webpage

Phishing is an information security issue of stealing confidential information (login credentials, credit card information, and identification information such as SSNs) using a fake website. In the current scenario, search engine-based methods and machine learning-based methods are proposed for phishing detection. Search engine-based methods are highly efficient because of their fast response compared to machine learning approaches. We propose a hybrid phishing detection technique that combines a search engine-based approach with a hyperlink similarity-based method for detecting phishing webpages hosted on legitimate hijacked servers as well as zero-day phishing webpages. We use the title, domain, and copyright notice of the webpage to create an efficient search query for the search engine-based approach. The presence of a brand name in the title or copyright notice, together with the title language, decides whether the similarity phase is invoked. Our proposed method is practical and adaptable for detecting phishing webpages hosted on legitimate hijacked servers and zero-day phishing webpages.
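A hedged sketch of the two ingredients described above: composing a search query from page title, domain, and copyright text, and a hyperlink similarity score (here Jaccard overlap of link domains, one plausible choice; the chapter's exact query format and similarity measure may differ, and the brand extraction below is an assumption):

```python
from urllib.parse import urlparse

def build_query(title: str, domain: str, copyright_text: str) -> str:
    """Naive query: title + domain + last word of the copyright notice
    (stand-in for the brand name; real extraction is more careful)."""
    brand = copyright_text.split()[-1] if copyright_text else ""
    return " ".join(t for t in (title, domain, brand) if t)

def link_domains(hyperlinks):
    return {urlparse(u).netloc for u in hyperlinks}

def hyperlink_similarity(links_a, links_b) -> float:
    """Jaccard similarity over the sets of hyperlink domains."""
    a, b = link_domains(links_a), link_domains(links_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

suspect = ["https://paypal.com/help", "https://evil.example/login"]
legit   = ["https://paypal.com/help", "https://paypal.com/about"]
```

A low similarity between a page's hyperlinks and those of the search-engine-returned legitimate page is then a phishing indicator.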

Ankush Gupta, Santosh Kumar

Chapter 31. FFT-Based Zero-Bit Watermarking for Facial Recognition and Its Security

Securing a system through biometric features is one of the widely accepted trends that researchers recommend. Such biometric features include the iris, fingerprint, face, etc. Today, these security traits are widely exploited by impostors to gain unlicensed access to systems, so it is necessary to secure these features, since the most alarming challenges arise when these trustworthy features are compromised. This paper endorses the design of zero-bit watermarking in which a user's unique ID is integrated with his facial features in a multi-transform domain. The user's unique ID is used to establish true authentication of the host facial features and also to recognize the face without any segmentation techniques. To make this effort tangible, singular values are calculated for particular frequency coefficients of the host image generated using the FFT, in which a number of appropriate regions of the host image are selected based on their frequency content. These singular values of the host facial image are calculated using SVD, and the watermark (the user's unique ID image) is integrated while maintaining the equilibrium among imperceptibility, robustness, and payload. The resultant watermarked image is tested against various image processing attacks, and satisfying results are obtained that assure the robustness of the model. Thus, the model indeed secures the host biometric of the user.

Ankita Dwivedi, Madhuri Yadav, Ankit Kumar

Chapter 32. Comparative Analysis of Various Simulation Tools Used in a Cloud Environment for Task-Resource Mapping

In a cloud environment, the IaaS model comprises the successful execution of a wide range of client applications dependent on resource planning. Broad research on all issues related to cloud computing in a physical environment is cumbersome because it requires researchers to consider the system structure and the conditions related to it, which may be uncontrollable. Essentially, the state of the framework cannot be assessed or even managed. Moreover, determining the performance of service models at various loads and assessing cloud users' methodologies under various conditions is troublesome. These issues can be resolved by creating virtual cloud test systems. In the present paper, six cloud simulators are reviewed for developing and testing different cloud applications, with a view to utilizing them in the future. The paper compares these simulators on various parameters, for example, programming language, availability, and SLA support. Finally, based on the analysis, it is concluded that CloudSim is an effective and efficient simulator among them all and can be used for further research and exploration.

Harvinder Singh, Sanjay Tyagi, Pardeep Kumar

Communication

Frontmatter

Chapter 33. Study of Spectral-Efficient 400 Gbps FSO Transmission Link Derived from Hybrid PDM-16-QAM With CO-OFDM

The present study reports a spectral-efficient free-space optics communication link. Polarization division multiplexing 16-quadrature amplitude modulation and coherent-detection orthogonal frequency division multiplexing techniques are realized to attain 400 Gbps at 42 km under clear weather conditions. Further, we investigate the proposed link performance under varying levels of fog. The overall link performance is assessed by evaluating the bit error rate, maximum link range, and receiver sensitivity.

Mehtab Singh, Jyoteesh Malhotra

Chapter 34. 4 × 10 Gbps Hybrid WDM-MDM FSO Transmission Link

The present work focuses on a high-speed free space optics transmission link deploying hybrid wavelength division multiplexing and mode division multiplexing techniques. Through numerical simulations, we report 4 × 10 Gbps transmission over 7 km in clear weather using Laguerre-Gaussian modes at 850 and 850.8 nm. Further, we investigate the impact of varying levels of fog on the proposed link performance.

Mehtab Singh, Jyoteesh Malhotra

Chapter 35. Task Scheduling in Cloud Computing Using Hybrid Meta-Heuristic: A Review

In recent years, with the advent of high-bandwidth internet access, cloud computing applications have boomed. With more and more applications being run over the cloud and an increase in the overall user base of the different cloud platforms, the need for highly efficient job scheduling techniques has also increased. The task of a conventional job scheduling algorithm is to determine a sequence of execution for the jobs that uses the least resources: time, processing, memory, etc. Generally, the user requires more services and very high efficiency, and an efficient scheduling technique helps in the proper utilization of resources. In this research realm, hybrid meta-heuristic algorithms have proven very effective at optimizing task scheduling, providing better cost efficiency than the constituent algorithms employed alone. This study presents a systematic and extensive analysis of task scheduling techniques in cloud computing that use the various hybrid variants of meta-heuristic methods, such as the Genetic Algorithm, Tabu Search, Harmony Search, Artificial Bee Colony, and Particle Swarm Optimization. A separate section discusses the performance evaluation metrics used throughout the literature.

Sandeep Kumar Patel, Avtar Singh

Chapter 36. Modulation Techniques for Next-Generation Wireless Communication-5G

Current-generation wireless communication technologies are not fully capable of meeting the demand of high-speed services such as the Internet of Things, machine-to-machine communication, and smart homes. This paper presents an evaluation of next-generation (5G) wireless technologies. Cyclic prefix orthogonal frequency division multiplexing (CP-OFDM), filtered OFDM (F-OFDM), universal filtered multicarrier (UFMC), and filter bank multicarrier (FBMC) are considered potential candidates for 5G applications. The performance analysis of these modulation techniques has been carried out in terms of computational complexity, bit error rate, and spectral efficiency. The analysis shows that all waveform candidates achieve higher spectral efficiency than CP-OFDM; moreover, FBMC and UFMC are more flexible than the other modulation techniques.

Sanjeev Kumar, Preeti Singh, Neha Gupta

Chapter 37. Muscle Artifact Detection in EEG Signal Using DTW Based Thresholding

Electroencephalographic (EEG) readings are generally contaminated with inevitable artifacts. One such artifact is the muscle artifact, engendered by muscle activities of the subject itself. Preprocessing biosignals has been a progressive field of investigation. This paper emphasizes the utility of a nonlinear distance technique, namely dynamic time warping (DTW), for the detection of muscle artifacts in EEG recordings. The paper compares DTW with an existing higher-order statistics (HOS)-based threshold method. An overall performance of 80.769% has been attained by DTW at an optimum threshold value. The present work is expected to assist in delivering improved performance in detecting muscle-related artifacts, ensuring efficient EEG analysis.
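The classic dynamic-programming form of the DTW distance that such thresholding builds on; the windowing and threshold details of the chapter's method are not reproduced here:

```python
def dtw(a, b):
    """DTW distance between two 1-D sequences (absolute-difference cost)."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

clean   = [0, 1, 2, 1, 0]
shifted = [0, 0, 1, 2, 1, 0]   # same waveform shape, warped in time
```

Unlike Euclidean distance, DTW assigns zero distance to the time-warped copy, so a threshold on the DTW distance to a clean reference epoch flags only genuinely dissimilar (artifact-laden) segments.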

Amandeep Bisht, Preeti Singh

Chapter 38. Human Activity Recognition in Ambient Sensing Using Sequential Networks

In the past few years, with the boost in ambient-assisted living environments using the Internet of Things (IoT), there is a need to analyze human activities using automated intelligent mechanisms. Machine learning has proved to be a great resource for efficiently supervising activities of daily living (ADL), especially in multi-resident environments where activities are more complicated and involve multiple residents. Public healthcare departments are significantly helped by these automated systems, as they play a vital role in monitoring patients, the elderly, and behavioral habits. In this paper, we apply state-of-the-art machine learning algorithms to the real-world ARAS multi-resident dataset, which consists of data from two houses, each with two residents. We use deep LSTM and mini-batch LSTM networks, each applied to different residents with slightly different architectures, and improve upon the work previously done by other researchers using machine learning sequence models.

Vinay Jain, Divyanshu Jhawar, Sandeep Saini, Thinagaran Perumal, Abhishek Sharma

Chapter 39. Towards the Investigation of TCP Congestion Control Protocol Effects in Smart Home Environment

Due to the proliferation of smart gadgets and the development of wireless technologies, the number of devices connected to the Internet in the Smart Home Environment (SHE) through a WiFi Access Point (AP) has increased. The data traffic generated by these devices leads to explosive data growth at the AP. Unfortunately, high data-rate transmission hinders the Transmission Control Protocol (TCP) from achieving full bandwidth utilization in wireless networks. This paper investigates the behavior of existing congestion control protocols such as TCP Westwood (used in mobile hosts), TCP NewReno (used in Windows XP), TCP Cubic (used in Linux), and Compound TCP (used in Vista and later Windows OS) in the Smart Home Environment. We comparatively study and analyze the throughput performance of the existing protocols under varying parameters. From the experimental results, it is found that Compound TCP (CTCP) achieves 8% higher throughput and a 13% lower packet loss rate than the other congestion control protocols.

Pranjal Kumar, P. Arun Raj Kumar

Chapter 40. Efficient Information Flow Based on Graphical Network Characteristics

In today's environment of rapid information generation, obtaining precise information about an event is not sufficient on its own; obtaining that information in time (or as early as possible) is just as important. Further, in certain situations, it may be equally important to control an outbreak in the network. This calls for processes and techniques that make information flow from one source to another as required. Social networks, VANETs, electrical circuit networks, etc., are prime examples where channelized information flow is required to accomplish a desired task. For this, we need to map the real information flow scenario onto a simulation environment or model. In this paper, basic graph theory concepts are used to model various scenarios of these networks and to show how network characteristics can make information flow happen in a more systematic and optimized way. The paper considers the degree distribution of the graph, PageRank scores, strongly and weakly connected components, average path length, and other characteristics that can guide node-to-node information flow so that parameters such as path length and the number of nodes visited remain optimized. The graphical model is built using the Stanford Network Analysis Platform (SNAP) to evaluate these basic measures and to help propose different algorithms depending on the scenario. Finally, the paper concludes with insight into how the algorithms can be incorporated into an existing framework using these network characteristics.
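One of the measures listed above, PageRank, can be computed by plain power iteration; a dependency-free sketch (SNAP provides this out of the box, so this is only illustrative):

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank over an adjacency dict {node: [out-links]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}   # teleport share
        for v in nodes:
            out = adj[v]
            if not out:                                 # dangling node
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                for u in out:
                    new[u] += damping * rank[v] / len(out)
        rank = new
    return rank

# Hub-and-spoke: everyone links to "a", so "a" should rank highest and
# would be a natural relay node for channelized information flow.
graph = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a"]}
scores = pagerank(graph)
```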

Rahul Saxena, Mahipal Jadeja, Atul Kumar Verma

Chapter 41. Tunable Optical Delay for OTDM

A tunable optical delay line filter is designed for an Optical Time Division Multiplexer (OTDM) using a three-stage All Pass Filter (APF) based on coupled resonators. The All Pass Filter is designed and simulated, and the properties of wide bandwidth, a long tunable delay, and low distortion are observed.

P. Prakash, K. Keerthi Yazhini, M. Ganesh Madhan

Chapter 42. Game Theory Based Cluster Formation Protocol for Localized Sensor Nodes in Wireless Sensor Network (GCPL)

Wireless Sensor Networks (WSNs) are collections of a large number of randomly distributed sensor nodes, each integrated with a micro-electro-mechanical system (MEMS). Sensor nodes use the MEMS to interact with the physical environment and extract data regarding events. The random deployment of sensor nodes makes it difficult for the sink node to identify their geographical locations without GPS. Equipping all nodes with GPS, however, raises the cost of network deployment. The proposed scheme (GCPL) detects the geographical location of sensor nodes without using GPS. Energy conservation is another important issue in WSNs owing to the energy constraints of sensor nodes. Clustering is a well-known technique to reduce energy dissipation effectively and increase network lifetime. In a WSN, cluster head selection is difficult because cluster heads deplete their energy quickly due to their extra responsibilities. The early breakdown of cluster heads forces the network to choose new cluster heads regularly, which decreases network lifetime. The proposed scheme applies a game theory-based clustering technique in which each sensor declares its own strategy to become a cluster head (CH) or not. The simulation results reveal that GCPL's efficiency is improved compared with CROSS, LGCA, and HGTD.

Raj Vikram, Sonal Kumar, Ditipriya Sinha, Ayan Kumar Das

Chapter 43. SG_BIoT: Integration of Blockchain in IoT Assisted Smart Grid for P2P Energy Trading

The demand for energy has increased exponentially over the last decade, directing the trend towards modernizing electricity provision using renewable energy resources. This demand has led to the development of smart grid infrastructure, which identifies and responds to local electricity requirements using grid automation, intelligent substations, and smart meters. Smart grids have in turn led to the evolution of microgrid technology, which allows users to distribute generation on a small scale rather than relying on centralized generation. In microgrids, centralized control is provided through renewable resources. However, the use of IoT devices in the grid system increases the vulnerability of the grid because of its centralized behavior. This paper proposes a system that provides resilience and security by integrating blockchain into smart grids. The decentralization of the smart grid impacts electricity distribution, driving the energy market towards an optimal resource-sharing environment that enables an efficient, highly secure energy trading scheme. The backtraceability of blockchain eases the accounting of customers' transactions and electricity billing. Thus, electricity can be provided whenever and wherever needed.

J. Chandra Priya, V. Ramanujan, P. Rajeshwaran, Ponsy R. K. Sathia Bhama

Chapter 44. Software Defined Network: A Clustering Approach Using Delay and Flow to the Controller Placement Problem

Software Defined Networking (SDN) is a significant model for the administration of large-scale complex systems, which may require occasional reconfiguration. It involves the separation of the network control plane and the forwarding plane, where one control plane controls many devices. In this way, the switches forward packets according to the rules defined in the flow table by the controller. Despite being a new area, SDN has attracted a lot of attention from both academia and industry. This paper investigates the multi-controller placement problem from the viewpoint of cost and load balance. The paper proposes an algorithm named Load Balancing with ECP-LL that places controllers using clustering. The algorithm uses Shannon entropy, which avoids poor local minima.
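A sketch of the entropy idea: the Shannon entropy of the per-controller load distribution is maximised when load is perfectly balanced, so an entropy term steers a placement away from unbalanced (poor local-minimum) configurations. The load figures below are illustrative:

```python
import math

def load_entropy(loads):
    """Shannon entropy (bits) of a per-controller load distribution."""
    total = sum(loads)
    probs = [l / total for l in loads if l > 0]
    return -sum(p * math.log2(p) for p in probs)

balanced   = [25, 25, 25, 25]   # entropy = log2(4) = 2 bits (maximum)
unbalanced = [70, 10, 10, 10]   # strictly lower entropy
```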

Anilkumar Goudar, Karan Verma, Pranay Ranjan

Chapter 45. Netra: An RFID-Based Android Application for Visually Impaired

For a visually impaired person, the position and orientation of daily objects such as furniture and other personal items are based on routine. It is very difficult for visually impaired persons to locate missing items or to obtain information about an object in front of them without disturbing anyone. One of the most difficult daily tasks is choosing a matching pair of clothes. The RFID-based Android application proposed in this paper is an attempt to solve these problems. It gives information about the object in front of a blind person, helps identify a particular object within a group of objects, and helps blind people choose appropriate clothes from a set without any assistance.

Pooja Nawandar, Vinaya Gohokar, Aditi Khandewale

Chapter 46. Efficient Routing for Low Power Lossy Networks with Multiple Concurrent RPL Instances

With the advent of the Internet of Things (IoT) and its subsequent developments, routing in Low Power Lossy Networks (LLNs) has been drawing continuous research interest. The IPv6 Routing Protocol for LLNs (RPL) is often considered the most suitable for LLN routing. In order to cope with the increasing demands of the IoT paradigm, several RPL enhancements have been made, especially regarding mobility and efficient utilization of resources. LLNs, being an enabling technology of the IoT, may be subject to different Quality of Service (QoS) requirements depending on the applications being served. This results in multiple RPL instances, each serving a unique application, coexisting in the same LLN. This paper proposes a variant of RPL that uses a unique Objective Function (OF), named the Multiple instances ETX-Hop count Objective Function (MEHOF), to handle multiple instances. The proposed protocol has been simulated and tested, and the results show improved performance compared to its existing counterparts.

Jinshiya Jafar, J. Jaisooraj, S. D. Madhu Kumar

Chapter 47. Deep Learning-Based Wireless Module Identification (WMI) Methods for Cognitive Wireless Communication Network

Nowadays, Internet of Things (IoT)-enabled event monitoring, controlling, and managing systems employ different kinds of short-range, medium-range, and long-range wireless technologies. The wireless communication modules of most commercial IoT-enabled automation systems operate in the industrial, scientific and medical (ISM) frequency band. Therefore, identification of active wireless modules has become essential for detecting and tracking unauthorized users within restricted zones in a timely manner. It is also useful for estimating the density of wireless modules operating at the same or different frequencies with the same or different communication protocols. In this paper, we present an automated wireless module identification (WMI) system based on the communication protocols of three wireless modules, ZigBee, Bluetooth, and Wi-Fi, which operate in the ISM frequency band. Three WMI systems are developed based on deep learning networks (DLNs): a two-dimensional convolutional neural network (2D-CNN), long short-term memory (LSTM), and a convolutional long short-term deep neural network (CLDNN). We evaluated the three DLN-based WMI methods on real-time RF signals recorded using Blade RF, a software defined radio (SDR).

Sudhir Kumar Sahoo, Chalamalasetti Yaswanth, Barathram Ramkumar, M. Sabarimalai Manikandan

Chapter 48. Style Transfer for Videos with Audio

In the art of painting, since the early era of human civilization, human beings have been creating artistic images with content from the real world but style from their imagination. Consider one such work, The Starry Night by the painter van Gogh: the mountains, moon, and houses are content taken from the real world, but the style of painting is entirely from the painter's imagination and is unique. Style transfer is the problem of taking the content of one image and the style of another to create a third image having the content of the former but the style of the latter. Clearly, such a result cannot be obtained by simply overlapping the two images. Until recently, due to unoptimized GPUs and slower hardware, image processing was a time-consuming computational problem, but we can now use technological optimizations and Convolutional Neural Networks (CNNs) to do style transfer. In this paper, we discuss how style transfer can be done in videos that contain audio, and we compare one of the existing methods with our implementation. Our proposed work has potential applications in the domains of social media communication, the entertainment industry, and mobile applications.
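The standard core of CNN-based style transfer (due to Gatys et al., not this chapter's full video pipeline) represents style as the Gram matrix of a layer's feature maps, i.e., channel-wise correlations that discard spatial arrangement. A pure-Python sketch on tiny feature maps:

```python
def gram_matrix(features):
    """features: list of C flattened feature maps, each of length H*W."""
    c = len(features)
    return [[sum(fi * fj for fi, fj in zip(features[i], features[j]))
             for j in range(c)] for i in range(c)]

def style_loss(feats_a, feats_b):
    """Sum of squared differences between the two Gram matrices."""
    ga, gb = gram_matrix(feats_a), gram_matrix(feats_b)
    return sum((ga[i][j] - gb[i][j]) ** 2
               for i in range(len(ga)) for j in range(len(ga)))

style    = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]]
rearranged = [[0.0, 1.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]  # same statistics
uniform  = [[1.0] * 4, [1.0] * 4]                           # different texture
```

Because the Gram matrix keeps only correlation statistics, spatially rearranged features yield zero style loss while a genuinely different texture does not, which is exactly why optimizing this loss transfers "style" without copying "content".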

Gaurav Kabra, Mahipal Jadeja

Chapter 49. Development of Antennas Subsystem for Indian Airborne Cruise Missile

Microstrip patch antennas are designed for a generalized airborne application that requires a system for navigation and for flight-aspect communication with ground-based control/guidance, suitable for installation on a small guided missile. A gain of 9.14 dB across a 60° boresight with a center frequency of 4.3 GHz is obtained for the radar altimeter. For satellite telemetry in the S-band around 3.15 GHz, a gain of 8.96 dB is obtained with a 54 MHz circular polarization bandwidth. Two antennas are designed for the IRNSS receiver application: the L5-band antenna provides a gain of 4.5 dB and the S-band antenna a gain of 7.66 dB for RHCP. The simulated designs were fabricated, and the measured results were consistent with the simulations. For efficiency at these high frequencies, RT/Duroid 5880 with a thickness of 3.175 mm was utilized. Post-fabrication measurements were obtained in an anechoic (reflection-free) environment.

Ami Jobanputra, Dhruv Panchal, Het Trivedi, Dhyey Buch, Bhavin Kakani

Chapter 50. A Literature Survey on LEACH Protocol and Its Descendants for Homogeneous and Heterogeneous Wireless Sensor Networks

Lifetime and energy efficiency are the main concerns in a wireless sensor network. A WSN is composed of various sensors with a controlling base station, and the sensor nodes have a restricted energy budget. Utilizing this energy effectively, so that data is passed from the sensing nodes to the base station efficiently, has become a pressing issue. LEACH is a conventional hierarchical routing protocol that is widely used in WSNs. The original LEACH routing protocol has some drawbacks; for example, it does not consider the residual energy of nodes in the cluster head selection process. To overcome these drawbacks, many enhancements have been made to the standard LEACH protocol. The principal target of this paper is to give a brief description of improved versions of the basic LEACH protocol. Finally, comparisons are made between the various improved versions of the LEACH protocol.
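For reference, the classic LEACH cluster-head election threshold that the surveyed descendants modify: in round r, a node that has not yet served as cluster head in the current epoch becomes one with probability T(n) = p / (1 − p · (r mod 1/p)):

```python
def leach_threshold(p: float, r: int) -> float:
    """T(n) for a node not yet elected CH in the current epoch."""
    return p / (1.0 - p * (r % int(1.0 / p)))

p = 0.05               # desired fraction of cluster heads per round
epoch = int(1 / p)     # 20 rounds; every node serves as CH once per epoch
thresholds = [leach_threshold(p, r) for r in range(epoch)]
```

The threshold rises from p at the start of an epoch to 1 in its last round, guaranteeing each node is elected exactly once per epoch; the residual-energy-aware variants surveyed here weight this probability by remaining energy.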

Anish Khan, Nikhil Marriwala

Chapter 51. Performance Study of Ultra Wide Band Radar Based Respiration Rate Measurement Methods

Objective: This paper investigates the performance of ultra wide band (UWB) impulse radar based respiration rate (RR) measurement methods for estimating the RR of a single person behind four types of wall: concrete, wood, glass, and brick. Methods: We present a two-stage variational mode decomposition (VMD) scheme with different data fidelity constraint (α) values for suppressing the effect of baseline drifts and extracting the candidate respiratory signal. Three RR estimation approaches, mode center frequency (MCF), fast Fourier transform (FFT), and autocorrelation function (ACF), are investigated for extracting the RR parameter accurately. Validation: A UWB radar based respiration signal database is created with ten subjects in concrete-wall, wood-wall, glass-wall, and brick-wall recording scenarios. The three RR measurement methods are evaluated in terms of absolute error (AE), root mean-square error (RMSE), and processing time. Results: The VMD-FFT based method had lower AE and RMSE values for these types of walls compared with the VMD-MCF and VMD-ACF based methods. Conclusion: This study shows that the UWB sensing module is capable of estimating the RR with acceptable error for subjects behind wood, brick, concrete, and glass walls.
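A sketch of the FFT-style estimator: after drift suppression, the RR is read off the dominant spectral peak of the candidate respiratory signal and converted to breaths per minute. A naive DFT keeps the example dependency-free; the sampling rate and test signal are illustrative, not from the paper:

```python
import math

def respiration_rate_bpm(signal, fs):
    """Dominant positive-frequency DFT bin, converted to breaths/min."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]          # remove residual DC
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):              # positive-frequency bins
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n * 60.0           # peak frequency -> breaths/min

fs = 10.0                                    # Hz, assumed
sig = [math.sin(2 * math.pi * 0.25 * t / fs) for t in range(400)]
rr = respiration_rate_bpm(sig, fs)           # 0.25 Hz -> 15 breaths/min
```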

P. Bhaskara Rao, Srinivas Boppu, M. Sabarimalai Manikandan

Chapter 52. Secure Architecture for 5G Network Enabled Internet of Things (IoT)

The 5G network is designed to meet the needs of today's modern society by integrating with the Internet of Things, communicating with billions of connected devices, and supporting data-intensive and upcoming applications and user requirements. At present, 4G networks manage current services; 5G will initially operate alongside the existing 4G network and then evolve into full-fledged standalone networks for future generations. The innovations of 5G are reliable connections with fast response and managed latency, where latency is the time taken for devices to respond to each other over the wireless network: 3G networks have a response time of about 100 ms, 4G networks around 30 ms, and 5G networks are designed for latency as low as 1 ms. 5G technology provides fast, reliable mobile data services to the end user across multiple access platforms and multi-layer networks, within a flexible framework of advanced technologies supporting a variety of applications. 5G supports upcoming technical features such as intelligent architecture, radio access networks (RANs), 3rd Generation Partnership Project (3GPP) standards, Long Term Evolution-Advanced (LTE-A), and machine-to-machine (M2M) communication technologies with simplicity and flexibility, without complex infrastructure. The proposed 5G-enabled IoT architecture provides a secure design that protects the layers effectively and efficiently, for better utilization of upcoming applications as well as essential customer services.

Voore Subba Rao, V. Chandra Shekar Rao, S. Venkatramulu

Data Sciences

Frontmatter

Chapter 53. Robust Image Watermarking Using DWT and Artificial Neural Network Techniques

Digital content, available in the form of text, images, video, and more, plays an important role in today's world, so protecting it from unauthorized access is an important issue. Researchers have worked in this field for the last few decades and proposed a number of methods. The proposed work offers a robust and fast algorithm for embedding a watermark in an image. DWT features are used, with embedding performed in the LL band, and the security of the data is improved by applying an inverse S-order permutation before embedding. The embedded data is transformed into training vectors for a neural network model, with the corresponding watermark bit as the desired output; the network then learns to output the watermark bit from the embedded data. This involvement of a neural network in watermark detection increases the robustness of the scheme. A real set of images was used for the experiments, and the evaluation shows that the proposed work improves the PSNR, SNR, and NC parameters compared to existing methods.
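The abstract does not specify the embedding rule, the inverse S-order permutation, or the neural detector; a hypothetical non-blind sketch of LL-band embedding, using a hand-rolled single-level Haar DWT, might look like this (the 8×8 cover, the strength `alpha`, and the ±1 bit mapping are illustrative choices, not the authors' scheme):

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar DWT: returns LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 2.0, (a + b - c - d) / 2.0,
            (a - b + c - d) / 2.0, (a - b - c + d) / 2.0)

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    img[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    img[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    img[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return img

rng = np.random.default_rng(42)
cover = rng.uniform(0, 255, size=(8, 8))
ll, lh, hl, hh = haar2d(cover)

bits = rng.integers(0, 2, size=ll.shape)   # watermark bits
alpha = 4.0                                # embedding strength
ll_marked = ll + alpha * (2 * bits - 1)    # additive embedding in LL
marked = ihaar2d(ll_marked, lh, hl, hh)

# Non-blind extraction: compare the marked LL band with the original LL
ll2, _, _, _ = haar2d(marked)
recovered = (ll2 - ll > 0).astype(int)
```

Because the Haar transform is exactly invertible, the ±alpha shifts in the LL band survive reconstruction, and extraction reduces to a sign test against the original coefficients.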

Anoop Kumar Chaturvedi, Piyush Kumar Shukla, Ravindra Tiwari, Vijay Kumar Yadav, Sachin Tiwari, Vikas Sakalle

Chapter 54. Fraud Detection in Anti-money Laundering System Using Machine Learning Techniques

Money laundering (ML) is the process of legitimizing large amounts of capital gained from illegal or unethical activities, including serious offences such as drug trafficking, organized crime, and terrorism. Taking advantage of it, fraudulent actors transform their illegal assets into apparently legal ones. The process of identifying such money laundering activities is called anti-money laundering (AML), and it has become more complex in recent years due to the advancement of technology. Many financial institutions have shown interest in developing techniques to fight fraud at various levels of their transactions. Since a large number of transactions take place every day, it is difficult to identify the fraudulent ones manually. In this study, an architectural model is proposed to determine fraud patterns in past transactions and to detect unlawful ones in real time. Machine learning techniques help in distinguishing and recognizing these transaction patterns. Eight machine learning techniques, including Support Vector Machine (SVM), logistic regression, averaged perceptron, neural networks, decision trees, and random forest, are implemented on a set of AML data, and random forest is observed to work best among them.
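Real AML data is confidential, so a sketch of the classifier comparison has to stand on synthetic, imbalanced data (here via scikit-learn's `make_classification`); only two of the paper's eight techniques are shown, and all parameters are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for transaction features; ~10% of samples are "fraud"
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

scores = {}
for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("rf", RandomForestClassifier(n_estimators=100,
                                                random_state=0))]:
    clf.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, clf.predict(X_te))
```

On heavily imbalanced AML data, accuracy alone can be misleading (a model predicting "legitimate" everywhere scores ~90% here), which is why recall and precision on the fraud class usually accompany comparisons like this.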

Ayush Kumar, Debachudamani Prusti, Daisy Das, Shantanu Kumar Rath

Chapter 55. A Smart Approach to Detect Helmet in Surveillance by Amalgamation of IoT and Machine Learning Principles to Seize a Traffic Offender

In this era of the smart world, nearly every entity we talk about is smart and intelligently structured. One of the most prominent topics, which this paper addresses, is smart road-safety measures for two-wheelers, as roads are the most common means of commuting today. This paper describes a smart system that detects a rider without a helmet on a two-wheeler and reports them to the concerned authority. The paper has five sections: the first introduces the problem statement, the second presents the literature review, the third describes the system and its architecture, the fourth covers the system's verification and validation, and the fifth concludes with the future scope.

Gaytri, Rishabh Kumar, Uppara Rajnikanth

Chapter 56. Botnet Detection Using Machine Learning Algorithms

In recent years, security threats from the Internet have been spreading at a fast rate. These threats come from fraudulent emails, websites, and malicious software operating independently, and they fall into different categories (virus, malware, trojan, etc.). Each threat uses different techniques to spread its malicious code, and different techniques are likewise needed to detect them. At the focal point of many of these attacks are collections of compromised computers, or botnets, remotely controlled by attackers, whose members are located in homes, schools, businesses, and governments around the globe. Botnets have become a serious present-day threat, attacking many organizations and enabling cybercrime. A botnet works by a carry-and-spread method, transferring malicious code or software to other computers. Spam, denial-of-service attacks, and click fraud are some of the methods through which botnets attack systems. Detecting botnets is a task that can be carried out efficiently using machine learning. This paper focuses on different machine learning algorithms and their analysis for botnet detection. The algorithms are implemented on an existing dataset, and the results show each algorithm's ability to detect botnets.

Chirag Joshi, Vishal Bharti, Ranjeet Kumar Ranjan

Chapter 57. Estimation of Daily Average Global Solar Radiance Using Ensemble Models: A Case Study of Bhopal, Madhya Pradesh Meteorological Dataset

The paper presents an ensemble model for predicting the daily averaged global solar radiation of Bhopal, the City of Lakes, in the central region of India. Bhopal has a very diverse climate: during the wet season (the monsoon), the weather is mostly cloudy with ample rain and thunderstorms, whereas in the dry winter and summer seasons the sky is clear and the sun shines brightly for most of the day, leading to solar radiation within a finite range every season. The climate of this city is thus representative of the country's and is suitable as a solar-radiation case study for further analysis of renewable energy sources. We use meteorological variables such as day of the year, sunrise and sunset time, maximum and minimum temperature, solar irradiance, precipitation, humidity, wind direction, and wind speed to train the machine learning ensemble model, which is built with Python's scikit-learn library and trained on the dataset. With good correlation factors for independent variables that strongly influence the local weather, the experimental setup demonstrated good accuracy, small root-mean-square error (RMSE), and good mean absolute percentage error (MAPE) on normalized values for the given case study. To the best of our knowledge, this is the first attempt at analyzing numerical weather prediction data for India, where previously the only factor considered before setting up solar panels was the inclination angle. With only that prerequisite, solar energy estimation is less accurate than estimation using ML and weather-related information, which can be acquired from reliable sources. This work thus offers a relatively easier and less expensive approach to solar radiance estimation and the setup of solar panels across the country.
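The RMSE and MAPE metrics the study reports have short standard definitions, sketched below; the irradiance values are invented for illustration and are not from the Bhopal dataset:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error (in %); assumes no zero targets."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Hypothetical daily solar irradiance (kWh/m^2/day) vs. model predictions
actual    = [5.2, 6.1, 4.8, 5.9]
predicted = [5.0, 6.0, 5.0, 6.0]
err_rmse = rmse(actual, predicted)
err_mape = mape(actual, predicted)
```

RMSE penalizes large misses quadratically, while MAPE is scale-free, which is why papers on solar forecasting commonly report both.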

Megha Kamble, Sudeshna Ghosh

Chapter 58. Text Localization in Scene Images Using Faster R-CNN with Double Region Proposal Networks

The problem of text extraction is an interesting area of research in the computer vision domain. In recent years, the emergence of various applications on smart hand-held devices, such as real-time translation of text from one language to another, computerized aids for the visually impaired, user navigation and track monitoring, and driving assistance systems, has stimulated renewed research interest in this domain. Although various Convolutional Neural Network (CNN) based methods have been explored for text localization in scene images, a Faster R-CNN with a double region proposal network (RPN) has not been explored yet. The conventional Faster R-CNN produces regions of interest (ROIs) through a single RPN utilizing the feature matrix of the last convolutional layer, whereas the present investigation proposes an end-to-end method of scene text localization where ROIs are generated by double RPNs using the feature matrices of thirteen different convolutional layers and four pooling layers. The outputs of these two RPNs are then merged, enabling the system to locate the text regions in scene images. The performance of the system has been assessed on the ICDAR 2013/2015 RRC test datasets, where it outperformed all existing studies on scene text detection.

Pragya Hari, Rajib Ghosh

Chapter 59. Event Classification from the Twitter Stream Using Hybrid Model

During natural disasters, user-generated social media content is monitored and analyzed for valuable information so that appropriate actions can be taken. Tweets posted on Twitter discuss everything from the tales of everyday life to the latest events and news. Detecting disaster-related tweets at an early stage can help in proper planning of rescue and relief operations and mitigate the damage. In this work, disaster-related tweets about earthquakes, hurricanes, floods, and wildfires, along with general tweets about cars, sports, and cricket, are used to train a hybrid model of gated recurrent unit (GRU) and long short-term memory (LSTM) networks that first classifies tweets into these classes; the trained system can then be used to detect an event. To the best of our knowledge, no prior work has used this technique for separating event-related tweets from general tweets on Twitter. For event detection, an unexpected rise in the volume of tweets about a particular event indicates a high chance that the event is occurring. The proposed hybrid network classifies tweets with an F1-score of 0.99, which shows the effectiveness of the proposed system.

Neha Singh, M. P. Singh, Prabhat Kumar

Chapter 60. Using an Ensemble Learning Approach on Traditional Machine Learning Methods to Solve a Multi-Label Classification Problem

This paper explores an interesting way to solve a multi-label classification problem using ensemble learning combined with a natural language processing approach. The multi-label classification problem selected is movie genre prediction, where each movie can belong to multiple genres, which essentially differentiates it from the more classical multi-class classification problem. In the current machine learning environment there is a host of different classifiers, some more suited to handling more than binary classes at once, such as random forests and decision trees, and some modelled around solving an inherently binary classification problem. As we are using textual data in the form of synopses of various movies, we use Word2Vec to vectorize the text before fitting the classifiers. We use classifiers of both types, namely Naive Bayes, a random forest classifier, and an XGBoost classifier, to individually predict the multiple labels for each movie, and then construct an ensemble model using a voting classifier, adjusting the weights of the combined classifiers, and show that the ensemble model performs much better than each of the individual classifiers. The final micro-F-score achieved is 0.6738, which is commendable for non-neural traditional machine learning techniques and is an improvement over the work referenced in this paper.
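The paper's exact ensemble is not reproduced here; a hedged sketch of the same pattern fits one weighted soft-voting ensemble per genre label in a one-vs-rest wrapper, over synthetic multi-label data standing in for Word2Vec synopsis vectors, and substitutes logistic regression for the boosting member since XGBoost may not be installed:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in: 20-dim "synopsis vectors", 5 overlapping genre labels
X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3,
                                          random_state=0)

# One weighted soft-voting ensemble per label (one-vs-rest decomposition)
vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=50,
                                              random_state=0))],
    voting="soft", weights=[1, 2])
model = OneVsRestClassifier(vote).fit(X_tr, Y_tr)
micro_f1 = f1_score(Y_te, model.predict(X_te), average="micro")
```

Micro-averaging pools true/false positives across all labels before computing F1, which is the same aggregation behind the paper's reported 0.6738.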

Siddharth Basu, Sanjay Kumar, Sirjanpreet Singh Banga, Harshit Garg

Chapter 61. Automatic Building Extraction from High-Resolution Satellite Images Using Deep Learning Techniques

With advancements in satellite technology, it has become easy to obtain satellite datasets that can be used for many kinds of analysis of our Earth. A lot of research is going on in different directions; one of them is building extraction from remote sensing images. It is an area with a wide variety of applications, including smart city planning, insurance and tax assessment, disaster management, and socioeconomic information estimation. In this research work, two popular deep learning architectures, viz. U-Net and ResUnet, have been implemented with hyper-parameter tuning and tested on high-resolution satellite images using an NVIDIA DGX-1 v100 supercomputer. The performance of the two deep learning models has been assessed based on accuracy, precision, recall, F1-score, and mean Intersection over Union (mean IoU). The results of the experiment reveal that the U-Net-based model outperformed the ResUnet-based model, achieving a better accuracy of 0.8739 and a mean IoU of 0.745 on the test data. The results may be improved further by incorporating data augmentation to generate more images for training the deep learning models.
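Mean IoU, the segmentation metric reported above, has a short standard definition that can be sketched directly; the tiny masks below are illustrative, not from the satellite dataset:

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean Intersection-over-Union over classes for integer label masks."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny binary building mask example (1 = building, 0 = background)
target = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0]])
pred   = np.array([[1, 1, 1, 0],
                   [1, 0, 0, 0]])
miou = mean_iou(pred, target, n_classes=2)
```

Unlike pixel accuracy, IoU is insensitive to the large background class, which is why it is the preferred metric when buildings cover only a small fraction of each tile.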

Mayank Dixit, Kuldeep Chaurasia, Vipul Kumar Mishra

Chapter 62. Epileptic Seizures Classification Based on Deep Neural Networks

Epileptic seizure is a chronic, non-communicable disease which occurs in people of all ages. Electroencephalography (EEG) plays a vital role in the detection of epileptic seizures and, due to advantages like low cost and portability, is preferred over other brain acquisition techniques. As the number of EEG segments to analyze grows, a neurophysician needs additional time to classify the seizures. To reduce this analysis time, two deep neural network-based models are proposed: one using the long short-term memory (LSTM) model and another using bi-directional long short-term memory (Bi-LSTM), to classify whether given data represents normal activity, partial seizures, or generalized seizures. EEG data of 40 patients is collected, and each patient's data is split into 10-second segments. The algorithm is written in MATLAB® and fed the 10-second segments of all patients. Training is done on a personal computer with an improved system configuration, which reduces training time from days to a few hours: 82 min for the network with the LSTM model and 122 min for the network with the Bi-LSTM model. The accuracy achieved with the LSTM and Bi-LSTM models is 90.89% and 90.92%, respectively.
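The 10-second segmentation step translates directly into a windowing routine; the sketch below is in Python rather than the authors' MATLAB, and the 256 Hz sampling rate is an assumption since the abstract does not state one:

```python
import numpy as np

def segment_eeg(signal, fs, seg_seconds=10):
    """Split a 1-D EEG channel into non-overlapping fixed-length
    segments, discarding any incomplete trailing samples."""
    seg_len = int(fs * seg_seconds)
    n_segs = len(signal) // seg_len
    return signal[:n_segs * seg_len].reshape(n_segs, seg_len)

fs = 256                                 # assumed sampling rate (Hz)
eeg = np.random.randn(fs * 95)           # 95 s of one synthetic channel
segments = segment_eeg(eeg, fs)          # 9 complete 10-second segments
```

Each row of `segments` would then be one training sequence for the LSTM or Bi-LSTM classifier.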

Annangi Swetha, Arun Kumar Sinha

Chapter 63. Analysis for Malicious URLs Using Machine Learning and Deep Learning Approaches

Malicious Uniform Resource Locators (URLs) are the primary mechanism for hosting unsolicited content, for example spam, malicious ads, phishing, and drive-by downloads. Past studies have used blacklisting, signature matching, and regular expression techniques, but these methods are ineffective at detecting variants of existing malicious URLs or entirely new ones. The machine learning approach provides a simple solution to this problem, though it requires broad analysis in feature engineering and in the representation of security artifacts such as URLs. Conventional mechanisms based on combinations of keywords and URL syntax cannot deal efficiently with the ever-changing developments and techniques of Web access. This article provides a timely and comprehensive survey for a variety of audiences, not just machine learning researchers and academic engineers but also cybersecurity professionals and specialists, to improve their knowledge of the state of the art and to inform their own work and practices.
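As an illustration of the feature-engineering burden the survey discusses, a small set of commonly used lexical URL features can be extracted with the standard library; this is an assumed, illustrative subset, not the survey's own feature list:

```python
from urllib.parse import urlparse

def lexical_features(url):
    """A few hand-crafted lexical features often fed to ML-based
    malicious-URL detectors (illustrative subset)."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "host_length": len(host),
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_special": sum(ch in "-_@?=&%" for ch in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_ip_host": host.replace(".", "").isdigit(),
        "uses_https": parsed.scheme == "https",
    }

f = lexical_features("http://192.168.10.5/login?user=admin&pass=1")
```

Numeric hosts, long URLs, and dense special characters are weak signals individually, but combined into a feature vector they give a classifier something variants of a URL cannot easily hide.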

Santosh Kumar Birthriya, Ankit Kumar Jain

Chapter 64. Engaging Smartphones and Social Data for Curing Depressive Disorders: An Overview and Survey

Depression is one of the significant causes of disability throughout the world. If untreated, depression can lead to suicide, which is a leading cause of death among youngsters. Treatment for depression is not accessible due to various factors such as social stigma, unawareness, and shortage of medical professionals. With the development of technology, studies have been done to provide health care for mental illnesses through smartphones. The widespread use of smartphones provides an opportunity for continuous monitoring of the patients. Smartphone usage patterns, along with social media data, provide insights about the mental state of the patients, and they can be used to predict depression and recommend activities that help in treating depression. We first review such systems that aim to predict and cure depression using smartphones. We then propose our system that helps in predicting, monitoring, and treating depression. We use machine learning for analyzing the sentiment of social media posts and then perform depression prediction and activity recommendations using analyzed social media data, wearable sensor data, electronic health records, and smartphone usage patterns.

Srishti Bhatia, Yash Kesarwani, Ashish Basantani, Sarika Jain

Chapter 65. Transfer Learning Approach for the Diagnosis of Pneumonia in Chest X-Rays

Accurate and early diagnosis of disease is a crucial aspect of the medical industry for the prevention and correct treatment of illness. Pneumonia is an infection of the lungs that inflames the air sacs. In this paper, deep learning has been used to explore and predict which people are affected by pneumonia; the main goal is to detect whether or not a person is suffering from pneumonia. The Xception and VGG16 models are used to predict from chest X-ray images. The chest X-ray dataset used in this paper is publicly available and contains 5856 chest X-ray images for the diagnosis of pneumonia in individual chest X-rays. Xception produces the best results in the prediction of chest X-ray images on the test set.

Kuljeet Singh Sran, Sachin Bagga

Chapter 66. Physical Sciences: An Inspiration to the Neural Network Training

Behind every discovery, observation plays a vital role. Neural networks (NNs) are the result of very minute observation of neurons, which led to a revolution in the field of artificial intelligence. In contrast to these biological inspirations, we present some physical science models that can be related to the neural network training phase: momentum, solar orbits, and the shape of a water droplet. Momentum is related to weight optimization during the NN training phase, solar orbits help in understanding the concept of epochs, and the shape of a water droplet gives us the motivation behind loss function optimization. These three are just some of the many inspirations that we can obtain from the physical sciences. This paper can be the beginning of a whole new way of thinking about neural networks.
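The momentum analogy maps onto the classical momentum update used in gradient-based weight optimization; a sketch on a toy quadratic, where the learning rate and momentum coefficient are illustrative choices:

```python
def momentum_step(w, grad, velocity, lr=0.1, beta=0.9):
    """Classical momentum: the update carries inertia from past
    gradients, like a ball rolling downhill in a physical system."""
    velocity = beta * velocity - lr * grad(w)
    return w + velocity, velocity

# Minimise f(w) = (w - 3)^2, whose gradient is 2(w - 3)
grad = lambda w: 2.0 * (w - 3.0)
w, v = 0.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, grad, v)
```

The velocity term accumulates a decaying sum of past gradients, so the iterate overshoots and oscillates around the minimum before settling, exactly the damped-oscillation behaviour the physical picture suggests.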

Venkateswarlu Gudepu, Anshita Gupta, Parveen Kumar

Chapter 67. Deep Learning Models for Crop Quality and Diseases Detection

Deep learning is gaining momentum in the agricultural field for crop disease detection using image processing, due to its computational power. Several deep learning techniques have been implemented in different domains and recently introduced in the field of agriculture to classify and predict crop diseases. Based on images of banana crops in the early stages of development, the objective of this research study is to create a prediction model using two Convolutional Neural Network (CNN) architectures, namely AlexNet and ResNet50. To carry out the empirical study, the PlantVillage dataset for the banana plant, with 510 images of banana leaves, was used to train and test the networks. Results were analyzed using four parameters: training accuracy (TA), training loss (TL), validation accuracy (VA), and validation loss (VL). It was observed that ResNet50 outperformed AlexNet, achieving 88.54% when validation accuracy is considered as the performance evaluation measure. The results of this study will be useful for farmers, as they can make timely interventions in the case of Banana Black Sigatoka (BBS) and Banana Bacterial Wilt (BBW) diseases.

Priyanka Sahu, Anuradha Chug, Amit Prakash Singh, Dinesh Singh, Ravinder Pal Singh

Chapter 68. Clickedroid: A Methodology Based on Heuristic Approach to Detect Mobile Ad-Click Frauds

A number of apps are available in app stores for different types of online services, and their number is growing with the public's use of mobile devices. Advertisement on mobile through apps has become a popular trend in online business. Ad fraud is a major concern for the advertising industry, and the most affected parties are the advertisers: an advertiser pays revenue to the ad network for each click on their ad, so if ghost clicks are not captured as fraudulent or invalid, the advertiser pays the ad network for those too. The advertiser's return on investment (ROI) then becomes very low and the expected goal is not achieved, so detecting fraudulent clicks on ads is very important. The proposed methodology follows a heuristic approach to detect mobile ad-click fraud. A click-fraud detection algorithm has been developed and implemented on a cloud server; it is triggered for each recorded ad click to decide whether the click is fraudulent or not. The obtained results have been verified by popular machine learning (ML) algorithms, namely support vector machine (SVM), random forest, and k-nearest neighbors (k-NN), with accuracies of 91%, 84%, and 85%, respectively.
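The paper's actual heuristic rules are not given in the abstract; a hypothetical sketch of the general idea flags clicks that repeat too quickly or too often from one device on the same ad, with made-up thresholds and field names:

```python
from collections import defaultdict

def flag_fraud_clicks(clicks, min_gap_seconds=2, max_clicks_per_device=5):
    """Heuristic click-fraud filter (illustrative thresholds): flag a
    click that repeats from the same device on the same ad within
    min_gap_seconds, or once a device exceeds max_clicks_per_device."""
    last_seen = {}
    counts = defaultdict(int)
    flags = []
    for ts, device_id, ad_id in sorted(clicks):
        key = (device_id, ad_id)
        counts[key] += 1
        too_fast = key in last_seen and ts - last_seen[key] < min_gap_seconds
        too_many = counts[key] > max_clicks_per_device
        flags.append(too_fast or too_many)
        last_seen[key] = ts
    return flags

clicks = [(0, "devA", "ad1"), (1, "devA", "ad1"),   # 1 s apart
          (0, "devB", "ad1"), (30, "devB", "ad1")]  # 30 s apart
flags = flag_fraud_clicks(clicks)
```

In a deployment like the one described, this check would run server-side on each incoming click record before the click is billed.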

Pankaj Kumar Keserwani, Vedant Jha, Mahesh Chandra Govil, Emmanuel S. Pilli

Chapter 69. Machine Translation System Using Deep Learning for Punjabi to English

Machine Translation (MT) has been an ongoing research topic for decades. Research started with simple word-for-word replacement from a source language (e.g. English) to a target language (e.g. Hindi), then moved to statistical-based machine translation (SBMT), which relies on parallel corpora. In the last few years, deep learning has been used to develop MT systems. In this paper, an artificial neural network-based machine translation (ANMT) system is trained and tested for Punjabi to English. To train the proposed system, a parallel Punjabi-English corpus was prepared, and three models for the Punjabi-to-English NMT system were developed from it. The BLEU score is used to evaluate the performance of the system. The proposed system achieved BLEU scores of 36.98 for short sentences, 34.38 for medium sentences, and 24.51 for long sentences using model 1; 36.62, 35.51, and 26.61 respectively using model 2; and 60.68, 39.22, and 26.38 respectively using model 3.
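BLEU, the evaluation metric used above, can be sketched for a single reference sentence; this is a simplified, unsmoothed variant for illustration, not the authors' evaluation script:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n])
                   for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with one reference: geometric mean of
    modified n-gram precisions times a brevity penalty (no smoothing)."""
    cand, ref = candidate.split(), reference.split()
    log_p = 0.0
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c[g], r[g]) for g in c)  # clipped counts
        if overlap == 0:
            return 0.0
        log_p += math.log(overlap / sum(c.values())) / max_n
    bp = (1.0 if len(cand) > len(ref)
          else math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(log_p)

score = bleu("the cat sat on the mat", "the cat sat on the mat")
```

An exact match scores 1.0 (often reported as 100); published scores such as the 36.98 above correspond to 0.3698 on this scale, averaged over a corpus.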

Kamal Deep, Ajit Kumar, Vishal Goyal

Chapter 70. The Agile Deployment Using Machine Learning in Healthcare Service

Hospitals are in search of industry-proven processes through which an increasing gamut of complicated operations can be managed prudently, operating costs reduced considerably, and quality of care improved. Agile methodologies, combined with machine learning, are adequate for healthcare, and Agile is now ubiquitous across many kinds of work and organizations. The 'Agile' redesign deals with improving the system's responsiveness to the patient through improved flexibility and coordination. Machine learning helps healthcare to diagnose disease, recommend treatment, improve online consultations, speed up drug development, and improve the training of doctors and medical students. Healthcare services in hospitals are good, but the quality of service provided in rural areas is not up to the mark compared to urban areas. In this research, an actuarial model of the healthcare system is built which is very cost efficient. The adoption of a larger number of computerized tools has yielded advantages in the procedures for taking care of patients. According to the WHO, less than one doctor (0.8) is assigned per 1000 patients, which places a heavy workload on doctors in India, ranked 112th among 191 countries of the world. This research work designs a smart and secure healthcare service for rural areas to provide better healthcare and healthcare cost estimation, using Agile methodology with machine learning.

Shanu Verma, Rashmi Popli, Harish Kumar

Chapter 71. Pneumonia Detection Using MPEG7 for Feature Extraction Technique on Chest X-Rays

Pneumonia is a lung inflammatory condition that transpires due to excessive fluid in the air sacs (alveoli) of the lungs. It is typically caused by infection with viruses and bacteria such as Streptococcus pneumoniae, Haemophilus influenzae, and Legionella pneumophila. According to the World Health Organization (WHO), pneumonia was responsible for more than 0.8 million deaths in 2017. Chest X-rays are predominantly used for detecting pneumonia, but diagnosis requires a well-trained and experienced radiologist, so an automated system for diagnosing pneumonia with good distinguishing ability would be of great benefit, especially for people living in remote areas. Segmentation and other approaches can be used for detecting pneumonia, but they are computationally intensive. To improve on this, we use a dimensionality reduction strategy: principal component analysis (PCA) applied to image features extracted using MPEG-7. PCA allows us to extract the dominant features from the image in terms of principal components, after which, with the help of an ANN, we obtain a state-of-the-art solution on the CheXNet dataset.
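The MPEG-7 extraction step is out of scope here, but the PCA stage reduces to a standard SVD projection; in this sketch, random correlated 32-dimensional vectors stand in for the real descriptor features:

```python
import numpy as np

def pca(X, n_components):
    """Project feature vectors onto the top principal components,
    computed via SVD of the mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)  # variance ratio per component
    return Xc @ Vt[:n_components].T, explained[:n_components]

# Stand-in for MPEG-7 descriptor vectors of 100 chest X-rays
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32)) @ rng.normal(size=(32, 32))  # correlated
Z, ratio = pca(X, n_components=8)
```

The explained-variance ratios indicate how much of the descriptor variability the kept components retain, which guides the choice of `n_components` before the ANN stage.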

Abhishek Sharma, Nitish Gangwar, Ashish Yadav, Harshit Saini, Ankush Mittal

Chapter 72. Comparative Study of GANs Available for Audio Classification

GANs, a class of AI algorithms, consist of two neural networks, one acting as a generator and the other as a discriminator. Among the many applications of GANs, this paper focuses on a comparative study of GANs for audio classification. WaveGAN, SpecGAN, and SEGAN are the three most rapidly evolving GANs used in this domain. The very first attempt to synthesize an audio signal's raw waveform with GANs was made by WaveGAN. SpecGAN is based on a frequency-domain audio generation model in which the audio is represented as an image using the spectrogram (a time-frequency representation). Since the quality of speech often gets contaminated by various types of noise, SEGAN is used to improve both the quality and the intelligibility of contaminated speech. From the comparative study of audio GANs, we observe which among these three is the best approach to processing audio signals. In the future, more research can be directed at speech enhancement than at speech generation.

Suvitti, Neeru Jindal

Chapter 73. Extractive Summarization of EHR Notes

Electronic Health Records (EHRs) capture all information regarding the treatment of a patient. For patients with chronic diseases that need periodic interdepartmental evaluations, the volume of the EHR becomes very large, and clinicians find it difficult to locate all the required information within the constrained time. In such scenarios, automated EHR summarization can be very useful: a summary of the EHR can help healthcare providers see the most important clinical decisions and history details in brief. This helps save time, reduces medical errors like prescribing an already existing medication, and supports better clinical decision making. This paper reviews existing approaches to extractive summarization and proposes a technique to provide an extractive summary of a single EHR note.
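The proposed technique is not detailed in the abstract; a baseline frequency-based extractive scorer, often the starting point for such systems, can be sketched as follows (the clinical note text is invented):

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score each sentence by the average corpus frequency of its words
    and return the top sentences in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text)
                 if s.strip()]
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(s):
        toks = re.findall(r"[a-z]+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return [s for s in sentences if s in top]

note = ("Patient admitted with chest pain. Chest X-ray was normal. "
        "ECG showed ST elevation. Patient started on aspirin therapy. "
        "Patient discharged in stable condition.")
summary = extractive_summary(note, n_sentences=2)
```

Real EHR summarizers typically replace the raw frequency score with clinically weighted term scores (e.g. favoring diagnoses and medications), but the select-and-rank skeleton stays the same.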

Ajay Chaudhary, Merlin George, Anu Mary Chacko

Chapter 74. Feature Selection and Hyperparameter Tuning in Diabetes Mellitus Prediction

Diabetes mellitus, commonly called type-2 diabetes, is a common disease caused by inefficient production of insulin by the pancreas, which leads to a high concentration of glucose in the body. This paper focuses on feature extraction and parameter estimation on the Pima Indians diabetes dataset. Feature selection and hyperparameter tuning prove to be the best techniques on this type of dataset, and as a result the logistic regression algorithm achieves higher accuracy than the decision tree classifier, random forest classifier, and SVM.
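A hedged sketch of the feature-selection-plus-tuning pipeline with scikit-learn, using synthetic data in place of the Pima Indians dataset; the parameter grids and the choice of `f_classif` scoring are illustrative, not the paper's exact settings:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Synthetic stand-in: 8 "clinical" features, like the Pima dataset
X, y = make_classification(n_samples=400, n_features=8, n_informative=4,
                           random_state=1)

# Feature selection and the classifier are tuned jointly, so the chosen
# k is the one that helps logistic regression most under cross-validation
pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("clf", LogisticRegression(max_iter=1000))])
grid = GridSearchCV(pipe,
                    {"select__k": [2, 4, 6, 8],
                     "clf__C": [0.01, 0.1, 1, 10]},
                    cv=5)
grid.fit(X, y)
best_k = grid.best_params_["select__k"]
```

Putting the selector inside the pipeline matters: it is refit on each training fold, which avoids leaking test-fold information into the feature-selection step.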

Rashmi Arora, Gursheen Kaur, Pradeep Gulati

Chapter 75. A Correlational Diagnosis Prediction Model for Detecting Concurrent Occurrence of Clinical Features of Chikungunya and Zika in Dengue-Infected Patients

Vector-borne diseases are a major challenge for societies worldwide; every country suffers from them every year. This study is conducted to detect the concurrent occurrence of clinical features of Chikungunya and Zika in Dengue-infected patients, and proposes a correlational diagnosis prediction model for this purpose. In the proposed model, patient data is evaluated with the help of the proposed diagnosis algorithm to predict these diseases. An experimental environment is set up for evaluating the performance of the proposed model. In future work, this research can be extended to other vector-borne diseases.

Rajeev Kapoor, Sachin Ahuja, Virender Kadyan

Chapter 76. Image Filtering Using Fuzzy Rules and DWT-SVM for Tumor Identification

Image processing is an extensive area, with medical image processing as one of its applications. An MRI scan is the traditional technique for finding a brain tumor region, but MRI images alone are not enough to diagnose the tumor effectively. The most efficient and popular image segmentation method is fuzzy C-means, but its output contains some noisy parts. In this proposed work, a median filter is applied in the first stage to remove unwanted parts, and in the later stage the tumor is identified using the discrete wavelet transform with a Support Vector Machine; the median filtering improves the performance of the SVM classifier. The proposed method achieved an accuracy of 94.44%.
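The first-stage median filtering can be sketched in a few lines of NumPy; the 3×3 window and edge-replication padding are assumed choices, as the abstract does not specify them:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge replication; removes salt-and-pepper
    noise while preserving edges better than mean filtering."""
    padded = np.pad(img, 1, mode="edge")
    # Nine shifted views of the image, one per window position
    windows = [padded[r:r + img.shape[0], c:c + img.shape[1]]
               for r in range(3) for c in range(3)]
    return np.median(np.stack(windows), axis=0)

img = np.full((5, 5), 10.0)
img[2, 2] = 255.0                 # one impulse-noise pixel
clean = median_filter3(img)
```

A single bright outlier vanishes because the median of its nine-pixel neighbourhood is the background value, which is precisely why median filtering suits the noisy fuzzy C-means output described above.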

Rahul Dubey, Anjali Pandey

Chapter 77. Multi-Class Classification of Actors in Movie Trailers

A trailer is a short version of a movie which gives the viewer information such as its genre, its cast, and what the audience should expect from the movie. You should never judge a book by its cover, but you can always judge a movie by its trailer. In this paper, we aim to classify actors in movie trailers by extracting key frames, obtaining features from them through a convolutional neural network, and then classifying them using the output function. A trailer emphasizes the type of movie it is marketing. Actors are one of the most important aspects of a movie and play a decisive role when analyzing its overall popularity, and a trailer always gives a sneak peek of the actors that will play a crucial role. Recognizing actors in a movie trailer thus becomes an important task, as many viewers pre-judge a movie based on the actors that glorify its cast.

Prashant Giridhar Shambharkar, Gaurang Mehrotra, Kanishk Singh Thakur, Kaushal Thakare, Mohammad Nazmud Doja

Chapter 78. Analysis of Machine Learning and Deep Learning Approaches for DDoS Attack Detection on Internet of Things Network

With the advancement of technology, the Internet of things (IoT) has become a new way for smart machines and platforms to communicate independently. This rapid growth raises serious concerns about safety, particularly security and privacy. Vulnerabilities arising from constraints such as restricted resources, weak authentication, unchanged default passwords and the absence of important security protocols allow malicious entities to gain access to authorized devices and mount various attacks. A distributed denial of service (DDoS) attack targets servers to shut them down partially or completely by flooding them with internet traffic. This paper examines the distributed DoS attack in IoT and analyses the work done in its defense. We discuss the concept of botnets, which are used to launch DDoS attacks, and the motivation for compromising IoT devices to build them. We broadly describe various DDoS defense techniques based on machine learning and deep learning and compare them to identify the security gaps that remain. We then suggest a four-stage defense mechanism for mitigating DDoS attacks. Finally, we list the open challenges and research issues that must be tackled to build stronger and more intelligent DDoS defenses.
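A minimal sketch of a machine-learning DDoS detector of the kind surveyed here, trained on synthetic per-flow features; the chosen features (packet rate, mean packet size, source-IP entropy) and all distributions are assumptions for illustration, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def synth_flows(n, ddos):
    """Synthetic per-flow features. Flood traffic is modelled as high packet
    rates, small uniform packets and high source-IP entropy (spoofing)."""
    if ddos:
        rate = rng.normal(9000, 1500, n)
        size = rng.normal(80, 10, n)
        entropy = rng.normal(7.5, 0.4, n)
    else:
        rate = rng.normal(300, 120, n)
        size = rng.normal(700, 250, n)
        entropy = rng.normal(3.0, 0.8, n)
    return np.column_stack([rate, size, entropy]), np.full(n, int(ddos))

Xa, ya = synth_flows(200, ddos=False)
Xb, yb = synth_flows(200, ddos=True)
X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print("detection accuracy:", acc)
```

In a layered defense, a classifier like this would sit at one stage (traffic classification), with other stages handling rate limiting, source filtering and mitigation.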

Aman Kashyap, Ankit Kumar Jain

Chapter 79. Image Retrieval Systems: From Underlying Feature Extraction to High Level Intelligent Systems

In this digital era, profound amounts of complex images are being produced owing to advances in image-capturing devices, so there is a huge demand for efficient systems for indexing and retrieving them. Content-based image retrieval (CBIR) has been an active and promising research field in image retrieval and processing. A CBIR system aims to retrieve the most appropriate and visually similar images from large databases by extracting low-level features of the images, such as color, edge and texture, through various extraction techniques. This paper analyzes the basic CBIR system and the achievements obtained in this area, mainly in feature extraction, indexing and intelligent CBIR systems. Most current research focuses on developing advanced and intelligent CBIR systems using deep learning algorithms, including convolutional neural networks, autoencoders and long short-term memory networks, so that system accuracy can be improved. Finally, the paper also provides our insights and the challenges for future research.
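A minimal sketch of low-level-feature CBIR, assuming color histograms as the feature and histogram intersection as the similarity measure (illustrative only, not the systems surveyed here); the database of synthetic images is invented for demonstration.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalised per-channel intensity histogram -- a classic low-level CBIR feature."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(feats).astype(float)
    return h / h.sum()

def retrieve(query, database, top_k=3):
    """Rank database images by histogram-intersection similarity to the query."""
    q = color_histogram(query)
    sims = [np.minimum(q, color_histogram(img)).sum() for img in database]
    return np.argsort(sims)[::-1][:top_k]

rng = np.random.default_rng(0)
query = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
# Database: nine images each confined to a narrow intensity band, plus one
# near-duplicate of the query with small pixel noise.
database = [rng.integers(lo, lo + 32, (32, 32, 3), dtype=np.uint8)
            for lo in range(0, 216, 24)]
near_dup = np.clip(query.astype(int) + rng.integers(-3, 4, query.shape),
                   0, 255).astype(np.uint8)
database.append(near_dup)
ranking = retrieve(query, database)
print("top matches:", ranking)   # the near-duplicate (index 9) ranks first
```

The intelligent CBIR systems discussed in the paper replace the hand-crafted histogram with learned deep features, but the indexing-and-ranking skeleton stays the same.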

Shefali Dhingra, Poonam Bansal

Chapter 80. A Combined Model of ARIMA-GRU to Forecast Stock Price

Deep learning has gained more interest and attention than machine learning in recent years and has been successfully applied to many real-world problems. Stock market forecasting is among the most challenging areas of time series forecasting. In this paper, we propose the use of a deep learning technique for stock price forecasting and compare it with a machine learning technique. We present two hybrid models: auto-regressive integrated moving average with a support vector machine (ARIMA-SVM) and auto-regressive integrated moving average with a gated recurrent unit (ARIMA-GRU). The closing stock prices act as inputs to the ARIMA model, which outputs their residuals. The residuals, along with the original closing prices, are used to compute new closing prices that are fed to the deep learning and machine learning models, respectively, to predict the next day's closing price. Our experiments show that ARIMA-GRU achieves better accuracy than ARIMA-SVM for long-term forecasting of closing prices on a large dataset comprising three different datasets taken from the Indian National Stock Exchange.
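A rough sketch of the hybrid idea (not the authors' implementation): a linear stage models the series and a nonlinear stage models its residuals. Here the linear part is an AR model fitted by least squares, and a small MLP stands in for the GRU; the price series and every parameter are synthetic assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "closing price": trend + seasonal component + noise.
t = np.arange(400)
price = 100 + 0.05 * t + 5 * np.sin(t / 10) + rng.normal(0, 0.5, 400)

# Stage 1: an AR(2) model (stand-in for the linear part of ARIMA),
# fitted by least squares on lagged prices.
p = 2
A = np.column_stack([price[i:len(price) - p + i] for i in range(p)]
                    + [np.ones(len(price) - p)])
coef, *_ = np.linalg.lstsq(A, price[p:], rcond=None)
linear_pred = A @ coef
residuals = price[p:] - linear_pred

# Stage 2: a nonlinear model learns structure left in the residuals from
# their own lags (an MLP here as a small stand-in for the GRU).
q = 5
Xr = np.column_stack([residuals[i:len(residuals) - q + i] for i in range(q)])
yr = residuals[q:]
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(Xr[:-50], yr[:-50])

# Hybrid forecast = linear prediction + predicted residual.
hybrid = linear_pred[q:] + mlp.predict(Xr)
rmse_linear = np.sqrt(np.mean((price[p + q:] - linear_pred[q:]) ** 2))
rmse_hybrid = np.sqrt(np.mean((price[p + q:] - hybrid) ** 2))
print(f"AR-only RMSE: {rmse_linear:.3f}, hybrid RMSE: {rmse_hybrid:.3f}")
```

The paper's ARIMA-GRU follows the same decomposition, with ARIMA supplying the residual series and a recurrent network, rather than an MLP, modelling it.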

Sangeeta Saha, Neema Singh, Biju R. Mohan, Nagaraj Naik

Backmatter
