
2018 | Book

Information and Communication Technology for Sustainable Development

Proceedings of ICT4SD 2016, Volume 2

Edited by: Dr. Durgesh Kumar Mishra, Dr. Malaya Kumar Nayak, Amit Joshi

Publisher: Springer Singapore

Book series: Lecture Notes in Networks and Systems


About this book

The book proposes new technologies and discusses future solutions for the design of ICT infrastructure. It contains high-quality submissions presented at the Second International Conference on Information and Communication Technology for Sustainable Development (ICT4SD 2016), held in Goa, India, during 1-2 July 2016. The conference stimulated cutting-edge research discussions among pioneering academic researchers, scientists, industrial engineers, and students from around the world. The topics covered in this book also address innovative issues at the international level by bringing together experts from different countries.


Green IT and Environmental Sustainability Issues

Climate change, global warming, and environmental sustainability are among the major challenges the world faces today, and global rules and policies are urgently required to ensure environmental sustainability. Green IT is a genuine effort, initiated from within computer science, toward a sustainable environment: a worldwide movement that lays the foundations for the proficient use of computers and similar computing resources so as to produce minimal carbon emissions. Because of its direct impact on the environment, Green IT has also emerged as a research topic of prime importance. In this paper, we study and quantify these varied impacts, and we propose an action plan to diminish them by changing the design and development strategies of various Silicon Valley products. The stated action plan is also verified on a sample test case study in this manuscript.

Priyanka Paliwal, Divya Kumar
Design and Simulation of Microstrip Array Antenna with Defected Ground Structure Using ISM Band for Bio Medical Application

Various techniques, such as mammography, MRI, and ultrasound, are used to detect malignant cancerous tissues in the breast, but these technologies are uncomfortable during examination. In microwave breast imaging, scattered signals generated after the transmission of microwave signals cause a loss of information. Therefore, a 1 × 4 array antenna is designed to detect tumors at an early stage, and this array antenna is more specific in detecting malignant tissues. The design incorporates a Defected Ground Structure, which increases the efficiency of the model. The model operates in the ISM (Industrial, Scientific, and Medical) band at 2.4 GHz, an unlicensed frequency band.

Deepali Chalak, Sujit Dharmapatre
Design and Implementation of a 32-Bit Incrementer-Based Program Counter

The paper presents the design and implementation of two 32-bit program counters: the one used in a DLX-RISC processor that employs the Tomasulo algorithm for out-of-order execution, and a self-designed counter based on incrementer logic, implemented on a Virtex-7 FPGA board. The results for power, delay, and area were compared in order to obtain the optimal program counter. The power-delay product (PDP) of the incrementer-based design was found to be 94.4% less than that of the program counter used in the DLX-RISC processor. The improved program counter thus enhances the overall performance of any processor it is used in, as power and delay are substantially reduced in the proposed design. The designs are simulated and synthesized in Xilinx Vivado 2015.4 using VHDL and are implemented on the Virtex-7 FPGA.

Nathaniel Albuquerque, Kritika Prakash, Anu Mehra, Nidhi Gaur
Application of Remote Sensing Technology, GIS and AHP-TOPSIS Model to Quantify Urban Landscape Vulnerability to Land Use Transformation

This study demonstrates the efficacy of remote sensing technology, GIS, and an AHP-TOPSIS model in quantifying the vulnerability of different segments of an urban landscape to land use transformation. Six factors were identified as inputs to the AHP-TOPSIS model: Accommodating Index (AI), Mean Heterogeneity Index (MHI), Landscape Shape Index (LSI), Division Index (DI), Cohesion Index (CI), and Distance (D). Landsat 8 satellite data were classified using supervised classification to determine these factors. Because the influencing factors may vary in the intensity with which they trigger land use conversion, their relative importance was quantified using AHP. Furthermore, since the influence of a factor on the phenomenon can be either positive or negative, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was employed in the present study. The results succeeded in handling the varying characteristics of the variables and are very close to the actual field scenario.

Alok Bhushan Mukherjee, Akhouri Pramod Krishna, Nilanchal Patel
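
The TOPSIS ranking step described above can be sketched as follows; the decision matrix, weights, and benefit/cost flags below are toy values for illustration, not the paper's landscape indices or AHP weights:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: (n_alternatives, n_criteria); weights should sum to 1;
    benefit[j] is True if higher is better for criterion j."""
    m = np.asarray(matrix, dtype=float)
    # Vector-normalise each criterion column, then apply the (AHP) weights.
    norm = m / np.sqrt((m ** 2).sum(axis=0))
    v = norm * np.asarray(weights)
    # Ideal best/worst depend on whether the criterion is benefit or cost.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_worst = np.sqrt(((v - anti) ** 2).sum(axis=1))
    # Relative closeness to the ideal solution: 1 = best, 0 = worst.
    return d_worst / (d_best + d_worst)

# Hypothetical example: three landscape segments scored on two benefit
# criteria and one cost criterion.
scores = topsis([[7, 9, 3], [8, 5, 6], [4, 6, 2]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, False])
ranking = np.argsort(-scores)  # most vulnerable segment first
```

This is the standard TOPSIS formulation; the paper's contribution lies in the choice of landscape factors and in deriving the weights with AHP.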
Enhanced Feature Selection Algorithm for Effective Bug Triage Software

When developing any software application or product, it is necessary to find bugs during development. Bug reports are generated at every phase of testing, and much time is spent on fixing bugs; software companies spend about 45% of their cost on bug fixing. One essential technique here is bug triage, the process whose main objective is to appropriately allocate a developer to a new bug for further handling. Traditionally, manual work is needed every time a bug report is generated, after which text categorization methods are applied to perform automatic bug triage. Existing systems face the problem of data reduction when fixing bugs automatically; therefore, a method is needed that reduces the scale and improves the quality of bug data. Traditional systems use the CH method for feature selection, which does not give accurate results. This paper therefore proposes a feature selection method based on the Kruskal method, combining instance selection and feature selection algorithms to simultaneously reduce the data scale and enhance the accuracy of bug reports in bug triage. The Kruskal method removes noisy words from the dataset and can reduce the accuracy loss caused by instance selection.

Jayashri C. Gholap, N. P. Karlekar
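
The abstract's "Kruskal method" is not fully specified; one plausible reading is scoring each term by the Kruskal-Wallis H test across developer classes, sketched below on synthetic data (the feature matrix and class labels are invented for illustration):

```python
import numpy as np
from scipy.stats import kruskal

def kruskal_feature_scores(X, y):
    """Score each feature by the Kruskal-Wallis H statistic across the
    classes in y; a higher H means the feature separates classes better."""
    scores = []
    for j in range(X.shape[1]):
        groups = [X[y == c, j] for c in np.unique(y)]
        h, _ = kruskal(*groups)
        scores.append(h)
    return np.array(scores)

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)            # two hypothetical developer classes
X = rng.random((100, 3))             # three term-weight features
X[y == 1, 0] += 2.0                  # make feature 0 informative

scores = kruskal_feature_scores(X, y)
top = np.argsort(-scores)[:1]        # keep only the best-scoring feature
```

Features ranked this way could then feed the instance selection stage; the combination of both reductions is the paper's actual contribution.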
Reliable Data Delivery on the Basis of Trust Evaluation in WSN

Years of research have produced diverse applications in the field of Wireless Sensor Networks (WSNs). At present, Low-power and Lossy Networks (LLNs) are a central area of study. LLNs comprise wireless personal area networks, low-power line communication networks, and wireless sensor networks. In such LLNs, the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), standardized by the Internet Engineering Task Force (IETF), is used for sending protected data. The rank-based RPL protocol creates a route to the sink based on the rank value and link quality of a node's neighbors, but it has a few drawbacks, for example high packet loss rates, increased latency, and very low security. Packet losses and latency grow as the path length (in hops) increases, because at every hop a wrong parent link may be chosen. To improve RPL and solve the issues stated above, a Trust Management System is proposed. Every node has a trust value that increases or decreases according to the node's behavior, and a trusted path is selected for delivering the data. To increase energy efficiency, a "compress then send" method is used during data transfer, which reduces the data size and thus minimizes the energy used. Data security, a key concern, is achieved using cryptography. Test outcomes on the JUNG simulator show that, compared to the present system, our proposed system increases the packet delivery ratio through trust management, improves energy efficiency through data compression, and improves network security through the encryption-decryption method.

Deepak Gadde, M. S. Chaudhari
K-Mean Clustering Algorithm Approach for Data Mining of Heterogeneous Data

The increasing rate of heterogeneous data gives us new terminology for data analysis and data extraction. With a view toward analyzing heterogeneous data sources, we consider the challenging task of developing exploratory analytical techniques for clustering heterogeneous data spanning domains such as categorical, numerical, and binary attributes, or a combination of these. In this paper, we propose a framework for analyzing and mining heterogeneous data from multiple heterogeneous data sources. Clustering algorithms typically recognize only homogeneous attribute values; however, data in every field occurs in heterogeneous forms, and converting it from heterogeneous to homogeneous form can cause loss of information. We apply the K-means clustering algorithm to real-life heterogeneous datasets and analyze the results in the form of clusters.

Monika Kalra, Niranjan Lal, Samimul Qamar
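
One common way to run K-means over mixed attribute domains, as the framework above requires, is to map each domain into a numeric space first. A hedged scikit-learn sketch with hypothetical records (the columns, values, and cluster count are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical mixed records: [age (numeric), income (numeric), segment (categorical)]
rows = np.array([[25, 30, "retail"], [27, 32, "retail"],
                 [60, 90, "corporate"], [62, 95, "corporate"]], dtype=object)

# Encode the heterogeneous columns into one numeric space: scale the numeric
# columns, one-hot encode the categorical one, then run plain K-means.
prep = ColumnTransformer([
    ("num", StandardScaler(), [0, 1]),
    ("cat", OneHotEncoder(), [2]),
])
model = make_pipeline(prep, KMeans(n_clusters=2, n_init=10, random_state=0))
labels = model.fit_predict(rows)
```

Note the information-loss caveat from the abstract: one-hot encoding treats all category differences as equidistant, which is exactly the kind of conversion artifact the paper discusses.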
User Authentication System Using Multimodal Biometrics and MapReduce

Establishing a person's identity through individual biometric features has become a necessity in today's technologically advancing world. Owing to the rise in data theft and identity hijacking, there is a critical need to provide user security using biometric authentication techniques. Biometrics is the science of recognizing a person by evaluating the distinguishing physiological and biological traits of that person. A unimodal biometric system is known to have many disadvantages with regard to accuracy, reliability, and security; multimodal biometric systems combine more than one biometric trait to identify a person in order to increase the security of the application. The proposed multimodal biometric system combines three biometric traits for individual authentication, namely face, fingerprint, and voice. MapReduce is the technique used for analyzing and processing big datasets that cannot fit into memory.

Meghana A. Divakar, Megha P. Arakeri
Fake Profile Identification on Facebook Through SocialMedia App

In today's world, almost everyone is connected to online social networks. These sites have drastically changed the way we pursue our social lives, but with the rapid growth of social networks, problems like fake profiles and online impersonation have also grown. Recent reports indicate that OSNs are overrun with an abundance of fake user profiles, which may threaten users' security and privacy. In this paper, we propose a model to identify potentially fake users on the basis of their activities and profile information. To show the effectiveness of our model, we developed a Facebook canvas application called "SocialMedia" as a proof of concept. We also conducted an online evaluation of our application among Facebook users to show the usability of such apps. The evaluation reported that the app successfully identified possible fake friends with an accuracy of 87.5%.

Priyanka Kumari, Nemi Chandra Rathore
Disease Inference from Health-Related Questions via Fuzzy Expert System

Automatic disease inference is vital to shortening the gap for patients seeking online remedies for health-related issues. Persistent problems in the offline medium include the hectic schedules of doctors engrossed in their workload, which keep them from supervising all health-related aspects of the patients seeking advice, and community-based services that may be trivial in nature owing to factors such as vocabulary gaps, incomplete information, and a lack of preprocessed samples, all of which limit disease inference. We therefore support users with the proposed expert system, which addresses these underlying challenges. It is an iterative process that works on multiple symptoms and compiles the overall symptoms and causes required for disease inference. First, symptoms are mapped from extracted raw features. Second, fuzzy inference is made from weight-based training and catalyst factors. Such fuzzy-based expert systems will be a boon for online health patrons who seek health-related and diagnostic information.

Ajay P. Chainani, Santosh S. Chikne, Nikunj D. Doshi, Asim Z. Karel, Shanthi S. Therese
Spectral Biometric Verification System for Person Identification

Automatic person identification is possible through many biometric techniques, which provide easy solutions for identification and verification. However, there are chances of spoofing attacks against a biometric system: biometric devices can be deceived artificially, for example by a plastic palmprint or a copied medium providing a false biometric signature. Existing biometric technology can therefore be enhanced with a spectroscopy method. In this paper, an ASD FieldSpec 4 Spectroradiometer is used to overcome this problem, since the palmprint spectral signature of every person is unique in nature. Preprocessing, including smoothing, was applied to the palmprint spectra to remove noise. Statistical analysis was performed on the preprocessed spectra; FAR (False Acceptance Rate) and FRR (False Rejection Rate) values were obtained against different threshold values, and the equal error rate (EER) was acquired. The EER of the system is approximately 12% at a verification threshold of 0.12.

Anita Gautam Khandizod, Ratnadeep R. Deshmukh, Sushma Niket Borade
A Proposed Pharmacogenetic Solution for IT-Based Health Care

Health care is a vast domain with a large-scale effect on the population. It has been facing critical issues of safety, quality, and high costs. Technical innovations in health care over the last decade have led to the emergence of various computational, storage, and analysis tools and techniques that are high quality, easily accessible, and cost-effective. In this paper, we summarize the emerging trends of IT in the medical domain. Further, we propose a pharmacogenetic solution for health care that can act as an aid to customized medicine.

Rashmeet Toor, Inderveer Chana
Approach to Reduce Operational Risks in Business Organizations

Identifying and managing risks in business processes is a major concern for almost all organizations. Risks can pose a major threat to an organization, hamper its reputation, and cause financial loss, so organizations need ways to deal with various types of risk. This paper provides an approach to dealing with operational risks that reduces the time to resolve an incident. The Robobank dataset has been used to determine the risks in their incident and problem management systems, and an algorithm has been proposed to minimize these risks.

Heena Handa, Anchal Garg, Madhulika
Web-Based Condition and Fault Monitoring Scheme for Remote PV Power Generation Station

Photovoltaic power generation stations are located at remote places and are unmanned. Electronic power processing circuitry plays a vital role in the conversion of the electrical energy generated by PV modules, and the components of such circuitry work under stress and are prone to failure for numerous reasons. Considering condition monitoring and fault monitoring issues, this communication proposes a web-based condition and fault monitoring scheme for PV power generation systems. The scheme monitors the present condition of the station and the nature of any fault that occurs. A CAN-based controller is suggested to transmit data to a master controller interfaced with a dedicated server. The master station communicates recent information to the dedicated server, which sends a web page to a remote node on request. The nature and type of fault in any equipment or device can be displayed on the web page. Further, the scheme can be used to study the daily performance of the PV power generation station.

Shamkumar Chavan, Mahesh Chavan
Handling User Cold Start Problem in Recommender Systems Using Fuzzy Clustering

Recommender engines have become extremely important in recent years because the number of people using the Internet for diverse purposes is growing at an overwhelming speed. Different websites build recommender systems using techniques such as content-based filtering, collaborative filtering, or hybrid filtering. Recommender engines face various challenges, including scalability, the cold start problem, and sparsity issues. The cold start problem arises when there is not enough information about a user who has recently logged on to the system, so no proper recommendations can be made. This paper proposes a novel approach that applies the fuzzy c-means clustering technique to address the user cold start problem. A comparison between fuzzy c-means clustering and the traditional k-means clustering method on different sets of users shows that the accuracy of the fuzzy c-means approach is better than that of k-means for larger datasets.

Sugandha Gupta, Shivani Goel
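
As an illustration of the fuzzy c-means technique the paper applies to cold-start users, here is a minimal NumPy sketch; the toy 2-D user profiles and all parameter values are illustrative, not from the paper:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns the membership matrix U (n x c),
    whose rows sum to 1, and the cluster centers. m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distance of every point to every center (epsilon avoids div-by-zero).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))      # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Two obvious user groups in a toy 2-D rating-profile space.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
U, centers = fuzzy_c_means(X, c=2)
```

A cold-start user then receives soft memberships in each user cluster, and recommendations can be drawn from the preferences of the cluster(s) with the highest membership, rather than the hard single-cluster assignment k-means would give.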
Text Extraction from Images: A Review

Multimedia, natural scenes, and images are sources of textual information. Text extracted from these sources can be used for automatic image and video indexing and for image structuring. However, variations in text style, size, alignment, and orientation, together with low image contrast and complex backgrounds, make text extraction challenging. In recent years, many text extraction methods have been proposed. This paper analyzes and compares the performance of various methods used to extract textual information from images, summarizing the methods and the factors affecting their performance.

Nitin Sharma, Nidhi
Cross-Domain Sentiment Analysis Employing Different Feature Selection and Classification Techniques

A principal task of information gathering has been to find out people's opinions. Sentiment analysis is the task of discerning the polarity of given content, dichotomized into two categories: positive and negative. Sentiment analysis operates on huge feature sets of unique terms using the bag-of-words (BOW) approach, in which case discrete attributes do not give factual information; this necessitates eliminating extraneous and inconsequential terms from the feature set. Another challenge is that, most of the time, the training data may not come from the particular domain for which the test data must be analyzed. This set of challenges is addressed by probing feature selection (FS) methods in cross-domain sentiment analysis; the benefit of cross-domain analysis with feature selection lies in significantly lower computational power and processing time. The informative features chosen are used to train the classifiers, whose classification performance is investigated in terms of accuracy. Experiments with FS methods (IG, GR, CHI, SAE) were performed on standard datasets, namely the Amazon product review dataset and the TripAdvisor dataset, with NB, SVM, DT, and KNN classifiers. The paper examines different techniques by which cross-domain analysis succeeds as a more algorithmically efficient method, despite the lower accuracy caused by the difference in domains.

Chakshu Ahuja, E. Sivasankar
Efficient Facial Expression Recognition System Based on Geometric Features Using Neural Network

In this paper, a facial expression recognition (FER) system using eigenvectors is presented to recognize expressions from facial images. Euclidean distance, one of the distance metric approaches, is used to measure the distances between the facial features associated with each face image. A comprehensive, efficient model using a multilayer perceptron has been developed whose input is a 2D facial spatial feature vector incorporating the left eye, right eye, lips, nose, and lips and nose together. The expression recognition accuracy of the proposed multilayer perceptron model has been compared with a J48 decision tree and a support vector machine. The final results show that the designed model is very effective in recognizing six facial emotions, with a recognition rate far better than that of J48 and the support vector machine.

Ankita Tripathi, Shivam Pandey, Hitesh Jangir
Extended Four-Step Travel Demand Forecasting Model for Urban Planning

For years, the traditional four-step travel demand forecasting model has helped policy makers take decisions regarding transportation programs and projects for metropolitan regions. The traditional model predicts the number of vehicles of each mode of transportation between traffic analysis zones (TAZs) over a period of time, but it does not suggest a suitable region where a transportation project should be deployed. This paper therefore extends the traditional four-step model with one more step, which suggests the zones in greatest need of a highway transportation program. The results of the added step are based on the severity of the traffic load.

Akash Agrawal, Sandeep S. Udmale, Vijay K. Sambhe
Authentication Framework for Cloud Machine Deletion

Digital investigation on the cloud platform is a challenging task today. As the cloud follows a pay-as-you-demand notion, people are attracted to adopting this technology, but an unclear understanding of control, trust, and transparency underlies its limited adoption by many companies. In cloud investigations it is hard to collect strong evidence because resource allocation and dynamic network policies are facilitated through on-demand request fulfillment. Forensic analysis is therefore difficult in such a virtualized environment, because the state of the system changes frequently; even preventing the deletion of a machine on the cloud is a tough task. This paper covers the theoretical concepts of the cloud along with the challenges presented in the NIST guidelines. We explore the existing frameworks and their loopholes, and suggest some possible solutions that can serve as a roadmap for future forensic analysis.

Pooja Dubey, Vineeta Tiwari, Shweta Chawla, Vijay Chauhan
Predicting the Size of a Family Based upon the Demographic Status of that Family

The size of a family is strongly affected by demographic factors such as the education and occupation of family members; socio-economic status and family type also play a major role in determining family size. Data from more than five hundred families was studied to find an association between family size and the factors listed above. A neural network was trained using TRAINGD and LEARNGDM as the training and learning adaptive functions, respectively. Later, a GUI was developed for easy prediction of family size.

Kanika Bahl, Rohitt Sharma
A New Approach Toward Sorting Technique: Dual-Sort Extraction Technique (DSET)

There is a huge demand for efficient and highly scalable sorting techniques to analyze data. Researchers have proposed various sorting algorithms, such as heap sort, merge sort, and bubble sort; some, like heap sort and merge sort, are very popular for achieving efficient data sorting to a great extent. The increase in the amount of data has also increased the complexity of sorting algorithms, leading to decreased speed and efficiency. For this reason, a new sorting technique named the "Dual-Sort Extraction Technique (DSET)", with two levels of data sorting and extraction, is proposed, which enhances the performance and efficiency of the algorithm.

Darpan Shah, Kuntesh Jani
Student’s Performance Evaluation of an Institute Using Various Classification Algorithms

Machine learning is the field of computer science that learns from data by studying algorithms and their constructions. Evaluating student performance with the slow-learner method plays a significant role in nourishing the skills of students with slow learning ability. The performance of the Digital Electronics students of the University Institute of Engineering and Technology (UIET), Panjab University (PU), Chandigarh is calculated by applying two important supervised classification algorithms: Multilayer Perceptron and Naïve Bayes. A comparison between these classification algorithms is then made using the WEKA tool. The accuracy of grade prediction is calculated with these algorithms, and a graphical explanation is presented for the BE (Information Technology) third-semester students.

Shiwani Rana, Roopali Garg
Speaker Identification in a Multi-speaker Environment

Human beings are capable of performing unfathomable tasks; for instance, a human being is able to focus on a single person's voice in an environment of simultaneous conversations. We have tried to emulate this particular skill with an artificial intelligence system. Our system first identifies an audio file as a single- or multi-speaker file and then recognizes the speaker(s). Our approach is to first preprocess the audio (input) file, which is subjected to noise reduction and silence removal, framing, windowing, and DCT calculation, all of which are used to extract its features. The Mel Frequency Cepstral Coefficients (MFCC) technique was used for feature extraction, and the extracted features are then used to train the system via neural networks using the Error Back Propagation Training Algorithm (EBPTA). Among the many applications of our model are biometric systems such as telephone banking, authentication, and surveillance.

Manthan Thakker, Shivangi Vyas, Prachi Ved, S. Shanthi Therese
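
The preprocessing chain described above (framing, windowing, DCT) can be sketched as follows. This is a simplified illustration with assumed frame sizes and sampling rate, and it omits the mel filterbank that a full MFCC implementation would apply before the log:

```python
import numpy as np
from scipy.fft import dct

def mfcc_like_features(signal, frame_len=400, hop=160, n_coeff=13):
    """Frame the signal, apply a Hamming window, take the log power
    spectrum, and decorrelate with a DCT, keeping n_coeff coefficients.
    (A full MFCC pipeline inserts a mel filterbank before the log.)"""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hamming(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        frames.append(dct(np.log(power + 1e-10), norm="ortho")[:n_coeff])
    return np.array(frames)

# 1 s of a 440 Hz test tone at 16 kHz stands in for real speech.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc_like_features(sig)  # one 13-dimensional vector per frame
```

The resulting per-frame feature vectors are the kind of input the abstract's back-propagation-trained neural network would consume.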
Fault Detection and Classification Technique for HVDC Transmission Lines Using KNN

In this paper, we introduce a novel fault detection and classification technique for high-voltage DC transmission lines using K-nearest neighbours. The algorithm makes use of the rectifier-end AC RMS voltage and the DC line voltage and current measured at both poles. These signals are generated using PSCAD/EMTDC and are further analysed and processed using MATLAB. For fault detection, the signals are continuously monitored to identify the instant of fault occurrence, whereas for fault classification, the standard deviations of the data over a half cycle (with respect to the AC signal) before and after the fault inception instant are evaluated. The algorithm thus makes use of single-end data only, sampled at 1 kHz. The technique has proven to be 100% accurate and is hence reliable.

Jenifer Mariam Johnson, Anamika Yadav
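
A hedged sketch of the classification step: a K-nearest-neighbours model trained on half-cycle standard-deviation features. The feature values, class labels, and cluster parameters below are invented for illustration; the paper derives its features from PSCAD/EMTDC simulations:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: each row holds the standard deviations of the
# four monitored signals over a half cycle around fault inception.
rng = np.random.default_rng(1)
X_pos = rng.normal(5.0, 0.5, (20, 4))   # fictitious positive-pole signatures
X_neg = rng.normal(1.0, 0.5, (20, 4))   # fictitious negative-pole signatures
X = np.vstack([X_pos, X_neg])
y = np.array(["pos_pole"] * 20 + ["neg_pole"] * 20)

# KNN assigns a new fault to the majority class among its nearest neighbours.
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
pred = knn.predict([[4.8, 5.1, 5.0, 4.9]])
```

With well-separated signatures like these, KNN needs no explicit model of the fault physics, which is part of its appeal for this task.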
Retinal Disease Identification by Segmentation Techniques in Diabetic Retinopathy

Detection of microaneurysms at an early stage is essential in diabetic retinopathy (DR), a complication of diabetes that damages the eye and is thus a serious medical problem. Because it shows no early symptoms, it can only be diagnosed by an oculist. This paper presents a study and review of the various techniques used for detecting microaneurysms, along with a new approach to increase sensitivity and reduce the computational time for detecting and classifying microaneurysms in diabetic retinopathy images. The new strategy for detecting MAs is based on: (1) eliminating the nonuniform part of the image and standardizing the grayscale content of the original image; (2) performing morphological operations to detect the Region of Interest (ROI) and eliminate blood vessels; (3) extracting two features to identify real MAs, a shape feature that discriminates normal from abnormal eye images and a texture feature, which increases sensitivity as well as availability; and (4) using different clustering algorithms to discriminate normal and abnormal images. The publicly available DiaretDB1 database is used throughout this process.

Priyanka Powar, C. R. Jadhav
High Dimensionality Characteristics and New Fuzzy Versatile Particle Swarm Optimization

Technological developments have reshaped scientific thinking, since the observations from experiments and the real world are massive. Each experiment is able to produce information about a huge number of variables (high dimensionality). The unique characteristics of high dimensionality impose various challenges on traditional learning methods. This paper presents the problems produced by high dimensionality and proposes a new fuzzy versatile binary PSO (FVBPSO) method. Experimental results show the curse of dimensionality and the merits of the proposed method on benchmark datasets.

Shikha Agarwal, Prabhat Ranjan
E-commerce Recommendation System Using Improved Probabilistic Model

Recommender systems are the backbone of e-commerce marketing strategies. Popular e-commerce websites use techniques like memory-based collaborative filtering based on user similarity, with only the rank as an attribute. This paper proposes a model-based collaborative filtering recommender system built on a probabilistic model using an improved Naive Bayes algorithm. The proposed system uses Naive Bayes with a bigram language model to improve search query analysis; accordingly, the search query, click time, and query time are used as features for the Naive Bayes model. The model is trained on data from 1.2 million customers over a 3-month period covering 1.8 million products. The proposed system predicts the probability of products, and products are recommended to the user as top-N recommendations. Results show that the model recommends products with 14% higher accuracy compared to the simple Naive Bayes model.

Rahul S. Gaikwad, Sandeep S. Udmale, Vijay K. Sambhe
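
A minimal sketch of the core idea, a Naive Bayes classifier over a bigram representation of search queries, using scikit-learn. The queries, clicked categories, and category names are hypothetical; the paper additionally uses click time and query time as features:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy search queries and the product category the user went on to click.
queries = ["red running shoes", "trail running shoes",
           "laptop sleeve 13 inch", "gaming laptop cooler"]
clicked = ["footwear", "footwear", "electronics", "electronics"]

# Unigram + bigram counts feed a multinomial Naive Bayes classifier.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    MultinomialNB(),
)
model.fit(queries, clicked)
pred = model.predict(["running shoes sale"])
```

At serving time, the per-category probabilities from `predict_proba` would be ranked to produce the top-N recommendations the abstract describes.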
Extraction of Top-k List by Using Web Mining Technique

Nowadays, finding relevant and desired information in less time is crucial; the problem, however, is that only a very small proportion of the data on the Internet is interpretable and meaningful, and it takes a lot of time to extract. This paper provides a solution by extracting information from top-k web pages, which contain the top-k instances of a subject, for example "top 5 football teams in the world". In comparison with other structured information, such as web tables, top-k lists contain high-quality information, which can be used to enhance open-domain knowledge bases (which can support search or fact-answering applications). The proposed system extracts the top-k list using a title classifier, parser, candidate picker, ranker, and content processor.

Priyanka Deshmane, Pramod Patil, Abha Pathak
Improving the Accuracy of Recommender Systems Through Annealing

Matrix factorization is a scalable approach used in recommender systems that deals with the problem of sparse rating matrices in datasets. The learning rate parameter in matrix factorization is used by numerical methods like stochastic gradient descent, and it affects the accuracy of the system. In this paper, we make use of annealing schedules that adjust the value of the learning rate. Five annealing schedules, namely exponential annealing, inverse scaling, logarithmic cooling, linear multiplicative cooling, and quadratic multiplicative cooling, have been used to vary the learning rate and thus the accuracy of our recommender system. Experimental results on MovieLens datasets of different sizes show that the minimum mean absolute error for the system is obtained by exponential annealing at lower learning rate values and by linear multiplicative cooling at higher learning rate values. Apache Mahout 0.9 is chosen as the platform for conducting the experiments.

Shefali Arora, Shivani Goel
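
A sketch of one of the five schedules, exponential annealing, driving the learning rate of an SGD matrix factorization. The rating matrix, rank, and hyperparameters are toy values, not the paper's MovieLens/Mahout setup:

```python
import numpy as np

def exponential_annealing(lr0, decay, step):
    """Exponential annealing schedule: lr(t) = lr0 * decay**t."""
    return lr0 * decay ** step

# Tiny SGD matrix factorization; 0 entries are treated as unrated.
rng = np.random.default_rng(0)
R = np.array([[5, 3, 0], [4, 0, 2], [0, 1, 5]], dtype=float)
k = 2                                    # latent rank
P = rng.normal(0, 0.1, (3, k))           # user factors
Q = rng.normal(0, 0.1, (3, k))           # item factors
for epoch in range(200):
    lr = exponential_annealing(0.05, 0.99, epoch)  # anneal once per epoch
    for u, i in zip(*np.nonzero(R)):
        err = R[u, i] - P[u] @ Q[i]
        P[u], Q[i] = P[u] + lr * err * Q[i], Q[i] + lr * err * P[u]

# Mean absolute error over the observed ratings.
mae = np.mean([abs(R[u, i] - P[u] @ Q[i]) for u, i in zip(*np.nonzero(R))])
```

Swapping `exponential_annealing` for an inverse-scaling or multiplicative-cooling schedule changes only the `lr` line, which is what makes the paper's five-way comparison straightforward.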
Classifying Gait Data Using Different Machine Learning Techniques and Finding the Optimum Technique of Classification

The classification of humanoid locomotion is a difficult exercise because of the nonlinearity associated with gait. The high-dimensional feature vector requires a high computational cost, and classification using different machine learning techniques can lead to overfitting and underfitting. Selecting the correct features is also a difficult task, and machine learning techniques based on hand-crafted feature selection perform poorly. We have therefore used deep learning to obtain trained features, and for classification we have used deep belief network-based deep learning. Classification is utilized to recognize the gait patterns of different persons, so that any upcoming disease can be detected earlier. In this paper we first select features, identify the principal features, then classify the gait data using different machine learning techniques (ANN, SVM, KNN, and classifier fusion) and show a performance comparison. Experimental results on real-time datasets show that the proposed method is better than previous methods as far as humanoid locomotion classification is concerned.

Anubha Parashar, Apoorva Parashar, Somya Goyal
Adaptive Kalman Filter Approach and Butterworth Filter Technique for ECG Signal Enhancement

About 15 million people alive today have been affected by coronary illness. This is a major and critical issue in recent times, and many people have lost their lives to heart attacks and other heart-related problems. Early analysis and proper treatment of heart disease are therefore required to minimize the death rate. For better diagnosis, we need exact and consistent tools to determine the fitness of the human heart and to analyze disease in advance, before it brings about undesirable changes in the body. One such tool is the electrocardiogram (ECG), and the obtained signal is labeled the ECG signal. This signal is contaminated by motion artifacts and noisy elements, and removal of these noisy elements is essential before the ECG signal can be utilized for disease diagnosis. Various filtering methods are available for denoising the ECG signal; the best one is selected on the basis of performance parameters such as signal-to-noise ratio (SNR) and power spectral density (PSD).

Bharati Sharma, R. Jenkin Suji, Amlan Basu
Bayesian Approach for Automotive Vehicle Data Analysis

Streaming network data can be analyzed by advanced machine learning methods, which are ideal for large-scale and sensor-intensive applications. Predictive analytics can be used to support proactive complex event processing, while probabilistic graphical models can be used extensively to ascertain data transmitted by sensors. The framework of probabilistic graphical models encompasses a variety of different types of models and a range of methods relating to them. In this paper, real-time data from an OBD-II sensor device has been used, taken from a telematics competition on driver signatures. This device is highly equipped to extract sensor-related information such as accelerometer, gyroscope, GPS, and magnetometer readings. Data cleaning, pre-processing, and integration techniques are performed on the data obtained from the OBD-II device. We have applied various classification algorithms to the sensor data using the open source data mining and machine learning tool “WEKA 3.7.10” and have identified that the Bayes Net classification technique generates the best results.

Heena Timani, Mayuri Pandya, Mansi Joshi
Design of Single Supply and Current Mirror-Based Level Shifter with Stacking Technique in CMOS Technology

The paper presents a new low-power design of a single-supply and current mirror-based level shifter in 45 nm CMOS technology. The proposed circuits use the benefits of the stacking technique, giving smaller leakage current and a reduction in leakage power. The modified stacking structure is more appropriate for systems with a high supply voltage. In such SoCs, level shifters play an important part in translating signals from one voltage level to another. The single-supply level shifter has been modified with two additional NMOS transistors; another circuit, the current mirror level shifter (CMLS), has been modified with three additional NMOS transistors. The performances of the proposed level shifters are compared in terms of power, noise, and leakage parameters. For both the single-supply and the current mirror-based level shifter, the supply voltage VddH is initialized as 0.7 V and VddL as 0.2 V.

Kamni Patkar, Shyam Akashe
Privacy Preserve Hadoop (PPH)—An Implementation of BIG DATA Security by Hadoop with Encrypted HDFS

As data is growing exponentially rather than linearly, the rising abuse of large data sets emphasizes the need to preserve and protect data. Hadoop, a big data solution, has become increasingly popular and has been adopted by most trades. However, Hadoop by default does not contain any security mechanism; in particular, it does not support data encryption, which makes data privacy and security a cardinal concern. The most widely accepted methodology for the preservation and protection of data is through cryptographic algorithms, which are computationally intensive. Exploiting cryptography while apportioning the processing with the MapReduce framework improves the security of Hadoop. This paper presents two applications which distribute the cryptographic process among MapReduce jobs. The first application handles encryption of an input file residing in HDFS, and the second handles decryption of the encrypted input file. Our experimental results show a comparison between the two cryptographic algorithms.

Prashant Johri, Sanchita Arora, Mithun Kumar
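As a toy illustration of distributing cipher work across map tasks, the sketch below encrypts HDFS-style file blocks independently, one block per map call, so the work parallelizes naturally. The SHA-256-derived keystream is purely a stand-in for a real cipher such as AES, and all function names here are our own, not the paper's:

```python
import hashlib

def _keystream(key: bytes, index: int, length: int) -> bytes:
    # Derive a per-block keystream from the key and block index.
    # Stand-in for AES-CTR; a real deployment would use a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + index.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def map_encrypt(blocks, key):
    # Each (index, plaintext-block) pair is an independent map task.
    return [(i, bytes(a ^ b for a, b in
                      zip(block, _keystream(key, i, len(block)))))
            for i, block in blocks]

def map_decrypt(blocks, key):
    # A XOR keystream is symmetric, so decryption is the same map.
    return map_encrypt(blocks, key)
```

Because each block depends only on its own index, the mapper needs no shared state, which is what lets the cryptographic load spread across the cluster.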
Design a Circular Slot Patch Antenna with Dual Band Frequency

This research work covers some basic concepts related to patch antennas. Techniques for increasing the bandwidth of a circular patch antenna are explained along with other parameters. Patch antennas are basically used in wireless communication systems; each type of antenna has its own strengths in properties and usage. Antennas are the backbone of almost all wireless communication, without which the world could not have arrived at this period of technology. We design a circular slot patch antenna with dual-band frequency. The proposed micro-strip patch antenna has lossy FR4 as a dielectric substrate with a thickness of 1.6 mm and a relative permittivity εr of 4.3. The simulated directivity, gain, and return loss of the designed patch antenna are determined successfully. The designed antenna operates in dual bands, with a return loss of −30 dB at 1.5 GHz and −40 dB at 2.5 GHz, analyzed in CST software.

Karishma Patkar, Neelesh Dixt, Anshul Agrawal
A Methodology to Segment Retinal Vessels Using Region-Based Features

Analysis of retinal blood vessels has become a remarkable area of research in the biomedical field. This paper presents a fundus image blood vessel segmentation approach using region-based features. In the pre-processing phase, the input fundus image is segmented into major-vessel and minor-vessel regions. Further, to enhance segmentation accuracy, region-based features are extracted from the minor vessels by applying morphological operations. A fuzzy entropy measure is used to select the relevant features, and for classification a k-NN classifier is employed. The proposed algorithm is evaluated using two openly available data sets, DRIVE and CHASE_DB1. The method presented is independent of training samples and achieves 96.75% classification accuracy.

Vinita P. Gangraj, Gajanan K. Birajdar
Recommendation System for Learning Management System

Education is a core area of recent research in electronic technology. Various aspects are included in such research for analysis and evaluation: learners, learning management systems, evaluators, etc. Taking one aspect into consideration, namely the learning management system, very few techniques are available for its evaluation. An approach using learner behavior and teaching evaluation for the evaluation and recommendation of a learning management system has already been proposed. This paper presents an implementation and the results of the proposed approach. Apart from the results, the paper also discusses further improvements for the learning management system.

Phalit Mehta, Kriti Saroha
Classification of Hematomas in Brain CT Images Using Support Vector Machine

Hematomas are caused by traumatic brain injuries. An automatic detection and classification system can assist doctors in analyzing brain images. This paper classifies three types of hematomas in brain CT scan images using a support vector machine (SVM). The SVM has been simulated and trained on the dataset. The trained SVM classifiers' performances were compared on the basis of parameters such as classification accuracy, mean square error, training time, and testing time. The classification process depends on the training dataset, and the results are based on simulation of the classifiers.

Devesh Kumar Srivastava, Bhavna Sharma, Ayush Singh
Secured Neighbour Discovery Using Trilateration in Mobile Sensor Networks

In a wireless network, the initial step after deployment is identifying each node's neighbours. Neighbour discovery is the building block of network applications and protocols; its responsibility is to identify the one-hop neighbours, i.e., the nodes that are within direct communication range. A minor vulnerability in neighbour discovery can lead to severe disruptions in the functionality of the network. In this paper we propose a novel technique to identify adversary nodes that disrupt the network's functionality. We have modified the trilateration technique to identify the adversary node. Our security mechanism is carried out along with the localization procedure without causing any additional overhead. Our technique can achieve successful neighbour discovery even in the presence of cheating nodes. We have also identified the probability of detecting malicious nodes for two different scenarios.

Jeril Kuriakose, Ravindar Yadav, Devesh Kumar Srivastava, V. Amruth
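The paper's exact localization procedure is not spelled out in the abstract, but the core trilateration step, and the consistency check that could flag a cheating neighbour, can be sketched as follows; the tolerance value and the check itself are our assumptions:

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    # Position from measured distances r1..r3 to three
    # non-collinear anchor nodes p1..p3 = (x, y).
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting pairs of circle equations leaves a linear system.
    A, B = 2 * (x2 - x1), 2 * (y2 - y1)
    C = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    D, E = 2 * (x3 - x2), 2 * (y3 - y2)
    F = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    den = A * E - B * D          # zero iff the anchors are collinear
    return (C * E - B * F) / den, (A * F - C * D) / den

def is_suspect(claimed, computed, tol=0.5):
    # Flag a neighbour whose claimed position disagrees with the
    # position trilaterated from the measured ranges.
    return math.dist(claimed, computed) > tol
```

A node at (1, 1) with anchors at (0, 0), (4, 0), and (0, 4) is recovered exactly; a node claiming (3, 3) against the same range measurements would be flagged.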
MAICBR: A Multi-agent Intelligent Content-Based Recommendation System

This study aims at proposing an intelligent and adaptive mechanism deploying intelligent agents for solving the new-user and overspecialization problems that exist in content-based recommendation (CBR) systems. Since the system is designed using software agents (SAs), it ensures the highly desired full automation in web recommendations. The proposed system has been evaluated, and the results suggest an improvement in the positive feedback rate and a decrease in the recommendation rate.

Aarti Singh, Anu Sharma
HMM-Based IDS for Attack Detection and Prevention in MANET

MANETs are wireless networks which communicate without base stations or centralized control nodes. Due to the mobile nature of the nodes, the topology of the network changes frequently, which makes such networks difficult to simulate. The main task is to provide efficient and effective routing in MANETs with limited resources. As a MANET is an open medium, it is exposed to numerous attacks. To avoid attacks, a good intrusion detection and prevention system is needed. This paper gives a brief survey of the different IDSs developed to protect MANETs from attacks. To strengthen the security of IDSs, we propose a hidden Markov model-based IDS for MANETs for protecting the network from attacks. The HMM implements learning on the nodes of the network; based on this learning, the results show the most likely positions and the probability of the attacker node.

Priya Pathak, Ekta Chauhan, Sugandha Rathi, Siddhartha Kosti
Academic Dashboard—Descriptive Analytical Approach to Analyze Student Admission Using Education Data Mining

Every academic year the institution welcomes students from different locations and provides its valuable resources for every student to attain successful graduation. At present, the institution maintains student details manually, and it becomes a tedious task to analyze those records and fetch any information in a short time. Data mining is a computational methodology that helps to discover patterns in large data sets using artificial intelligence, machine learning, statistics, and database systems. Education data mining addresses these sensitive issues by applying data mining techniques to the analysis of admissions. In this research paper, admissions are analyzed location-wise and compared year-wise. The total admission rate for the current academic year and the frequency of student admissions across the state are calculated. The analyzed data is visualized and reported for organizational decision making.

H. S. Sushma Rao, Aishwarya Suresh, Vinayak Hegde
A New Approach for Suspect Detection in Video Surveillance

Face recognition is one of the most relevant applications of image analysis. Humans have a very good face identification ability, but not good enough to deal with large numbers of faces, whereas computers have plenty of memory and processing power to work at high speed. Our problem focuses on detecting a face in a video frame, extracting it, and calculating the eigenface after normalizing the face image, in order to match it against a database of eigenfaces for verification or identification purposes. Here we consider the Viola–Jones algorithm for face detection and the eigenface algorithm for face matching. The face matching operation must be fast enough for video surveillance. We propose these two methods for the detection of suspects in video surveillance.

Manjeet Singh, Ravi Sahran
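The eigenface matching step described above can be sketched with PCA via SVD on flattened, normalized face images; the array shapes, the number of retained components, and the nearest-neighbour matching rule are our assumptions, not details from the paper:

```python
import numpy as np

def eigenfaces(faces, k):
    # faces: (n_samples, n_pixels) matrix of flattened face images.
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Right singular vectors of the centered data are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    # Coordinates of a face in eigenface space.
    return basis @ (face - mean)

def match(face, gallery, mean, basis):
    # Nearest neighbour in eigenface space (Euclidean distance).
    q = project(face, mean, basis)
    dists = [np.linalg.norm(q - project(g, mean, basis)) for g in gallery]
    return int(np.argmin(dists))
```

Matching in the low-dimensional eigenface space rather than on raw pixels is what keeps the per-frame comparison fast enough for surveillance use.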
Improved Segmentation Technique for Underwater Images Based on K-means and Local Adaptive Thresholding

In many cases, images are influenced by radiance and environmental turbulence owing to temperature variation, specifically in the case of underwater images. Due to the lack of stability in underwater conditions, object identification underwater is not easy in any respect. As we know, segmenting the image is quite essential in automated object recognition systems. By segmenting the image, we split it into meaningful fragments so as to detect the regions of interest and annotate the data. We also need to process the image to eradicate the radiance effect. In this paper, we propose an improved technique to eradicate the effect of radiance and identify objects with more precision and accuracy. In the proposed technique, two segmentation methods, k-means segmentation and local adaptive thresholding, are merged: k-means deals with object detection, whereas local adaptive thresholding eradicates the radiance effect. Lastly, the performance of the improved technique is evaluated using objective assessment parameters, namely entropy, PSNR, and mutual information.

Agrawal Avni Rajeev, Saroj Hiranwal, Vijay Kumar Sharma
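A compact sketch of how the two stages could be combined — 1-D k-means on pixel intensities for object detection, and a windowed-mean threshold that suppresses a slowly varying radiance gradient. The window size, offset, and iteration count are assumptions for illustration:

```python
import numpy as np

def kmeans_1d(pixels, k=2, iters=20, seed=0):
    # Simple k-means on intensities to separate object from background.
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.abs(pixels[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels, centers

def local_adaptive_threshold(img, win=3, offset=0.0):
    # Threshold each pixel against the mean of its win x win window,
    # which cancels slowly varying radiance across the image.
    padded = np.pad(img.astype(float), win // 2, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img, dtype=bool)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + win, j:j + win].mean()
            out[i, j] = img[i, j] > local_mean + offset
    return out
```

Because the threshold is local, a bright pixel counts as foreground only if it stands out from its immediate neighbourhood, not from the whole image.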
HMM-Based Lightweight Speech Recognition System for Gujarati Language

Speech recognition systems (SRS) are a growing research interest in the area of natural language processing (NLP). Developing a speech recognition system for a low-resource language is a difficult task. This paper presents a lightweight speech recognition approach for the Indian Gujarati language using a hidden Markov model (HMM). The aim of this research is to design and implement an SRS for routine Gujarati, which is difficult due to the language barrier, complex language framework, and morphological variance. To train the HMM-based SRS, we manually created a speech corpus containing 650 routine Gujarati utterances recorded from 40 speakers of the South Gujarat region. The total number of speakers was selected on the basis of gender. We have achieved an accuracy of 87.23% with an average error rate of 12.7% based on word error rate (WER) computation.

Jinal H. Tailor, Dipti B. Shah
Despeckling of SAR Image Based on Fuzzy Inference System

Synthetic Aperture Radar (SAR) is a type of imaging radar system that is widely used for remote sensing of the Earth. The images obtained from SAR systems are often corrupted with speckle noise, which reduces the visibility of the image. Preprocessing such images is often essential to enhance their clarity and acquire the required information present in them. A nonlinear filtering technique is proposed in this work based on a fuzzy inference rule-based system, which uses fuzzy sets and fuzzy rules operating on the luminance difference between the central pixel and its neighbors in a 3 × 3 window to reduce the speckle noise in SAR images. A comparative evaluation of the proposed filter is performed against three existing filtering methods for mixed noise, namely the Mean, Median, and FIRE filters.

Debashree Bhattacharjee, Khwairakpam Amitab, Debdatta Kandar
A Study of Representation Learning for Handwritten Numeral Recognition of Multilingual Data Set

Handwritten numeral recognition, a subset of handwritten character recognition, is the ability of a machine to identify numbers correctly from a given input image. Compared to printed numeral recognition, handwritten numeral recognition is more complex due to variation in writing style and shape from person to person. The success in handwritten digit recognition can be attributed to advances in machine learning techniques. In the field of machine learning, representation-based learning in the deep learning context has been gaining popularity in recent years. Representative deep learning methods have been successfully applied in image classification, action recognition, object tracking, etc. The focus of this work is to study the use of representation learning for dimensionality reduction in offline handwritten numeral recognition. An experimental study is carried out to compare the performance of handwritten numeral recognition using an SVM-based classifier on raw features as well as on learned features. A multilingual handwritten numeral data set of English and Devanagari numerals is used for the study. The representation learning method used in the experiment is the restricted Boltzmann machine (RBM).

Solley Thomas
Abandoned Object Detection and Tracking Using CCTV Camera

With the increase in crime and terror rates globally, automated video surveillance is the need of the hour. Surveillance, along with detection and tracking, has become extremely important. Human detection and tracking would be ideal, but the random nature of human movement makes it extremely difficult to track and classify activities as suspicious. The primary objective of this work is to detect suspicious abandoned objects recorded by closed-circuit television (CCTV) cameras. The main aim of this project is to ease the load on the controller at the main CCTV station by generating an alarm whenever an abandoned object is detected. To solve the problem, we first apply background subtraction to obtain the foreground image. Further, we calculate the inter-pixel distance and use area-based thresholding to differentiate between a person and an object. The object is then tracked for a previously set time, which helps the system decide whether or not the object is abandoned. Such a system, which can ease the load on a single CCTV controller, can be deployed in places which require high discipline and security and are more prone to suspicious activities, such as airports, metro stations, railway stations, entrances and exits of buildings, ATMs, and similar public places.

Parakh Agarwal, Sanaj Singh Kahlon, Nikhil Bisht, Pritam Dash, Sanjay Ahuja, Ayush Goyal
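The pipeline above — background subtraction, area-based thresholding, and a dwell-time alarm — can be sketched as follows. The thresholds, the frame rate, and the one-blob-per-frame simplification are all assumptions; a real system would use connected-component labelling:

```python
import numpy as np

def foreground_mask(frame, background, diff_thresh=25):
    # Pixels that differ enough from the background model are foreground.
    return np.abs(frame.astype(int) - background.astype(int)) > diff_thresh

def classify_blob(mask, person_area=50):
    # Crude area-based split: large foreground regions ~ people,
    # small ones ~ objects (one blob per frame assumed for brevity).
    area = int(mask.sum())
    if area == 0:
        return "empty"
    return "person" if area >= person_area else "object"

def is_abandoned(object_seen_frames, fps=25, wait_seconds=2):
    # Raise the alarm once an object has stayed longer than the
    # preset time, measured in frames.
    return object_seen_frames >= fps * wait_seconds
```

The alarm fires only after the small blob has persisted for the configured dwell time, which is what distinguishes an abandoned object from one merely set down in passing.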
Accurate and Robust Iris Recognition Using Modified Classical Hough Transform

Circle Hough Transform (CHT) is a robust variant of the Hough Transform (HT) for the detection of circular features, and accurate iris recognition is one of its application areas. Robustness and accuracy are of utmost significance, but they come at the expense of high computational time and space complexity, as the method processes the entire input image. The present work formulates a computationally more efficient solution, a modified Circle Hough Transform (MCHT), which fixates the time–space complexity by reducing the area of the image to cover (the number of pixels to process), and hence significantly decreases computational time without compromising the accuracy of the method. The modified method is tested on a sample set of collected iris images. Each image is divided into skeleton grids of three different sizes: 3 × 3, 5 × 5, and 7 × 7 (pixels). The center of each grid type applied to the image gives the Region of Interest (ROI) of the image, sufficient to detect the circular parameters, i.e., the center and radius of the iris, using CHT. The experiment compares the computational time required to detect the iris with CHT applied to the whole image versus the time required using just the ROI of the image obtained from the grids. Additionally, a comparison of the expected and observed detection times over a large number of images is presented. There is a substantial reduction in computational time complexity: up to 89% using the 3 × 3 grids, up to 96% using the 5 × 5 grids, and up to 98% using the 7 × 7 grids, with an equally fair reduction in space utilization. The experiment also observed which grid size gave the most accurate center and radius values along with the most efficient performance. The results showed that the 3 × 3 and 5 × 5 grids provided better results than the 7 × 7 grids, which lacked accuracy for some images. From the experiments with varying grid sizes, the conclusion is that accuracy is compromised by grid sizes of 7 × 7 and higher, while grid sizes of 3 × 3 or 5 × 5 provide the most accurate and efficient iris detection.

Megha Chhabra, Ayush Goyal
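The classical voting step that MCHT restricts to a grid-derived ROI can be sketched as a brute-force accumulator. The angle step and the encoding of the ROI as a bounding box are our simplifications, not the paper's exact formulation:

```python
import math
import numpy as np

def circle_hough(edge_points, shape, radii, roi=None):
    # edge_points: list of (y, x) edge pixels; roi = (y0, y1, x0, x1)
    # restricts where votes are counted -- the essence of the
    # grid-based MCHT speed-up is shrinking this region.
    h, w = shape
    y0, y1, x0, x1 = roi if roi is not None else (0, h, 0, w)
    acc = np.zeros((h, w, len(radii)), dtype=int)
    for y, x in edge_points:
        for ri, r in enumerate(radii):
            for deg in range(0, 360, 5):
                t = math.radians(deg)
                cx = int(round(x - r * math.cos(t)))
                cy = int(round(y - r * math.sin(t)))
                if y0 <= cy < y1 and x0 <= cx < x1:
                    acc[cy, cx, ri] += 1  # vote for centre (cx, cy), radius r
    cy, cx, ri = np.unravel_index(acc.argmax(), acc.shape)
    return (cx, cy), radii[ri]
```

Passing a small `roi` box cuts both the votes stored and the cells searched, which mirrors the reported reduction in time and space when only the grid-derived ROI is processed.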
Fault Detection and Mutual Coordination in Various Cyborgs

During multi-cyborg area exploration, a fault may occur at any time. If one cyborg fails, another cyborg can take over the task which was assigned to the failed cyborg. Fault tolerance and robustness to cyborg failures are major issues in cyborg systems, and many studies have been devoted to this subject. If a fault occurs, it is the duty of the other cyborgs to take over its task and complete the process without any error in less time. We therefore have to design an approach for fault detection and tolerance which can work in any condition. Each cyborg updates its information in a global database and retrieves the information of the other cyborgs from that database. Since a cyborg undergoing a failure cannot update its information in the database periodically, it can be detected. Another cyborg then goes to its location and completes the failed cyborg's task, and only then does the process stop. We show that down cyborgs are detected by functional cyborgs, and their tasks are assigned to working cyborgs using a task allocation algorithm: one of the working cyborgs takes over the failed cyborg's task and completes the process.

Amrita Parashar, Vivek Parashar
Comparative Study of Inverse Power of IDW Interpolation Method in Inherent Error Analysis of Aspect Variable

This paper deals with the inherent error analysis of the aspect variable using the IDW interpolation method and its various power values. The first section presents the aspect analysis method and algorithm. The second section creates a DEM model from measured and erroneous elevation points, which becomes the input for calculating the aspect of the interpolated DEMs separately; the error is then calculated as the difference between the true and the erroneous aspect. It explores the error analysis of aspect with a practical implementation in ArcGIS. The last section compares the errors in aspect for various powers of the IDW method. Results show that the aspect error decreases as the inverse power of distance in the IDW method increases.

Neeraj Bhargava, Ritu Bhargava, Prakash Singh Tanwar, Prafull Chandra Narooka
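The weighting at the heart of IDW, and the effect of raising the inverse power of distance, can be sketched as follows; the sample layout in the usage note is hypothetical:

```python
import math

def idw(points, query, power=2.0):
    # points: [(x, y, z)] measured elevations; query: (x, y).
    # Each sample is weighted by 1 / distance**power, so a larger
    # power concentrates influence on the nearest samples.
    num = den = 0.0
    for x, y, z in points:
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return z  # query coincides with a measured point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den
```

For example, with samples z = 0 at (0, 0) and z = 10 at (2, 0), querying at (0.5, 0) gives 2.5 at power 1 but 1.0 at power 2: higher powers pull the estimate toward the nearer sample, localizing the interpolated surface.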
The Study of the Relationship Between Internet Addiction and Depression Amongst Students of University of Namibia

The aim of this preliminary study was to explore the impact of Internet usage on individual functioning by exploring the relationship between Internet usage and depression amongst the students of the University of Namibia. An exploratory study was conducted amongst 36 conveniently selected male and female students, investigating the prevalence of Internet addiction and its association with depression. Two tests were used: the Young Internet Addiction Test (YIAT) and the Patient Health Questionnaire (PHQ-9). The males scored an average of 36.6 on the YIAT, whilst the females scored an average of 33.9; on the PHQ-9, the males scored 14.7 and the females 16 on average. Females spent 4.5 h per day online and males 6.4 h on average. The results of this exploratory study reveal that there is a correlation between Internet addiction scores, depression scores, and time spent online.

Poonam Dhaka, Atty Mwafufya, Hilma Mbandeka, Iani de Kock, Manfred Janik, Dharm Singh Jat
Information and Communication Technology for Sustainable Development
edited by
Dr. Durgesh Kumar Mishra
Dr. Malaya Kumar Nayak
Amit Joshi
Springer Singapore