About this book

The two-volume set CCIS 827 and 828 constitutes the thoroughly refereed proceedings of the Third International Conference on Next Generation Computing Technologies, NGCT 2017, held in Dehradun, India, in October 2017.

The 135 full papers presented were carefully reviewed and selected from 948 submissions. They are organized in topical sections named: Smart and Innovative Trends in Communication Protocols and Standards; Smart and Innovative Trends in Computational Intelligence and Data Science; Smart and Innovative Trends in Image Processing and Machine Vision; Smart Innovative Trends in Natural Language Processing for Indian Languages; Smart Innovative Trends in Security and Privacy.

Table of Contents

Frontmatter

Smart and Innovative Trends in Computational Intelligence and Data Science

Frontmatter

Recommender System Based on Fuzzy C-Means

Modern e-commerce sites require a concrete method of retaining their user base besides keeping a wide variety of items. To maintain user interest, it is necessary to suggest items that sustain and increase users' attraction towards products. This means not only showing items that interest the users but also helping the e-commerce companies profit from sales. Thus, recommender systems come into the picture. These systems are designed to help e-commerce companies retain their user base. Recommender systems deploy a variety of algorithms to study user preferences and make smart suggestions. Modern recommender engines are able to address only a single issue at a time: there is a trade-off between response time and accurate results that take a variety of factors into account. This paper discusses the techniques used to build reliable and fast recommender systems and explains how they work.

Priya Gupta, Aradhya Neeraj Mathur, Kriti Kathuria, Rishabh Chandak, Satyam Sangal
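
To make the clustering step such systems rely on concrete, here is a minimal fuzzy c-means sketch over a toy user-item rating matrix; the cluster count, fuzzifier m, and ratings are illustrative assumptions, not values from the paper.

```python
# A minimal fuzzy c-means sketch over a user-item rating matrix (numpy only).
# The matrix values, cluster count c, and fuzzifier m are illustrative assumptions.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, eps=1e-6):
    """Cluster rows of X into c fuzzy clusters; returns memberships U and centers V."""
    rng = np.random.default_rng(0)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # memberships sum to 1 per user
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]    # membership-weighted centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + eps
        U_new = 1.0 / (d ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)   # standard FCM membership update
        if np.abs(U_new - U).max() < eps:
            break
        U = U_new
    return U, V

ratings = np.array([[5, 4, 1, 1], [4, 5, 2, 1], [1, 2, 5, 4], [1, 1, 4, 5.]])
U, V = fuzzy_c_means(ratings)
# Predicted ratings for a user = membership-weighted blend of cluster centers.
print(U @ V)
```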

A Novel Approach to Measure the Semantic Similarity for Information Retrieval

In this fast-paced, multitasking world, the number of Internet users is increasing day by day, and so are our databases; manually maintaining similarity between words in a database is a troublesome task. Measuring semantic similarity between words is a substantial task in various areas, including natural language processing tasks such as word sense disambiguation and query expansion, as well as web tasks such as document clustering, community mining and automatic metadata generation. Despite these wide-ranging applications, it is still very difficult to compute a similarity measure for any two words or entities in a document. We propose a formula that computes semantic similarity using page counts (retrieved from Google only) as a metric. The proposed method outperforms, or contributes almost the same results as, various baselines and previously proposed web-based semantic similarity methods. The results obtained are compared with various online tools such as UMBC and SEMILAR. Moreover, the proposed method has lower computational complexity and significantly improves the exactness and efficiency of calculating semantic similarity between two words.

Shelly, Mamta Kathuria
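
The abstract does not reproduce the proposed formula, so as a representative page-count baseline here is a sketch of the classical Normalized Google Distance (Cilibrasi and Vitanyi); the hit counts below are made-up placeholders.

```python
# Normalized Google Distance from hit counts: a classical page-count similarity
# measure of the family the paper compares against. All counts are hypothetical.
import math

def ngd(f_x, f_y, f_xy, N):
    """NGD from hit counts f_x, f_y, joint count f_xy, and index size N."""
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(N) - min(lx, ly))

# Hypothetical page counts for "car" and "automobile" (smaller NGD = more similar).
print(ngd(f_x=5_000_000, f_y=1_000_000, f_xy=800_000, N=50_000_000_000))
```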

Comparative Analysis of Metaheuristics Based Load Balancing Optimization in Cloud Environment

Cloud computing technology provides computing resources as a utility service. The objective is to achieve maximum resource utilization with minimum service delivery time and cost. The main challenge is to balance the load of virtual machines (VMs) in the cloud environment, which requires distributing the load among many virtual machines while avoiding the underflow and overflow conditions that depend on VM capacity. In this paper, load balancing of VMs has been done based on Ant Colony Optimization (ACO) and the Bat algorithm for underflow and overflow VM identification, respectively. As cloud applications involve huge computations and are highly dynamic in nature, Directed Acyclic Graph (DAG) files of various scientific workflows have been used as input data during implementation of the proposed methodology. The workflows used for the experiments are CyberShake, Genome, LIGO, Montage and SIPHT, and the number of VMs varies from 2 to 20 on a single-host configuration. Initially, the workflows are parsed through the Predict Earliest Finish Time (PEFT) heuristic, which initializes the metaheuristics rather than using random initialization. Thus, the metaheuristics are provided with optimal initial parameters, which further optimize VM utilization by balancing their load. The performance of the metaheuristics in terms of makespan and cost metrics has been evaluated, analyzed and compared with the Particle Swarm Optimization (PSO) approach used for load balancing.

Amanpreet Kaur, Bikrampal Kaur, Dheerendra Singh

Twitter Recommendation and Interest of User Using Convolutional Neural Network

There has been tremendous growth in the area of social networking in the last couple of years. It offers a huge amount of information in which users can see other users' opinions, which are further divided into various recommender classes that are fast becoming a main component of decision making in recommender systems. In this paper, we consider one popular microblog, Twitter. Tweets taken from Twitter give opinions and interests about different subjects. A recommender system is basically made up of different software tools and techniques, here related to deep machine learning, that extract meaningful information from posted tweets. Recommender systems are very popular in every field, including research, commercial and industrial areas. Many methods have been proposed for classification in recommendation. Recommendation systems have various unique characteristics and need to capture different user aspects such as preference, prediction accuracy, confidence and trust. In our work, we propose an algorithm that operates on the feature set by reducing feature sparseness through phrase features and classifies the highly non-linear dataset by deep learning, because deep learning uses different pattern layers to cover all possible patterns in the feature set.

Baljeet Kaur Nagra, Bharti Chhabra, Amit Verma

Classification of Hyperspectral Imagery Using Random Forest

In this paper the classification of hyperspectral images is investigated using a supervised approach. The spectral features are extracted with the well-known decision boundary feature extraction (DBFE) and non-parametric weighted feature extraction (NWFE) techniques. The most informative features are fed to a random forest (RF) classifier to perform pixel-wise classification. The experiments are carried out on two benchmark hyperspectral images. The results show that the RF classifier generates good classification accuracies for hyperspectral images with a smaller execution time. Among the feature extraction techniques, DBFE has produced better results than NWFE.

Diwaker Mourya, Ashutosh Bhatt
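
A minimal pixel-wise classification sketch with scikit-learn's RandomForestClassifier; the synthetic features stand in for the DBFE/NWFE outputs, which are not reproduced here, and all sizes are assumptions.

```python
# Pixel-wise random forest classification on synthetic "extracted features".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_features = 2000, 10                 # assumed sizes, not from the paper
X = rng.normal(size=(n_pixels, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy ground-truth labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("pixel-wise accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```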

A Model for Resource Constraint Project Scheduling Problem Using Quantum Inspired PSO

The Resource Constrained Project Scheduling Problem (RCPSP) is an NP-hard project planning and scheduling problem. It has been widely applied in real-life industrial scenarios to optimize the project makespan under resource limitations. In order to solve the RCPSP, this paper suggests a Quantum-inspired Particle Swarm Optimization (Q-PSO) probabilistic optimization technique. Classical PSO is very hard to map to the RCPSP because its solution lies in a continuous-valued position vector. To overcome this, the Sequence Position Vector (SPV) rule is incorporated into PSO. Since the activities of the project follow dependency constraints, updates to the position vector can violate those constraints. To handle this situation, a Valid Particle Generator (VPG) is used. By assembling these operators, a Q-PSO is introduced to solve the RCPSP effectively. The effectiveness of Q-PSO is verified on the standard PSPLIB dataset for J30. Results show that Q-PSO offers a significant performance improvement over a number of state-of-the-art methods, since it uses a probabilistic particle representation in terms of quantum bits (Q-bits) and thus replaces the inertia weight tuning and velocity update method of classical PSO.

Reya Sharma, Rashika Bangroo, Manoj Kumar, Neetesh Kumar
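
A sketch of the SPV decoding the abstract mentions (often called the smallest-position-value rule in the PSO literature): a particle's continuous position vector is converted into a discrete activity order by ranking its components. The example vector is illustrative.

```python
# SPV rule: decode a continuous PSO position vector into an activity permutation.
import numpy as np

position = np.array([0.7, -1.2, 0.1, 2.3, -0.4])   # one particle's position
activity_order = np.argsort(position)              # smallest value scheduled first
print(activity_order)    # [1 4 2 0 3] -> activity 1 first, then 4, ...
```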

Detection and Estimation of 2-D Brain Tumor Size Using Fuzzy C-Means Clustering

Brain tumors, abnormal growths of tissue mass, are a major cause of mortality among children and adults, so exact segmentation of this tissue mass is required for the exact diagnosis of the tumor. This paper presents a technique for brain tumor segmentation and detection in magnetic resonance images (MRI) using a hybrid approach of fuzzy c-means clustering followed by mathematical morphology. Length and width are then calculated based on the Euclidean distance measure, and an approach based on the calculated height and width of the tumor is applied to approximately determine its size in 1D and 2D for estimating the cancer stage.

Rahul Chauhan, Surbhi Negi, Subhi Jain
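
A hedged sketch of the size step only: given a binary tumor mask (as the FCM and morphology stages would produce), height and width are estimated from extreme pixel coordinates using Euclidean distance. The mask here is a toy example, not clinical data.

```python
# Estimate tumor height/width from a binary segmentation mask.
import numpy as np

mask = np.zeros((10, 10), dtype=bool)
mask[3:8, 2:6] = True                      # stand-in segmented tumor region

ys, xs = np.nonzero(mask)
top, bottom = (ys.min(), xs[ys.argmin()]), (ys.max(), xs[ys.argmax()])
left, right = (ys[xs.argmin()], xs.min()), (ys[xs.argmax()], xs.max())

height = np.hypot(top[0] - bottom[0], top[1] - bottom[1])   # Euclidean distance
width = np.hypot(left[0] - right[0], left[1] - right[1])
print(f"height ~ {height:.1f} px, width ~ {width:.1f} px")
```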

A Computational Approach for Designing Tiger Corridors in India

Wildlife corridors are components of landscapes which facilitate the movement of organisms and processes between intact habitat areas, and thus provide connectivity between the habitats within the landscapes. Corridors are thus regions within a given landscape that connect fragmented habitat patches within the landscape. The major concern of designing corridors as a conservation strategy is primarily to counter, and to the extent possible mitigate, the effects of habitat fragmentation and loss on the biodiversity of the landscape, as well as to support the continuance of land use for essential local and global economic activities in the region of reference. In this paper, we use game theory, graph theory, membership functions and a chain code algorithm to model and design a set of wildlife corridors with the tiger (Panthera tigris tigris) as the focal species. We identify the parameters which would affect the tiger population in a landscape complex and, using the presence of these identified parameters, construct a graph with the habitat patches supporting tiger presence in the landscape complex as vertices and the possible paths between them as edges. The passage of tigers through the possible paths has been designed using an Assurance game, with the tiger as an individual player. The game is played recursively as the tiger passes through each grid considered for the model. The iteration causes the tiger to choose the most suitable path, signifying the emergence of adaptability. As a nominal explanation of the game, we design this model through the interaction of the tiger with the parameters as a deterministic finite automaton, for which the transition function is obtained from the game payoff.

Saurabh Shanu, Sudeepto Bhattacharya

Enhanced Task Scheduling Algorithm Using Multi-objective Function for Cloud Computing Framework

The cloud computing era refers to a dynamic, scalable and pay-per-use distributed computing model empowering designers to deploy applications with task allocation and storage distribution. Cloud computing mainly aims to give proficient access to remote and geographically distributed resources. The essential advantage of moving to clouds is application versatility; it is exceptionally advantageous for applications which share their assets across various nodes. As cloud technology advances step by step, it confronts various difficulties, one of which is scheduling. To accomplish distinctive objectives and high performance in a cloud computing framework, it is necessary to configure, create and propose a scheduling algorithm that performs appropriate allocation of tasks with respect to different factors. Algorithms are vital to schedule the tasks for execution, and task scheduling is considered to be among the hardest theoretical problems in the cloud computing domain. This paper proposes a multi-objective task scheduling algorithm that considers a wide variety of attributes in the cloud environment and uses non-dominated sorting for prioritizing the tasks. The proposed algorithm considers three parameters: total processing cost, total processing time and average waiting time. The main objective of this paper is to enhance performance and to evaluate it against FCFS, SJF and a previously implemented multi-objective task scheduling algorithm.

Abhikriti Narwal, Sunita Dhingra

Descriptive, Dynamic and Hybrid Classification Algorithm to Classify Engineering Students’ Sentiments

Social networking sites have brought a novel horizon for students to share their views about the learning process. Such casually shared information has great value in decision making. However, the growing scale of data needs an automatic classification method. Sentiment analysis is one of the automated methods to classify huge data. Existing sentiment analysis methods are extensively used to classify online reviews to provide business intelligence. However, they are not useful for drawing conclusions about the education system, as they classify the sentiments into merely three pre-set categories: positive, negative and neutral. Moreover, classifying students' sentiments into positive or negative categories does not provide concealed insight into their problems and perks. Unlike traditional predictive algorithms, our Hybrid Classification Algorithm (HCA) makes the sentiment analysis process descriptive. The descriptive process helps future students and the education system in decision making. In this paper, we present the performance evaluation of HCA on four datasets collected by different methods, over different time spans, with different data dimensions and different vocabulary and grammar. The experimental results show that the hybrid, dynamic and descriptive algorithm potentially outperforms the traditional static and predictive methods.

Mitali Desai, Mayuri A. Mehta

Comparison of Runtime Performance Optimization Using Template-Metaprogramming

Programs capable of generating code are known as meta-programs, and the technique of writing these programs is known as meta-programming. Meta-programming is supported by various programming languages: C# uses reflection; Ruby allows defining classes and methods at runtime using meta-programming; the first language to introduce the concept of meta-programming was LISP. The meta-programs written using these languages are generally parsers, theorem provers and interpreters. In this paper, we demonstrate the use of meta-programming in C++ through template meta-programming (TMP). We take common mathematical operations and create run-time code for them along with compile-time equivalents written using TMP. The two sets of code are then benchmarked on the basis of their execution time, and a bar graph is generated to compare the TMP and non-TMP programs.

Vivek Patel, Piyush Mishra, J. C. Patni, Parul Mittal

Comparative Performance Analysis of PMSM Drive Using ANFIS and MPSO Techniques

The major problem in Permanent Magnet Synchronous Motor (PMSM) drive systems is the nonlinear behavior which arises mainly from motor dynamics and load characteristics. The speed control technique should therefore be adaptive and robust for successful industrial applications. The conventional proportional-integral-derivative (PID) controllers used to control the speed of the drive are tuned mainly using the Ziegler-Nichols (Z-N) tuning technique. Since the PID controller works well under linear operating conditions but underperforms when nonlinearity arises, Artificial Intelligence (AI) techniques, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Modified Particle Swarm Optimization (MPSO), are being implemented to achieve better performance. This paper proposes a novel design of ANFIS- and MPSO-based PID speed controllers, which have been incorporated in a PMSM drive to improve its dynamic performance. A model of the PMSM drive is simulated under various operating conditions to analyze its performance in terms of transient response specifications such as rise time, settling time, peak overshoot and peak time. The results obtained give much better performance compared to the conventionally used Z-N technique.

Deepti Yadav, Arunima Verma

Developing the Hybrid Multi Criteria Decision Making Approach for Green Supplier Evaluation

Environment-friendly policies have developed awareness of protecting the environment from hazardous effects. ISO 9000 certification, legal requirements and public demands enhance green corporate social responsibilities. Traditional organizational processes have changed into green concepts. A dramatic shift from supplier selection to green supplier selection has been adopted in organizations to reduce products' impact on the environment. Supplier selection is the most important organizational strategic decision for designing, producing and distributing green. This paper focuses on developing the criteria for green supplier selection. This decision is based on both quantitative and qualitative methods, so Multi Criteria Decision Making (MCDM) has been used to take rational decisions. In this study, a hybrid multi criteria decision making approach based on the Decision Making Trial and Evaluation Laboratory Model (DEMATEL), the Analytical Network Process (ANP) and the Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) in a fuzzy environment has been used to take the ideal decisions. This study will focus on the development of a hybrid MCDM model for green supplier selection.

Muhammad Nouman Shafique

A Quadratic Model for the Kurtosis of Decay Centrality

Decay centrality (DEC) is a measure of the closeness of a node to the rest of the nodes in the network, with the importance given to the distance weighted on the basis of a decay parameter δ (0 < δ < 1). Kurtosis has been traditionally used to evaluate the extent of fat-tailedness of the degree distribution of the vertices in a complex network. In this paper, we compute the kurtosis of the decay centrality of the vertices in real-world networks for the entire range of δ values and observe the Kurtosis(DEC(δ))/Kurtosis(DEG) ratio to be simply a quadratic function of δ (of the form a·δ² + b·δ + c, with a > 0, b < 0 and c in the vicinity of 1.0). We estimate the coefficients a, b and c of the quadratic models for a suite of 70 real-world networks of diverse degree distributions, with R² values of at least 0.99. Using the coefficients a, b and c of the quadratic model for a real-world network, we can estimate the critical value δ_critical = −b/(2a) for which the Kurtosis(DEC(δ_critical))/Kurtosis(DEG) ratio is lowest (given by c − b²/(4a)).

Natarajan Meghanathan
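
A small sketch of the model-fitting step described above: fit K(δ) = a·δ² + b·δ + c to kurtosis-ratio observations, then recover δ_critical = −b/(2a) and the minimum ratio c − b²/(4a). The sample points below are synthetic, not data from the paper.

```python
# Quadratic fit of the kurtosis ratio as a function of delta, with its argmin.
import numpy as np

delta = np.linspace(0.05, 0.95, 10)
ratio = (1.8 * delta**2 - 1.5 * delta + 1.02
         + 0.01 * np.random.default_rng(0).normal(size=10))   # synthetic ratios

a, b, c = np.polyfit(delta, ratio, deg=2)      # highest-degree coefficient first
print("delta_critical =", -b / (2 * a))        # argmin of the quadratic (a > 0)
print("min ratio      =", c - b**2 / (4 * a))  # ratio value at delta_critical
```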

Development of an Automated Water Quality Classification Model for the River Ganga

Recently, water quality (WQ) has become a central point of concern around the globe. The purpose of this work is to develop an automated procedure that can proficiently classify the water quality of the River Ganga in the stretch from Devprayag to Roorkee, Uttarakhand, India. Monthly data sets of five water quality parameters (temperature, pH, dissolved oxygen (DO), biochemical oxygen demand (BOD) and total coliform (TC)) for the period from 2001 to 2015 are used for this research work. The proposed method involves developing various water quality classification models using a data mining concept called decision trees (DT) for evaluating the WQ classes. The experiments are conducted using the Weka data mining tool. Models are first developed using a 60-40% data division approach and then using an 80-20% division. Five different decision tree models are developed, namely J48 (C4.5), Random Forest, Random Tree, LMT (logistic model tree) and Hoeffding Tree. These classifiers were analyzed to determine the most accurate classifier model for the present dataset by evaluating their performance via measures such as accuracy, kappa statistics, recall, precision, F-measure, mean absolute error and root mean squared error. The results show that the random forest model outperforms all other classifiers, with an accuracy rate of 100% in both approaches and the lowest error rate when developed using the second approach. Such highly acceptable results may be helpful for decision makers in water management and planning.

Anil Kumar Bisht, Ravendra Singh, Ashutosh Bhatt, Rakesh Bhutiani
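
A sketch of the evaluation protocol described above using scikit-learn in place of Weka: train tree-based classifiers under both the 60-40 and 80-20 splits. The features and class labels are synthetic placeholders for the five WQ parameters.

```python
# Decision tree vs. random forest under the paper's two data-division approaches.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(180, 5))                 # temp, pH, DO, BOD, TC (stand-ins)
y = (X[:, 2] - X[:, 3] > 0).astype(int)       # toy WQ class rule

for test_frac in (0.4, 0.2):                  # 60-40 and 80-20 divisions
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_frac,
                                              random_state=1)
    for model in (DecisionTreeClassifier(random_state=1),
                  RandomForestClassifier(n_estimators=100, random_state=1)):
        acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
        print(f"{model.__class__.__name__}, test={test_frac:.0%}: {acc:.2f}")
```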

A Data Mining Approach Towards HealthCare Recommender System

Healthcare recommender systems are meant to provide accurate and relevant predictions to the patients. It is very difficult for people to explore various online sources to find some useful recommendations as per their medical conditions. So the proposed approach is an effort to provide accurate recommendations to the patients on the basis of their current medical condition as well as their medical history and constraints. First of all, patients are categorized into different groups based on their profiles and then rules predicting the medical condition of each group are mined. The proposed approach is unique in the way that it provides accurate treatments to the patients in the form of recommendations based on content based matching. It also considers the preferences of the patient, which are stored in the system as mined rules or estimated from the medical history of patient. The results of experimental setup also demonstrate that the proposed system provides more accurate outcomes over other healthcare recommendation systems.

Mugdha Sharma, Laxmi Ahuja

Root Table: Dynamic Indexing Scheme for Access Protection in e-Healthcare Cloud

The popularity and large-scale adoption of cloud computing have accelerated the development of e-healthcare systems. Outsourcing electronic health records (EHRs) demands the assurance of search privacy, given that data and its access are not in the control of the EHR owner. Here, we adopt a different approach to the problem of access privacy in EHRs by addressing the problem of information leakage from revealed access patterns in semi-trusted cloud servers. In this paper, we propose a dynamic data structure called a Root table (R-table) to create a storage index, which ensures access privacy while querying the outsourced database. The objective of the R-table is to hide the access pattern from an honest-but-curious server. The R-table is an adaptation of dynamic arrays and randomized binary search trees, which randomly shuffle the locations of data blocks following each access. This model provides access privacy with minimum communication and storage overhead and enables the EHR owner to perform a private read or write without revealing the type of operation and the target data fragment processed. The results of our experiments showed limited performance overhead, indicating that the R-table is suitable for practical use.

M. B. Smithamol, Rajeswari Sridhar
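
A toy sketch of the access-pattern-hiding idea only: after every read, the accessed block is swapped to a random physical slot, so repeated queries for the same record touch different locations. The actual R-table structure (dynamic arrays plus randomized binary search trees) is more involved; this illustrates just the shuffle principle, with made-up block names.

```python
# Shuffle a block to a random slot after each access, keeping a position map.
import random

blocks = ["ehr0", "ehr1", "ehr2", "ehr3"]         # physical storage slots
pos = {b: i for i, b in enumerate(blocks)}        # client-side position map

def oblivious_read(name):
    value = blocks[pos[name]]                     # fetch from the current slot
    j = random.randrange(len(blocks))             # pick a random new slot
    other = blocks[j]
    blocks[pos[name]], blocks[j] = blocks[j], blocks[pos[name]]
    pos[other], pos[name] = pos[name], j          # keep the map consistent
    return value

random.seed(0)
print([oblivious_read("ehr2") for _ in range(3)])  # same record, moving slots
```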

Efficient Artificial Bee Colony Optimization

The artificial bee colony (ABC) algorithm is one of the proficient meta-heuristic techniques in the field of nature-inspired algorithms for solving optimization problems. ABC has proven itself a strong candidate in this field, but it still shows some limitations, such as an improper balance between exploration and exploitation, premature convergence and stagnation. To overcome these limitations, a new variant of the ABC algorithm named the Efficient Artificial Bee Colony Optimization (EABC) algorithm is proposed. In the proposed EABC, three new strategies are incorporated, namely a Self-Adaptive Strategy, a Self-Adaptive Mutual Learning Strategy and an Exploring Strategy. The Self-Adaptive Strategy is incorporated in the employed bee phase and helps improve the balance between exploration and exploitation. The Self-Adaptive Mutual Learning Strategy is applied in the onlooker phase and helps remove premature convergence. Finally, the Exploring Strategy is applied in the scout bee phase to remove stagnation and improve the optimal searching ability. EABC is evaluated on 21 benchmark test functions and the results are compared with the basic version of ABC, its significant variants, namely Best So Far ABC (BSFABC), Modified ABC (MABC), Black Hole ABC (BHABC) and Memetic ABC (MeABC), and one recent swarm-intelligence-based algorithm, Spider Monkey Optimization (SMO). Examination of the outcomes demonstrates that the proposed EABC algorithm is a competitive variant of ABC.

Ankita Rajawat, Nirmala Sharma, Harish Sharma

Gaussian Scale Factor Based Differential Evolution

Differential Evolution (DE) is an easy and basic population-based probabilistic approach for global optimization. It has reportedly performed very well compared to other nature-inspired algorithms, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), when tested on both benchmark and real-world problems. In the DE algorithm, the crossover rate (CR) and scale factor (SF) are two control parameters which play a crucial role in retaining the proper equilibrium between intensification and diversification abilities. But DE, like other probabilistic optimization approaches, sometimes converges prematurely. Therefore, to retain the proper equilibrium between exploitation and exploration capabilities, we introduce a modified SF in which a Gaussian distribution function and a flexible parameter (N) are introduced into the mutation process of DE. The significant advantage of the Gaussian distribution is full-scale searching. The resulting algorithm is named the Gaussian scale factor based differential evolution (GSFDE) algorithm. To prove the efficiency and efficacy of GSFDE, it is tested on 20 benchmark optimization problems and the results are compared with basic DE, advanced variants of DE, namely Gbest-guided differential evolution (GbestDE) and Lévy flight based local search in differential evolution (LFDE), and some swarm-intelligence-based algorithms, namely the modified artificial bee colony algorithm (MABC), best-so-far ABC (BSFABC), particle swarm optimization (PSO) and spider monkey optimization (SMO). The obtained results show that GSFDE is competitive in the field of optimization.

Rashmi Agarwal, Harish Sharma, Nirmala Sharma
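
A sketch of the modified mutation the abstract describes: a standard DE/rand/1 step whose scale factor is drawn from a Gaussian at each mutation instead of being fixed. The sphere objective, bounds, population size and Gaussian parameters are illustrative assumptions.

```python
# DE/rand/1 with a Gaussian-sampled scale factor on the sphere benchmark.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2, axis=-1)               # sphere objective
pop = rng.uniform(-5, 5, size=(20, 5))
CR = 0.9

for _ in range(200):
    for i in range(len(pop)):
        r1, r2, r3 = rng.choice([k for k in range(len(pop)) if k != i],
                                3, replace=False)
        SF = rng.normal(0.5, 0.3)                  # Gaussian SF (assumed params)
        mutant = pop[r1] + SF * (pop[r2] - pop[r3])
        cross = rng.random(5) < CR
        cross[rng.integers(5)] = True              # at least one gene crosses over
        trial = np.where(cross, mutant, pop[i])
        if f(trial) < f(pop[i]):                   # greedy selection
            pop[i] = trial

print("best value:", f(pop).min())
```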

An Efficient Parallel Approach for Mapping Finite Binomial Series (Special Cases) on Bi Swapped Network Mesh

The efficient parallel mapping of numerical problems over different parallel architectures is necessary for fast processing of massive data. In fact, it is a challenging task and a significant ongoing subject of research. The Bi Swapped Network (BSN) is a recently reported 2-level hybrid and symmetrical optoelectronic network architecture offering major improvements, especially for drawbacks arising from the asymmetrical behavior of the well-known swapped/OTIS network. In this paper, the efficient parallel mapping of the binomial function (special cases: Taylor series and negative binomial series) of 2n⁴ + 1 terms is presented on an n × n Bi Swapped Network mesh in 2T + 24(n − 1) electronic and 8 optical moves. Here, T is assumed to be the time required to compute the parallel prefix for each identical cluster (mesh).

Ashish Gupta, Bikash Kanti Sarkar

Parallel Multiplication of Big Integer on GPU

In this article, we present an implementation of FFT-based big integer multiplication to accelerate RSA encryption and decryption. A DIF (Decimation-In-Frequency) FFT is utilized to compute the multiplication of two big integers. The algorithm is implemented on Graphics Processing Units (GPUs) using the CUDA programming model. The DIF method is similar to a divide-and-conquer approach. In addition, we have concentrated on the efficiency of our proposed method by addressing optimal time use, efficient memory utilization and the data transfer time from host (CPU) to device (GPU), which has an impact on the complete computation time on the GPU. We compare our GPU and CPU implementations and present the performance results.

Jitendra V. Tembhurne
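
A CPU sketch of the FFT multiplication idea (the paper's CUDA kernels are not reproduced): each big integer is treated as a digit polynomial, multiplied pointwise in the frequency domain, and then carry-propagated. numpy.fft stands in for a GPU FFT here.

```python
# FFT-based big integer multiplication via digit convolution plus carries.
import numpy as np

def big_mul(a: int, b: int) -> int:
    da = [int(d) for d in str(a)][::-1]           # little-endian decimal digits
    db = [int(d) for d in str(b)][::-1]
    n = 1 << (len(da) + len(db)).bit_length()     # pad to a power of two
    fa, fb = np.fft.rfft(da, n), np.fft.rfft(db, n)
    raw = np.rint(np.fft.irfft(fa * fb, n)).astype(np.int64)
    carry, digits = 0, []
    for v in raw:                                  # propagate carries
        carry += int(v)
        digits.append(carry % 10)
        carry //= 10
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return int("".join(map(str, digits[::-1])).lstrip("0") or "0")

x, y = 123456789123456789, 987654321987654321
assert big_mul(x, y) == x * y                      # sanity check against Python ints
print(big_mul(x, y))
```

Note that floating-point FFTs limit the operand size before rounding errors appear; production RSA-scale code would use a number-theoretic transform or multi-word digits.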

Multi Objective Task Scheduling Algorithm for Cloud Computing Using Whale Optimization Technique

Cloud computing, the new and emerging IT paradigm, provides different options for customers to compute their tasks based on their choice and preference. Cloud systems provide services to customers as a utility. Customers are interested in the availability of service at low cost and the minimization of task completion time. The performance of cloud systems depends on efficient scheduling of tasks. When a cloud server receives multiple user requests, the service provider must schedule the tasks to the appropriate resources to realize customer satisfaction. In this paper we propose a multi-objective Whale Optimization Algorithm (WOA) to schedule tasks in a cloud environment. WOA schedules the tasks based on a fitness parameter that depends on three major constraints: resource utilization, quality of service and energy. The proposed WOA schedules the tasks based on the above three parameters such that the task execution time and the cost involved in execution on virtual machines are minimal. The efficiency of the scheduling algorithm depends on minimizing the fitness parameter. The experimental results show that the proposed WOA scheduling algorithm provides superior results compared with existing algorithms.

G. Narendrababu Reddy, S. Phani Kumar

Multilayered Feedforward Neural Network (MLFNN) Architecture as Bidirectional Associative Memory (BAM) for Pattern Storage and Recall

Of the two popular ANN architectures, feedforward and feedback (also known as recurrent), feedback architectures have been extensively used for memorization and recall tasks due to their feedback connections. Due to the inherently simpler dynamics of feedforward neural networks (FNNs), these structures are explored in the present work for the association task. A variation of the standard BP algorithm, a two-phase BP algorithm, is proposed for training MLFNNs to behave as associative memory. The collected results show that with the proposed algorithm, the MLFNN starts behaving as an associative memory, and its recall capability for corrupted versions of the stored patterns is on par with BAM but takes less time.

Manisha Singh, Thipendra Pal Singh

Critical Success Factors and Critical Barriers for Application of Information Technology to Knowledge Management/Experience Management for Software Process Improvement – Findings from Literary Studies

Present-day software organizations are gradually changing into knowledge-based organizations, as Knowledge Management (KM) is important for their viability. Regardless of this, KM is considered a challenging endeavor and its application is considered critical by organizations. Studies in the literature report that KM initiatives have not been successful in many cases. Thus, in-depth knowledge of the Critical Success Factors (CSFs) is a prerequisite for successful KM implementation. A wide-ranging set of CSFs assists organizations in being aware of the important concerns that should be taken care of when designing and implementing a KM initiative for enhancing software quality. Different models proposed by the SEI exist which provide insights and standards for the improvement of software quality. However, Software Process Improvement (SPI) is also evolving as an important area of study for the enhancement of software quality. Although various models have been suggested in the literature for the application of knowledge-based/experience-based SPI, there is a lack of consensus on the factors that contribute to the success of SPI. This research paper explores the critical success factors for the efficacious application of knowledge-based SPI. The research methodology used is an extensive literature survey for the identification of critical success factors. The identified CSFs are used to design a survey of Indian software engineering organizations on the application of IT to experience-based SPI.

Mitali Chugh, Nitin

Traffic Prediction Using Viterbi Algorithm in Machine Learning Approach

Road traffic snarl-ups are a major issue in metropolitan areas of both developing and developed countries. In order to diminish this problem, the traffic congestion states of road systems are assessed so that congested paths can be avoided and alternative paths can be chosen while traveling from one place to another. Information gathered by GPS devices offers new opportunities for traffic and route prediction, particularly in urban road systems. The core purpose of this research work is to build an Android application which provides a deliberate approach to finding the best route between a source and a destination so that the driver will not be caught in traffic. The Android application uses machine learning algorithms. In this paper, a Hidden Markov Model (HMM) is used for predicting traffic states, and it performs better and is more robust than the other models. The best path from source to destination is predicted using the Viterbi algorithm, taking into account the road traffic at the time, and the driver is directed to the best path. The application sends a JSON request as input to interface with the local server through the Internet for predicting the traffic state and the best path. The output is returned from the server as a JSON response to the Android application.

D. Suvitha, M. Vijayalakshmi, P. M. Mohideen Sameer
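
A compact Viterbi sketch for an HMM with two hidden traffic states; the transition, emission and prior values are illustrative, not the paper's trained parameters.

```python
# Viterbi decoding of the most likely hidden traffic-state sequence.
import numpy as np

states = ["free", "congested"]
prior = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],                     # P(next state | current state)
                  [0.4, 0.6]])
emit = np.array([[0.8, 0.2],                      # P(observation | state):
                 [0.3, 0.7]])                     # obs 0 = fast speeds, 1 = slow
obs = [0, 1, 1, 0]

V = prior * emit[:, obs[0]]                       # Viterbi path probabilities
back = []
for o in obs[1:]:
    scores = V[:, None] * trans                   # score of each predecessor
    back.append(scores.argmax(axis=0))            # best predecessor per state
    V = scores.max(axis=0) * emit[:, o]

path = [int(V.argmax())]
for ptr in reversed(back):                        # backtrack the best path
    path.append(int(ptr[path[-1]]))
print([states[s] for s in reversed(path)])
```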

Application of Grey Wolf Optimizer for Optimization of Fractional Order Controllers for a Non-monotonic Phase System

In this paper, optimized fractional-order proportional-integral (FOPI), proportional-derivative (FOPD) and proportional-integral-derivative (FOPID) controllers are proposed for controlling a non-monotonic phase DC buck regulator system. The parameters of the controllers are optimized using a new meta-heuristic technique known as the Grey Wolf Optimizer (GWO). The integral time absolute error (ITAE) has been taken as the fitness function for optimizing the parameters of the controller. The output response shows that the FOPI controller provides faster closed-loop performance and enhances the robustness of the system. Moreover, the FOPI controller also preserves the monotonic phase behavior of the system within the desired bandwidth. The results have been validated by comparing the time-domain as well as frequency-domain characteristics of the system with another technique from the literature.

Santosh Kumar Verma, Shyam Krishna Nagar

Forecasting Hydrogen Fuel Requirement for Highly Populated Countries Using NARnet

Petroleum is being consumed at a rapid pace all over the world, but the amount of petroleum in the earth's crust is constant, and production relative to consumption is not up to the mark. It is expected that a day may come when the world will witness a crisis of this oil. Our paper therefore addresses the prediction of a petroleum crisis in the two most populated countries of the world, India and China, using a novel Artificial Neural Network (ANN) based approach. A set of observations comprising three features, namely population, petroleum production and petroleum consumption, is considered to design the predictive model. Our work shows that petroleum production relative to consumption, combined with a sharp increase in population, leads to a decisive issue in the production of an alternative fuel such as hydrogen. In our analysis, we used data provided by different government sources over a period of more than 30 years and then simulated a multistep-ahead prediction methodology, the nonlinear autoregressive network (NARnet), to predict a petroleum crisis in the near future. The results of the present study reveal that for India, the Normalized Mean Square Error (NMSE) values for population, petroleum production and consumption are 0.000046, 0.2233 and 0.0041, respectively. Similarly, for China the corresponding values are 0.0011, 0.0126 and 0.0041, respectively, which validates the accuracy of the proposed model. The study forecasts that by 2050 hydrogen fuel may be a suitable replacement for petroleum, and will not only reduce pollution but also enhance fuel efficiency at a lower cost compared to petroleum.

Srikanta Kumar Mohapatra, Tripti Swarnkar, Sushanta Kumar Kamilla, Susanta Kumar Mohapatra

Automatic Diagnosis of Dental Diseases

Not taking care of oral health may lead to many health problems; mouth lesions or other oral problems are most often the sign of systemic diseases such as cardiovascular, stroke and pulmonary diseases. The most common oral and dental diseases which affect the teeth are dental abscess, dental caries, gingivitis, periodontal disease, pericoronitis, pulpitis, crowding, malocclusion, oral cancer, acid erosion, bad breath and dental plaque. Dental diseases are mostly diagnosed by X-ray imaging. However, X-rays are harmful as they emit radiation, and exposure levels are not considered safe for adults and children. To eliminate the use of X-rays as the imaging modality in dentistry, we propose a system which includes digital imaging, thermal imaging and near-infrared imaging, which are non-invasive, non-harmful, cost-effective and easy to use. A clinical study was conducted on 47 volunteers, and 60 images of their dentures were obtained. The study reveals the efficiency of these imaging modalities for the automatic classification of dental diseases; analysis of the proposed system confirms the diseases with an accuracy of 87.5%. Hence there is a possibility of replacing X-ray imaging with a hybrid imager which houses a thermal, a near-infrared and a high-resolution imager.

Punal M. Arabi, T. S. Naveen, N. Vamsha Deepa, Deepak Samanta

Ensemble Algorithms for Islanding Detection in Smart Grids

In this paper, a predictive model for islanding detection in the presence of distributed generation (DG) connected in a smart grid is presented. In order to predict the islanding state, an advanced machine learning approach has been applied. The data is generated by carrying out simulations for various islanding and non-islanding cases. The generated dataset for the various cases is analyzed with and without dimensionality reduction using Principal Component Analysis. A hybrid classifier is then designed to correctly classify the cases into islanding and non-islanding. A comparison between various learning algorithms in terms of accuracy, precision, recall and F-score is also made.

Rubi Pandey, Damanjeet Kaur

An Analytical Approach for the Determination of Chemotherapeutic Drug Application Trade-Offs in Leukemia

For the treatment of different leukemias, different chemotherapies are available. However, the success rate of any particular drug schedule may vary with the leukemic condition. In general, a low dose of chemotherapy is suggested for chronic leukemia, whereas high-dose (myeloablative) chemotherapy is applied for acute and vigorous types of leukemia. In the present work we show that chronic leukemia can be controlled; however, controlling vigorously growing leukemia is a challenge due to chemotherapeutic toxicity to the normal cells of the hematopoietic system. Hence, for its management, we developed a control analysis model. This model may help to design an optimal chemotherapeutic schedule so that vigorously growing leukemic growth can be controlled on the one hand with sustenance of the normal non-leukemic cell population on the other. This work shows that long-term chemotherapeutic success in individual leukemic patients demands a judicious choice of drug dosing strategy, which determines the trade-off between leukemic growth and the restoration time of the normal cell population of the hematopoietic system.

Probir Kumar Dhar, Tarun Kanti Naskar, Durjoy Majumder

Term Co-occurrence Based Feature Selection for Sentiment Classification

In this paper, a strategy of feature selection for sentiment classification is explored and compared with other significant feature selection strategies found in the contemporary literature. The feature selection models are built using the statistical measures of t-score and z-score. SVM, NB and AdaBoost classifiers are used for classification and compared. The objective of the paper is to explore and evaluate the scope of statistical measures for identifying optimal features and their significance for classifying opinion using divergent classifiers. Performance analysis is carried out on varied datasets of diverse range, including movie reviews, product reviews and tweets; the experiments cover both the proposed feature selection strategies and other strategies found in the literature. From the results of the experimental studies, it is evident that the optimal features selected using t-score and z-score are robust and outperform the other feature selection strategies. In order to assess the significance of the proposed feature selection models, the classification process is carried out using three classifiers, namely SVM, NB and AdaBoost. The classification accuracy for the features obtained by the proposed models is much higher than that obtained for the features selected by other contemporary models. Among the three classifiers used to assess classification accuracy, AdaBoost outperforms the other two models, SVM and NB.

Sudarshan S. Sonawane, Satish R. Kolhe
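
A hedged sketch of score-based term selection in this spirit: rank terms by a two-sample z-score of their per-document frequencies in positive versus negative reviews, then keep the top-k. The tiny corpus and k are illustrative, and the paper's exact statistic may differ.

```python
# Rank vocabulary terms by a two-sample z-score between the two classes.
import numpy as np
from collections import Counter

pos_docs = [["good", "great", "plot"], ["great", "acting", "good"]]
neg_docs = [["bad", "boring", "plot"], ["bad", "dull", "acting"]]

vocab = sorted({w for d in pos_docs + neg_docs for w in d})
def freq_matrix(docs):
    return np.array([[Counter(d)[w] for w in vocab] for d in docs], dtype=float)

P, N = freq_matrix(pos_docs), freq_matrix(neg_docs)
num = P.mean(axis=0) - N.mean(axis=0)
den = np.sqrt(P.var(axis=0) / len(P) + N.var(axis=0) / len(N)) + 1e-9
z = num / den                                     # two-sample z-score per term

top_k = [vocab[i] for i in np.argsort(-np.abs(z))[:3]]
print(top_k)    # terms that most separate the two classes
```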

A Classification Approach for Monitoring and Locating Leakages in a Smart Water Distribution Framework

In a water distribution network, leakages have always remained a problem of significant importance, as a large amount of water gets wasted before a leakage is localized and repaired to restore normal operation. Beyond the traditional methods of identifying a leakage, which take a lot of effort and incur a huge cost with low efficiency, technological advancements have made it possible to develop a smart water distribution system that captures real-time statistical values of the distribution network through the integration of information and communication technology (ICT) with the physical devices of the water pipeline structure. Further, machine learning techniques can be applied to these statistical parameters to develop a decision model for prediction. This paper presents a statistical classification framework based on the support vector machine technique that extracts pressure and flow values from different locations of the water pipeline network and classifies the features into leakage or non-leakage conditions. The mathematical simulation is done with the EPANET tool and the dataset is deployed in MATLAB for statistical classification.

Shailesh Porwal, Mahak Vijay, S. C. Jain, B. A. Botre

Chlorine Decay Modelling in Water Distribution System Case Study: CEERI Network

In this research paper, the EPANET and EPANET-MSX software tools are utilized to simulate the water network of CSIR-CEERI, Pilani. The system utilizes a real-time contamination event detection algorithm for detecting a randomly generated event using the EPANET-MATLAB Toolkit. According to the WHO (World Health Organization), the required chlorine concentration for maintaining water quality is 0.5 mg/l, so re-chlorination stations are expected to be included in the network. A fixed detection threshold for chlorine residual is utilized for different sensing areas; when the chlorine concentration deviates from this limit, the controller module adjusts the value to the required level. Initial chlorine, pipe roughness, demand pattern and other parameters of the underlying states of the water supply framework were created by the Monte Carlo simulation method. This chlorine data is sent to an IoT-integrated server. The information is then displayed in a user-interactive application so that a client-interactive view can be provided. The use of Monte Carlo simulation in combination with heuristic classification has turned out to be a capable tool for finding chlorine residuals and contamination event occurrences at sensors within the CEERI pressure zone. Using the classification module's output, a contamination event is detected by raising an alert flag. Finally, information about generated alerts is sent to the user, who is authorized to access it and perform possible actions.

Mahak Vijay, Shailesh Porwal, S. C. Jain, B. A. Botre

Estimation of Link Margin for Performance Analysis of FSO Network

In a high-speed optical network, the preliminary consideration for a free space optical (FSO) link is the reliability of data communication under various atmospheric conditions, which provides a quality connection to the end user. This paper mainly focuses on the estimation of a quality parameter, the link margin (LM), of an FSO link. The LM is calculated based on meteorological data obtained at various smart cities under different weather conditions and on geometrical attenuation. The availability of an FSO link in an optical network is evaluated in terms of LM, which serves to establish a quality-based network route for data transmission. The network performance of the proposed scheme is analyzed in terms of blocking probability (BP).

Kappala Vinod Kiran, Vikram Kumar, Ashok Kumar Turuk, Santos Kumar Das

Web Documents Prioritization Using Iterative Improvement

The amount of information accumulating on the World Wide Web is growing exponentially. This leads to difficulty in accessing relevant information, as it becomes tough for a user to access the required information in a minimum amount of time. As a result of a single query placed in a search engine, a large number of search results appear before the user, and digging out the most relevant web link becomes a cumbersome task, which can decrease trust in the search engine. This paper proposes an approach for web structure and web usage mining using an iterative improvement algorithm. Iterative improvement is a randomized algorithm used for solving combinatorial optimization problems. This technique helps in selecting the top T web pages and prioritizing them in order of relevance. An experimental evaluation has been done which shows significant improvement in performance. The parameters used are access frequency, time duration, number of visitors, hubs and authorities; they cover the areas of both web structure and web usage mining.

Kamika Chaudhary, Neena Gupta, Santosh Kumar

On Lagrangian Twin Parametric-Margin Support Vector Machine

A new, simple and linearly convergent scheme is proposed in this paper for the dual formulation of the twin parametric-margin support vector machine. Here, instead of considering the 1-norm error of the slack variables, we consider the 2-norm of the vector of slack variables to make the objective functions strongly convex. Further, the proposed method solves a pair of linearly convergent iterative schemes instead of a pair of quadratic programming problems, as in the case of the twin support vector machine and the twin parametric-margin support vector machine. The proposed method finds two parametric-margin hyperplanes, which makes it less sensitive to heteroscedastic noise structures. Our experiments, performed on synthetic and real-world datasets, conclude that the proposed method has comparable generalization performance and improved learning speed in comparison to the twin support vector machine, the Lagrangian twin support vector machine and the twin parametric-margin support vector machine.

Parashjyoti Borah, Deepak Gupta

Artificial Neural Network and Response Surface Methodology Modelling of Surface Tension of 1-Butyl-3-methylimidazolium Bromide Solution

The thermo-physical properties of ionic liquids are required for engineering and product design applications in many pharmaceutical and food industries. Among the many unique properties of ionic liquids, surface tension plays an important role for various reasons. In the present study, the aim is to investigate the effect of temperature and concentration on the surface tension of the binary solution of 1-butyl-3-methylimidazolium bromide + water. The concentration of the ionic liquid is varied from 0.1 to 0.6% w/w and the temperature ranges from 302.85 to 337.45 K. A quadratic mathematical model has been formulated for predicting the surface tension using response surface methodology with a central composite rotatable design, having a coefficient of determination R² of 0.9807. In addition, a two-layered feedforward backpropagation neural network (2-4-1) is also modelled, which provides better performance compared to the response surface model. The developed ANN model can predict the surface tension with mean square error, root mean square error and percentage absolute average error equal to 0.156, 0.395 and 0.623, respectively.

Divya P. Soman, P. Kalaichelvi, T. K. Radhakrishnan

Optimized Food Recognition System for Diabetic Patients

Now a day’s, diabetic food recognition for various types of diabetic patients is the challenge proposed by the world medical Association. In order to overcome the challenge and limiting the misclassification rate, the Diabetic calorie measurement system was proposed. This proposed system grants user to acquire a photo of food and to measure calories without human intervention, which offer qualitative nutritional information about foods. Before the classification, Scale Invariant Feature Transform (SIFT) features obtained from different color spaces and texture features extracted by using Histogram Statistics(HS), Gray Level Co-occurrence Matrices (GLCM) and the Fast Fourier Transformation (FFT). By these color and textural features, Support Vector Machine (SVM), Extreme Learning Machine (ELM) and Biogeography Based Classification (BBO) are used for classification of diabetic patient comestible and not comestible foods. Based on the classification results, diabetic patient comestible food image involved in the system to measure their calorie by using GLCM and identify whether the food is comestible by TYPE:1 diabetic patient or TYPE:II diabetic patient. Final experimental results indicated that the performance rate of classification accuracy can be improved for FFT based texture feature compared to other features. The BBO optimizer gives optimized features and enhanced performance for very challenging large food image data set.

B. Anusha, S. Sabena, L. Sairamesh

Distributional Semantic Phrase Clustering and Conceptualization Using Probabilistic Knowledgebase

Distributional semantics is an active research area in natural language processing (NLP) that develops methods for quantifying semantic similarities between linguistic elements in large samples of data. Short text conceptualization, on the other hand, is a technique for enriching short texts so that they become more interpretable. This is needed because most text mining tasks, including topic modeling and clustering, are based on statistical methods and do not consider the semantics of text. This paper proposes a novel framework combining distributional semantics and short text conceptualization for better interpretability of phrases in text data. Experiments on real-world datasets show that this method can better enrich phrases that are represented in distributional semantic spaces.

V. S. Anoop, S. Asharaf

Pong Game Optimization Using Policy Gradient Algorithm

The Pong game was the titanic of the gaming industry in the 20th century. Pong is the perfect example for deep reinforcement learning of ATARI games [1]. The game is extremely beneficial for improving concentration and memory capacity. Since the game is played by around 350 million people worldwide in the present scenario, we saw an opportunity in this interesting game; the project has great scope in ATARI game development. We propose a stochastic reinforcement learning technique, the policy gradient algorithm, to optimize the Pong game. The purpose of this study is to improve the algorithms that control the game structure, mechanism and real-time dynamics. We implemented the policy gradient algorithm to improve performance, with training significantly better than a traditional genetic algorithm.

Aditya Singh, Vishal Gupta
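
A minimal REINFORCE sketch in the spirit of the well-known Pong policy-gradient setup: a logistic policy over two actions (up/down) updated with the reward-weighted log-likelihood gradient. The toy environment and reward rule below are stand-ins for ATARI frames and game rewards.

```python
# REINFORCE (vanilla policy gradient) with a linear logistic policy on a toy task.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)                                    # linear policy weights
lr = 0.1

def policy(x):                                     # P(action = up | state x)
    return 1.0 / (1.0 + np.exp(-x @ w))

for episode in range(500):
    grads, rewards = [], []
    for _ in range(10):                            # ten steps per toy episode
        x = rng.normal(size=4)                     # stand-in observation
        p_up = policy(x)
        a = rng.random() < p_up                    # sample the action
        grads.append((float(a) - p_up) * x)        # grad of log pi(a|x), logistic
        rewards.append(1.0 if a == (x[0] > 0) else -1.0)   # toy reward rule
    R = np.sum(rewards)                            # episode return as the weight
    w += lr * R * np.mean(grads, axis=0)           # REINFORCE ascent step

print("learned weights:", np.round(w, 2))
```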

Forensics Data Analysis for Behavioral Pattern with Cognitive Predictive Task

Web browsing analysis is an emerging task for finding a user's behaviour while surfing the Internet. Individual session logs are observed and identified to authenticate users and distinguish intruders from normal users. While moving from one website to another, users leave digital footprints that can be used to track their interesting information, which may assist stakeholders with online advertising and the users' sentiment analysis. A session log is a source for investigating an individual during a digital crime. Semantic forensics is a novel investigation scheme employed to verify a user's behaviour while browsing the Web, as it is a huge repository of information. In this paper, semantic forensics, a new branch of forensic science, is introduced, giving a clear-cut view of how the behavioural pattern of a user influences semantic forensics. It also analyses the results by considering a sample database related to dark Web communication and explains how a session log can be accessed with the help of the various phases of the Cognitive Predictive Task (CPT).

S. Mahaboob Hussain, Prathyusha Kanakam, D. Suryanarayana, Sumit Gupta

Enhancing Personalized Learning with Interactive Note Taking on Video Lectures–An Analysis of Effective HCI Design on Learning Outcome

E-learning, MOOCs and similar technology-mediated pedagogy tools are unavoidable at present, and their adoption, along with social media and affordable devices, continues unabated. Video lectures form a primary and significantly vital part of MOOC instruction delivery design beyond geographical and cultural boundaries. They serve as a gateway to draw students into the course. In going over these videos and accumulating knowledge, there is a high occurrence of cases [1] where the learner forgets some of the concepts taught and focuses more on the minimum amount needed to carry forward so that the participant can attempt the quizzes and tests and achieve a passing grade. This is a step backward when teaching pedagogy is concerned with giving the student a learning outcome that bridges the gap between what they knew of the course and the level their knowledge reaches after they have taken the course to completion. To address this issue, we propose an interaction model that enables the learner to promptly take notes while viewing a video. This paper contains the modeling of UI (user interaction) issues and a discussion of a working prototype, the proposed application MOOCbook, aimed at composing personalized notes for MOOC takers. Content from several world-leading MOOC providers is integrated using an application program interface (API). Findings of the work and an empirical study have revealed encouraging learning outcomes with longer retention of MOOC contents.

Suman Deb, Paritosh Bhattacharya

Intelligent Data Placement in Heterogeneous Hadoop Cluster

The MapReduce programming model and Hadoop have become the de facto standard for data-intensive applications. Hadoop tasks are mapped to certain nodes within the Hadoop cluster holding the data required by the tasks. Such a strategy is intuitively appealing for a homogeneous cluster, both in terms of computation and storage capabilities. However, most commonplace clusters are indeed heterogeneous, since nodes are added over a prolonged period. This necessitates an intelligent data placement strategy among cluster nodes that accounts for the inherent heterogeneity, which otherwise incurs a performance bottleneck. In this paper, we propose a performance-based clustering of Hadoop nodes that subsequently places data among the nodes. Performance-based profiling of nodes can be achieved by running multiple benchmarks in an offline manner and dividing the cluster nodes into two subsets, namely low- and high-performance nodes. Additionally, the execution of Hadoop tasks is monitored using Hadoop's task speculation mechanism, and computations are dynamically migrated for slow-running tasks based on prior knowledge of the data blocks required by the task. The experiments conducted demonstrate that the proposed intelligent data placement improves network utilization and cluster performance.

Subhendu Sekhar Paik, Rajat Subhra Goswami, D. S. Roy, K. Hemant Reddy

Effective Heuristics for the Bi-objective Euclidean Bounded Diameter Minimum Spanning Tree Problem

The Euclidean Bounded Diameter Minimum Spanning Tree (BDMST) Problem aims to find the spanning tree with the lowest cost, or weight, under the constraint that the diameter does not exceed a given integer D, and where the weight of an edge is the Euclidean distance between its two end points (vertices). Several well-known heuristic approaches have been applied to this problem. The bi-objective version of this problem aims to minimize two conflicting objectives, weight (or cost), and diameter. Several heuristics for the BDMST problem have been recast for the bi-objective BDMST problem (or BOMST problem) and their performance studied on the entire range of possible diameter values. While some of the extant heuristics are seen to dominate other heuristics over certain portions of the Pareto front of solutions, no single heuristic performs well over the entire range. This paper presents a hybrid tree construction heuristic that combines a greedy approach with a heuristic strategy for constructing effective tree “backbones”. The performance of the proposed heuristic is shown to be consistently superior to the other extant heuristics on a standard benchmark suite of dense Euclidean graphs widely used in the literature.

V. Prem Prakash, C. Patvardhan, Anand Srivastav
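
For intuition, here is a simplified single-center greedy construction of a diameter-bounded tree (a stand-in for the paper's hybrid backbone heuristic, which is more elaborate): vertices are attached greedily by Euclidean distance while a depth bound enforces the diameter constraint.

```python
import math

def greedy_bdmst(points, D, center=0):
    """Grow a depth-restricted spanning tree from a chosen center:
    repeatedly attach the cheapest outside vertex to a tree vertex
    whose depth still allows the diameter bound (depth < D // 2)."""
    dist = lambda a, b: math.dist(points[a], points[b])
    depth = {center: 0}
    edges, outside = [], set(range(len(points))) - {center}
    while outside:
        u, v = min(((u, v) for v in outside for u in depth
                    if depth[u] < D // 2), key=lambda e: dist(*e))
        edges.append((u, v))
        depth[v] = depth[u] + 1
        outside.remove(v)
    return edges

points = [(0, 0), (1, 0), (2, 0), (0, 2), (5, 5)]
print(greedy_bdmst(points, D=4))   # tree edges respecting diameter <= 4
```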

Analysis of Least Mean Square and Recursive Least Squared Adaptive Filter Algorithm for Speech Enhancement Application

Speech enhancement is a vital area of research: the performance of speech-based human-machine applications, such as automatic speech recognition and in-car communication systems, depends on the quality of the communicated speech. Various researchers have used different methodologies to improve the quality of the speech signal. In this paper, an attempt is made to analyze the performance of the Least Mean Square (LMS) and Recursive Least Squared (RLS) adaptive filter algorithms for the speech enhancement application. The performance indices used for the evaluation are Mean Square Error (MSE), Signal-to-Noise Ratio (SNR), and execution time. A detailed analysis is carried out, the results are validated experimentally, and certain modifications to the algorithms are suggested. The experimentation reveals that LMS converges faster than RLS, while the computational complexity of RLS is very high compared to LMS.

Mrinal Bachute, R. D. Kharadkar
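
The LMS update at the heart of the comparison is compact enough to sketch; the filter order, step size, and toy setup below are illustrative choices, not the paper's experimental configuration.

```python
import numpy as np

def lms_filter(x, d, order=8, mu=0.005):
    """Least Mean Square adaptive filter: for each sample, estimate d[n]
    from the last `order` inputs and nudge the weights along the
    negative gradient of the instantaneous squared error."""
    w = np.zeros(order)
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]        # most recent samples first
        y[n] = w @ u                    # filter output
        e[n] = d[n] - y[n]              # estimation error
        w += 2 * mu * e[n] * u          # LMS weight update
    return y, e, w

# Toy run: adapt the filter so its output tracks a clean sine from a
# noisy version of the same signal.
rng = np.random.default_rng(0)
t = np.arange(2000)
clean = np.sin(2 * np.pi * t / 50)
noisy = clean + 0.5 * rng.normal(size=t.size)
y, e, w = lms_filter(noisy, clean)
print(f"MSE over last 500 samples: {np.mean(e[-500:]**2):.4f}")
```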

A Novel Statistical Pre-processing Based Spatial Anomaly Detection Model on Cyclone Dataset

Anomaly detection in a heterogeneous severity feature space is a challenging issue for which only a few models have been designed and developed. Detecting spatial objects and their patterns helps to extract essential spatial decision patterns from large spatial datasets. Traditional spatial anomaly detection techniques fail to process and detect anomalies due to noise, sparsity, and imbalance problems. Moreover, most traditional statistical anomaly detection models consider only homogeneous types of objects for outlier detection and ignore the effect of heterogeneous objects. In this work, a novel statistical pre-processing based spatial anomaly detection model is proposed to find anomalies in a cyclone dataset. Experimental results show that the proposed model achieves a high detection rate with a lower mean error rate compared to traditional anomaly detection models.

Lakshmi Prasanthi Malyala, Nandam Sambasiva Rao
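
The abstract does not spell out the pre-processing details, so the sketch below is only a generic statistical baseline in the same spirit: per-feature standardization with a z-score threshold over hypothetical cyclone records.

```python
import numpy as np

def zscore_anomalies(X, threshold=3.0):
    """Flag rows whose maximum absolute per-feature z-score exceeds the
    threshold; a generic statistical baseline, not the paper's model."""
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9
    z = np.abs((X - mu) / sigma)
    return np.where(z.max(axis=1) > threshold)[0]

# Hypothetical cyclone records: [wind speed (km/h), pressure (hPa)]
X = np.array([[120, 980], [125, 978], [118, 982], [310, 870], [122, 979]])
print(zscore_anomalies(X, threshold=1.5))  # row 3 stands out
```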

Mathematical Modeling of Economic Order Quantity in a Fuzzy Inventory Problem Under Shortages

The purpose of this article is to develop an EOQ model to find the optimal order quantity of inventory items when shortages are allowed. Here shortages are uncertain and characterized by a triangular fuzzy number, while all other parameters, such as carrying cost, ordering cost, demand, and time, are considered crisp numbers. For defuzzification, the sign distance ranking method has been employed. The total cost in the fuzzy environment is calculated, and the corresponding crisp values are then obtained by applying the sign distance method. The results are compared and discussed with a sensitivity analysis. To demonstrate the efficiency and feasibility of the proposed approach, one numerical example [2] is solved with the existing method.

P. K. De, A. C. Paul, Mitali Debnath
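
The sign distance defuzzification the authors rely on has a standard closed form for a triangular fuzzy number, sketched here from the usual alpha-cut definitions (the paper's specific cost expressions are not reproduced):

```latex
% Sign distance of a triangular fuzzy number \tilde{A}=(a,b,c), with
% \alpha-cuts A_L(\alpha)=a+(b-a)\alpha and A_R(\alpha)=c-(c-b)\alpha:
d(\tilde{A},\tilde{0})
  = \frac{1}{2}\int_{0}^{1}\bigl[A_L(\alpha)+A_R(\alpha)\bigr]\,d\alpha
  = \frac{a+2b+c}{4}.
```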

Dynamic Processing and Analysis of Continuous Streaming Stock Market Data Using MBCQ Tree Approach

The most challenging facet of today's analysis is dealing with real-time dynamic data that changes over time and hence arrives as a continuous flow, often referred to as streaming data. A dedicated database is designed to store the continuous data and to query the incoming stream directly rather than working on snapshot data. Consequently, querying such dynamic data presents various challenges in data distribution, optimized retrieval of result sets, and query processing. To overcome these issues, a new Multilevel with Balanced Continuous Query tree (MBCQ) indexing technique is introduced to handle querying of continuous data efficiently. The proposed system is applied to stock market analysis and addresses these issues by implementing a dynamic timestamp-based sliding window that controls incoming data and filters it down to the data that needs to be viewed. The results are used to make valuable stock decisions that help the analyst win a stock deal. Memory handling and index maintenance costs are also validated against the existing technique, and the proposed system is observed to perform better.

M. Ananthi, M. R. Sumalatha
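
As a simplified stand-in for the MBCQ-tree machinery, the sketch below shows the dynamic timestamp-based sliding window idea: only ticks newer than a configurable span stay queryable, so continuous queries always run on current data.

```python
from collections import deque

class SlidingWindow:
    """Timestamp-based sliding window over a tick stream: keeps only
    ticks newer than `span_sec` seconds (a simplified stand-in for the
    paper's MBCQ-tree indexing)."""
    def __init__(self, span_sec):
        self.span = span_sec
        self.ticks = deque()          # (timestamp, symbol, price)

    def insert(self, ts, symbol, price):
        self.ticks.append((ts, symbol, price))
        while self.ticks and self.ticks[0][0] < ts - self.span:
            self.ticks.popleft()      # expire stale ticks

    def query_max(self, symbol):
        prices = [p for t, s, p in self.ticks if s == symbol]
        return max(prices) if prices else None

w = SlidingWindow(span_sec=60)
for ts, price in [(0, 101.0), (30, 103.5), (95, 99.2)]:
    w.insert(ts, "INFY", price)
print(w.query_max("INFY"))  # 99.2 (older ticks fell outside the window)
```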

Energy Consumption Forecast Using Demographic Data Approach with Canaanland as Case Study

Various methods and models for forecasting energy have been proposed in the past, but exhaustive research shows that these methods are highly complex and usually require past or historical load consumption data in their analysis. Hence, a new model is presented that is simple and makes use only of readily available demographic data, such as the energy consumption in kWh per capita of the country in view, and the population and land mass area of the urban community under study (Canaanland in this case). This paper restates the model in a clearer, more explicit, and more applicable way, developing a flow chart that shows a step-by-step process for forecasting energy consumption and MATLAB code that performs the actual computation in kVAh for each forecast year. The results show that although the forecast values are a bit lower than the actual peak consumption for the past years covered in the forecast, the model still gives an average estimate of the future energy needs of any community without previous consumption information, which is very useful for planning.

Oluwaseun Aderemi, Sanjay Misra, Ravin Ahuja
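
A minimal sketch of the arithmetic implied by the demographic approach, with made-up numbers: per-capita consumption times projected population, converted from kWh to kVAh with an assumed power factor (the paper's actual MATLAB routine and figures are not reproduced).

```python
PER_CAPITA_KWH = 145.0   # hypothetical national kWh per capita per year
GROWTH_RATE = 0.03       # assumed annual population growth
POWER_FACTOR = 0.8       # assumed for the kWh -> kVAh conversion

def forecast_kvah(base_population, years):
    """Print a simple demographic energy forecast for each year."""
    for year in range(1, years + 1):
        population = base_population * (1 + GROWTH_RATE) ** year
        kwh = PER_CAPITA_KWH * population
        print(f"year {year}: {kwh / POWER_FACTOR:,.0f} kVAh")

forecast_kvah(base_population=40_000, years=3)
```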

A Cloud-Based Intelligent Toll Collection System for Smart Cities

Electronic Toll Collection (ETC) systems may be adopted by city managers to combat the problems of long vehicular queues, fuel wastage, high accident risks, and environmental pollution that come with traditional or manual toll collection systems. In this paper, an intelligent system is developed to eliminate these problems in a smart city, based on seamless interconnections of Wireless Sensor Networks (WSNs) and web and mobile applications that run on an Internet of Things (IoT)-enabled cloud platform. A ZigBee WSN is designed and implemented using an Arduino UNO, XBee S2 radios, an XBee Shield, and a Seeduino GPRS Shield. For vehicle owners to make toll payments, view historical toll data, and get toll news feeds, a web application and a mobile application are designed and implemented using Hyper Text Mark-up Language (HTML), Cascading Style Sheets (CSS), JavaScript, and Hyper Text Pre-processor (PHP). The mobile application is deployed on the Android platform. A cloud platform is also developed to provide business logic functionality, using PHP as the scripting language and MySQL as the database engine. Deployment of the developed ETC system in smart and connected communities will drastically reduce these challenges in urban centers.

Segun I. Popoola, Oluwafunso A. Popoola, Adeniran I. Oluwaranti, Aderemi A. Atayero, Joke A. Badejo, Sanjay Misra

An Announcer Based Bully Election Leader Algorithm in Distributed Environment

In a distributed system, a job is divided into sub-jobs that are distributed among the active nodes in the network, and these nodes communicate via message passing. For better performance and consistency, a leader or coordinator node is needed. The leader node need not be the same at all times, owing to out-of-service conditions, crash failures, and so on. Over the past years, numerous algorithms have been introduced to select a new leader when the current one is dead or has crashed. The Bully algorithm is a well-known traditional method for this situation, in which the node with the highest ID is selected as leader; however, it has drawbacks such as message-passing complexity, heavy network traffic, and redundancy. To overcome these problems, we introduce an announcer-based Bully leader election algorithm, a modified version of the original that addresses the above-mentioned shortcomings. In the proposed algorithm, an announcer decides the next leader or coordinator after the current leader fails. Our analytical comparison shows that the proposed algorithm uses less message passing than existing algorithms.

Minhaj Khan, Neha Agarwal, Saurabh Jaiswal, Jeeshan Ahmad Khan
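
A toy sketch of the announcer idea follows; the exact message flow of the proposed protocol is not detailed in the abstract, so this only shows the core rule of a designated announcer naming the highest-ID live node as coordinator.

```python
class Announcer:
    """Tracks live nodes and names the next coordinator, instead of
    every node flooding election messages as in the classic Bully
    algorithm (an illustrative sketch, not the paper's exact protocol)."""
    def __init__(self):
        self.alive = set()

    def register(self, node_id):
        self.alive.add(node_id)

    def report_failure(self, node_id):
        self.alive.discard(node_id)

    def elect(self):
        """Pick the highest-ID live node, per the classic Bully rule, but
        with O(n) notifications instead of O(n^2) election messages."""
        leader = max(self.alive)
        print(f"announcer: node {leader} is the new coordinator")
        return leader

a = Announcer()
for n in (1, 4, 7, 9):
    a.register(n)
a.report_failure(9)   # current leader crashes
assert a.elect() == 7
```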

Deep Learning Based Adaptive Linear Collaborative Discriminant Regression Classification for Face Recognition

Face recognition is a popular research problem in the domain of image analysis. The main steps involved are face verification and face classification. Face verification algorithms have been well defined in recent years, but face classification still poses problems for researchers. In this regard, we propose face recognition using Adaptive Linear Collaborative Discriminant Regression Classification (ALCDRC) with a Deep Learning (DL) algorithm, considering different sets of training images. In particular, ALCDRC uses different weights to describe different training sets and uses this weighting data to compute between-class and within-class reconstruction errors; it then tries to find an optimal projection matrix that maximizes the ratio of the Between-Class Reconstruction Error (BCRE) to the Within-Class Reconstruction Error (WCRE). Experimentation on two challenging face databases, ORL and YALE-B, yields promising results.

K. Shailaja, B. Anuradha
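
ALCDRC builds on regression-based classification; the sketch below shows the underlying plain linear-regression-classification step (reconstruct the probe from each class's training samples and pick the class with the smallest reconstruction error) on synthetic data, not the weighted ALCDRC projection itself.

```python
import numpy as np

def lrc_predict(train, y):
    """Linear regression classification: reconstruct probe y from each
    class's training matrix (features x samples) and return the class
    with the smallest reconstruction error."""
    errors = {}
    for label, X in train.items():
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        errors[label] = np.linalg.norm(y - X @ beta)
    return min(errors, key=errors.get), errors

rng = np.random.default_rng(1)
# Toy "face" vectors: 50 features, 5 samples per class, class means differ.
train = {c: rng.normal(loc=3 * c, size=(50, 5)) for c in (0, 1)}
probe = rng.normal(loc=3, size=50)      # drawn near class 1
label, errors = lrc_predict(train, probe)
print(label, {c: round(e, 2) for c, e in errors.items()})
```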

A Novel Clustering Algorithm for Leveraging Data Quality in Wireless Sensor Network

To date, research in Wireless Sensor Networks has mainly been inclined towards rectifying problems associated with the nodes and their protocols, e.g., energy problems, clustering issues, security loopholes, uncertain traffic, etc. There has been less emphasis on the user's demand, i.e., data quality. As wireless nodes are subject to various adverse wireless conditions while carrying out data aggregation, it is quite inevitable that the forwarded aggregated data may not be of good quality. We therefore present a novel clustering technique that concentrates on achieving the lowest possible error. With the aid of analytical modeling, the clustering technique is formulated using probability theory and targets the nodes with the highest retention of redundant information so that this redundancy can be mitigated effectively. The study outcome shows better data quality for the proposed system.

B. Prathiba, K. Jaya Sankar, V. Sumalatha

Smart and Innovative Trends in Natural Language Processing

Frontmatter

Development of a Micro Hindi Opinion WordNet and Aligning with HOWN Ontology for Automatic Recognition of Opinion Words from Hindi Documents

The Indian languages are deprived in terms of the availability of natural language tools. In particular, tools for the specific opinion mining task of determining opinion word orientation in the native language are not available. Reasoning about such natural language words requires a semantically rich lexical resource. When an ontology is aligned with a lexical resource like WordNet, a rich knowledge base is created that can be useful for various information retrieval and natural language processing applications. To this end, a micro-level Hindi Opinion WordNet is developed and aligned with the Hindi Opinion WordNet Ontology (HOWN). An opinion lexicon (both positive and negative Hindi words) for 700 Hindi adjectives is also developed. The synset ID values of the Hindi opinion synsets are mapped to the synset ID values of the corresponding English opinion WordNet synsets. A front-end query interface is designed to query the HOWN ontology for opinion word details; the query is transformed into SPARQL format. This enables the automatic recognition of opinionated terms from Hindi documents by the machine.

D. Teja Santosh, Vikram Sunil Bajaj, Varun Sunil Bajaj
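
To make the query path concrete, here is a hypothetical example of querying such an ontology with SPARQL via rdflib; the IRIs, property names, and file name are illustrative, not the actual HOWN schema.

```python
from rdflib import Graph

g = Graph()
g.parse("hown.owl")   # hypothetical local copy of the ontology

query = """
PREFIX hown: <http://example.org/hown#>
SELECT ?word ?polarity WHERE {
    ?entry hown:hasWord ?word ;
           hown:hasPolarity ?polarity .
    FILTER (?polarity = "negative")
}
"""
# Each result row binds an opinion word and its polarity label.
for word, polarity in g.query(query):
    print(word, polarity)
```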

Evaluation and Analysis of Word Embedding Vectors of English Text Using Deep Learning Technique

Word embedding is the process of mapping words into real-number vectors. The representation maps each word uniquely to an exclusive vector in the vector space of the word corpus. Word embedding in natural language processing is gaining popularity these days due to its capability to support real-world tasks involving the syntactic and semantic entailment of text. Syntactic text entailment comprises tasks like Parts of Speech (POS) tagging, chunking, and tokenization, whereas semantic text entailment covers tasks such as Named Entity Recognition (NER), Complex Word Identification (CWI), sentiment classification, community question answering, word analogies, and Natural Language Inference (NLI). This study explores eight word embedding models used for the aforementioned real-world tasks and proposes a novel word embedding using deep learning neural networks. The experimentation is performed on two freely available datasets: the English Wikipedia dump corpus of April 2017 and the pre-processed Wikipedia text8 corpus. The performance of the proposed word embedding is validated against a baseline of four traditional word embedding techniques evaluated on the same corpus. The average result over 10 epochs shows that the proposed technique performs better than the other word embedding techniques.

Jaspreet Singh, Gurvinder Singh, Rajinder Singh, Prithvipal Singh
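
For reference, one of the traditional baselines of this kind can be trained on the text8 corpus in a few lines with gensim (4.x API); the hyperparameters below are illustrative, not those used in the study.

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import Text8Corpus

sentences = Text8Corpus("text8")   # pre-processed Wikipedia text8 corpus
model = Word2Vec(sentences, vector_size=100, window=5,
                 min_count=5, sg=1, epochs=5)   # skip-gram variant

# Semantic checks: nearest neighbours and a classic word analogy.
print(model.wv.most_similar("king", topn=3))
print(model.wv.most_similar(positive=["king", "woman"],
                            negative=["man"], topn=1))
```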

POS Tagging of Hindi Language Using Hybrid Approach

Natural language processing (NLP) is the process of extracting meaningful information from natural language. Part-of-speech (POS) tagging is considered one of the important tools for NLP: it is the process of assigning to every word in a sentence a tag for a particular part of speech, such as noun, pronoun, adjective, verb, adverb, preposition, or conjunction. Since Hindi is a natural language, there is a need to perform natural language processing on Hindi sentences, so this paper presents POS tagging of the Hindi language using a hybrid approach. First, we tag Hindi words with the help of a WordNet dictionary, which contains around 1 lakh words in unique class categories such as noun, verb, adjective, and adverb. Since many words still remain untagged, we use a rule-based approach to assign tags to the untagged words, and we use an HMM model as a statistical approach to remove ambiguity. We evaluated our system over a corpus of 1000 sentences and 15000 words with 7 different standard part-of-speech tags for Hindi, and achieved an accuracy of 92%.

Nidhi Mishra, Simpal Jain
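
The hybrid pipeline can be sketched as a cascade, with toy dictionary entries and suffix rules standing in for the WordNet dictionary and rule set used in the paper:

```python
DICTIONARY = {"लड़का": "NOUN", "सुंदर": "ADJ", "खेलता": "VERB"}  # toy lexicon
SUFFIX_RULES = [("ता", "VERB"), ("ों", "NOUN")]                 # toy rules

def hybrid_tag(tokens):
    """Tag tokens by dictionary lookup, falling back to suffix rules."""
    tags = []
    for word in tokens:
        tag = DICTIONARY.get(word)             # phase 1: dictionary lookup
        if tag is None:                        # phase 2: rule-based fallback
            tag = next((t for suf, t in SUFFIX_RULES
                        if word.endswith(suf)), "UNK")
        tags.append((word, tag))
    # phase 3 (not shown): an HMM/Viterbi pass would resolve ambiguities
    return tags

print(hybrid_tag(["लड़का", "खेलता", "गाता"]))
```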

Prosody Detection from Text Using Aggregative Linguistic Features

With the advent of the digital revolution and new technologies, the demand for convenient interfaces has increased. A speech interface in a person's native or first language offers the greatest ease in accessing information. Tamil Text-To-Speech synthesis is one such speech interface, and this paper focuses on developing a prosody prediction model to enhance the naturalness of the synthesized speech. The proposed model classifies multifarious Tamil text into three classes of high, mid, and low prominence using a generated aggregative linguistic feature score. The model achieved an f-measure of 0.84 when tested against multifarious text, and its performance is certainly encouraging for exploring further in this direction.

Vaibhavi Rajendran, G. Bharadwaja Kumar
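
A toy sketch of classifying text by an aggregated feature score into three prominence classes; the features, weights, and thresholds are placeholders, since the abstract does not enumerate the actual linguistic features.

```python
# Placeholder feature weights for the aggregative linguistic score.
WEIGHTS = {"syllable_count": 0.5, "is_phrase_final": 2.0, "has_long_vowel": 1.0}

def prominence_class(features, low=1.5, high=3.0):
    """Aggregate weighted features and bin the score into low/mid/high."""
    score = sum(WEIGHTS[k] * v for k, v in features.items())
    return "low" if score < low else "mid" if score < high else "high"

print(prominence_class({"syllable_count": 2, "is_phrase_final": 1,
                        "has_long_vowel": 0}))  # "high" (score 3.0)
```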

Deep Neural Network Based Recognition and Classification of Bengali Phonemes: A Case Study of Bengali Unconstrained Speech

This paper proposes a phoneme recognition and classification model for Bengali continuous speech. A Deep Neural Network based model has been developed for the recognition and classification tasks, in which a Stacked Denoising Autoencoder is used to generatively pre-train the deep network; autoencoders are stacked to form the deep-structured network. Mel-frequency cepstral coefficients are used as the input data vector. Each hidden layer contains 200 hidden units, and the number of hidden layers is kept at three. Phoneme posterior probabilities are derived in the output layer. The proposed model is trained and tested on unconstrained Bengali continuous speech data collected from different sources (TV, radio, and normal conversation in a laboratory). In the recognition phase, the Phoneme Error Rate of the deep-structured model is 24.62% for training and 26.37% for testing, while in the classification task the model achieves an average phoneme classification accuracy of 86.7% in training and 82.53% in testing.

Tanmay Bhowmik, Amitava Choudhury, Shyamal Kumar Das Mandal
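
The fine-tuning network shape described above is easy to sketch in Keras (three hidden layers of 200 units, MFCC input, softmax posteriors); the autoencoder pre-training and the data pipeline are omitted, and the input and output sizes are assumptions.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

NUM_MFCC = 39       # assumed MFCC-plus-deltas frame size
NUM_PHONEMES = 45   # assumed Bengali phoneme inventory size

model = Sequential([
    Input(shape=(NUM_MFCC,)),
    Dense(200, activation="sigmoid"),   # weights would be initialized
    Dense(200, activation="sigmoid"),   # from the stacked denoising
    Dense(200, activation="sigmoid"),   # autoencoder pre-training
    Dense(NUM_PHONEMES, activation="softmax"),  # phoneme posteriors
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```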

Do Heavy and Superheavy Syllables Always Bear Prominence in Hindi?

Lexical stress in Hindi is not distinctive in nature. Past studies on the Hindi stress system agree that syllable weight is the feature that most influences stress. In this paper, we investigate the change in the duration of the syllable nucleus as an acoustic correlate of syllable weight. The duration is captured in the following contexts: (i) vowel identity, (ii) voiced/voiceless coda in closed syllables, and (iii) the word being uttered after a stressed versus an unstressed syllable. It is found that heavy syllables are prominent only in limited contexts, and the prominence pattern of a heavy syllable is largely affected by the aforementioned contexts. Moreover, superheavy syllables are found to be independent of these contexts and are always prominent.

Somnath Roy, Bimrisha Mali

Design and Development of a Dictionary Based Stemmer for Marathi Language

Stemming is one of the term conflation techniques used to reduce the morphological variations of a term to a unique form called a "stem". It is one of the significant pre-processing steps in various applications of natural language processing (NLP) and information retrieval (IR), such as machine translation, named entity recognition, and automated document processing. In this paper, we focus on the development of an automated stemmer for the Marathi language, adopting a dictionary lookup technique for the task. The experiment is conducted on Marathi news articles consisting of 4500 words. The proposed stemmer achieved a maximum accuracy of 80.6% across nine different runs, with a low over-stemming error rate. The satisfactory results encourage us to use this stemmer for the information retrieval task.

Harshali B. Patil, Neelima T. Mhaske, Ajay S. Patil
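
A minimal dictionary-lookup stemmer along these lines is sketched below, with a toy lexicon and suffix list standing in for the Marathi resources; returning the word unchanged when no dictionary-validated stem is found is what keeps the over-stemming rate low.

```python
STEM_DICTIONARY = {"मुलगा", "शाळा", "पुस्तक"}   # toy set of known stems
SUFFIXES = ["ांना", "ाने", "ात", "ा", "े"]      # toy suffixes, longest first

def stem(word):
    """Return the dictionary-validated stem of a word, else the word."""
    if word in STEM_DICTIONARY:          # already a known stem
        return word
    for suf in SUFFIXES:                 # strip the longest matching suffix
        if word.endswith(suf) and word[:-len(suf)] in STEM_DICTIONARY:
            return word[:-len(suf)]
    return word                          # unchanged: avoids over-stemming

print(stem("पुस्तकात"))   # -> "पुस्तक"
```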

Deep Convolutional Neural Network for Handwritten Tamil Character Recognition Using Principal Component Analysis

Offline handwritten character recognition is one of the most challenging research problems in the field of pattern recognition, owing to the unique writing style of each user. Many techniques have been presented for the recognition of handwritten English, Bangla, Gurmukhi, Chinese, Devanagari, etc. Due to the complexity of Tamil characters, their recognition is a challenging task in the field of machine learning. To handle this complexity, deep learning has entered the field of machine learning; the convolutional neural network is a special kind of deep learning network designed to work with images. The main idea behind this research is therefore to develop a novel method that combines principal component analysis (PCA) and a convolutional neural network for feature extraction, in order to recognize Tamil characters in a superior way.

M. Sornam, C. Vishnu Priya
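
One plausible reading of the PCA-plus-CNN pipeline is sketched below on random stand-in data: PCA compresses each flattened character image, and the reshaped component maps feed a small CNN. The layer sizes and the 8x8 reshape are assumptions, not the paper's exact architecture.

```python
import numpy as np
from sklearn.decomposition import PCA
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Flatten, Input, MaxPooling2D

NUM_CLASSES = 156                      # assumed number of Tamil classes
X = np.random.rand(1000, 64 * 64)      # stand-in for flattened images
y = np.random.randint(NUM_CLASSES, size=1000)

# Step 1: PCA compresses each 4096-pixel image to 64 principal components.
codes = PCA(n_components=64).fit_transform(X).reshape(-1, 8, 8, 1)

# Step 2: a small CNN learns from the reshaped component maps.
model = Sequential([
    Input(shape=(8, 8, 1)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(codes, y, epochs=1, batch_size=32)
```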

Neural Machine Translation System for Indic Languages Using Deep Neural Architecture

Translating into an Indic language is a challenging task. Indic languages are based on Sanskrit and have rich and diverse grammar; owing to this vast grammar, building a traditional rule-based machine translation system requires a very large number of complex rules. In this work, we have created an Indic machine translation system that utilizes recent advancements in machine translation using deep neural architectures. The results presented in this article show that with neural machine translation we can achieve more natural translations for Indic languages such as Hindi, which was previously not efficiently possible with rule-based or phrase-based translation systems.

Parth Shah, Vishvajit Bakarola, Supriya Pati
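
As a sketch of the deep neural architecture family used for such systems, here is a minimal Keras encoder-decoder skeleton (GRU-based, teacher-forced); the vocabulary and layer sizes are illustrative, and the paper's exact model is not reproduced.

```python
from tensorflow.keras.layers import Dense, Embedding, GRU, Input
from tensorflow.keras.models import Model

src_vocab, tgt_vocab, emb_dim, hid = 8000, 8000, 256, 512  # assumed sizes

# Encoder: embeds source tokens and summarizes them into a fixed state.
enc_in = Input(shape=(None,))
enc_emb = Embedding(src_vocab, emb_dim)(enc_in)
_, enc_state = GRU(hid, return_state=True)(enc_emb)

# Decoder: predicts target tokens conditioned on the encoder state.
dec_in = Input(shape=(None,))
dec_emb = Embedding(tgt_vocab, emb_dim)(dec_in)
dec_out = GRU(hid, return_sequences=True)(dec_emb, initial_state=enc_state)
logits = Dense(tgt_vocab, activation="softmax")(dec_out)

model = Model([enc_in, dec_in], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```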

Backmatter
