
About this Book

This volume contains 70 papers presented at CSI 2014: Emerging ICT for Bridging the Future, the Proceedings of the 49th Annual Convention of the Computer Society of India. The convention was held during 12-14 December 2014 at Hyderabad, Telangana, India. The papers focus mainly on Machine Learning & Computational Intelligence, Ad hoc Wireless Sensor Networks and Network Security, Data Mining, Data Engineering and Soft Computing.



A Non-Local Means Filtering Algorithm for Restoration of Rician Distributed MRI

Denoising algorithms for medical images are generally constrained by their dependence on the type of noise model, as well as by the introduction of artifacts (removal of fine structures) upon restoration. In the context of magnitude Magnetic Resonance Images (MRI), where noise is approximated as Rician rather than additive Gaussian, conventional denoising algorithms fail to produce satisfactory results. This paper presents a Non-Local Means (NLM) based filtering algorithm for denoising Rician distributed MRI. The proposed algorithm utilizes the concept of self-similarity, taking the weighted average of all pixels by identifying similar and dissimilar windows based on Euclidean distance. Simulations are carried out on MRI contaminated with different levels of Rician noise and are evaluated using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) as quality parameters. The proposed algorithm shows significant improvement when compared to other state-of-the-art MRI denoising algorithms.
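The weighted-average step the abstract describes can be sketched as follows. This is a minimal illustration of the NLM idea (toy parameters, clamped borders), not the authors' implementation:

```python
import math

def nlm_denoise(img, patch=1, search=2, h=10.0):
    """Simplified non-local means: each pixel becomes a weighted
    average of search-window pixels, with weights decaying in the
    Euclidean distance between the surrounding patches."""
    rows, cols = len(img), len(img[0])

    def patch_at(r, c):
        vals = []
        for dr in range(-patch, patch + 1):
            for dc in range(-patch, patch + 1):
                rr = min(max(r + dr, 0), rows - 1)   # clamp at borders
                cc = min(max(c + dc, 0), cols - 1)
                vals.append(img[rr][cc])
        return vals

    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            p0 = patch_at(r, c)
            wsum = acc = 0.0
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    rr = min(max(r + dr, 0), rows - 1)
                    cc = min(max(c + dc, 0), cols - 1)
                    p1 = patch_at(rr, cc)
                    d2 = sum((a - b) ** 2 for a, b in zip(p0, p1)) / len(p0)
                    w = math.exp(-d2 / (h * h))   # similar patch -> large weight
                    wsum += w
                    acc += w * img[rr][cc]
            out[r][c] = acc / wsum
    return out
```

Handling the Rician bias correction that magnitude MRI requires is a separate step the paper addresses; the sketch above covers only the self-similarity weighting.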

Vikrant Bhateja, Harshit Tiwari, Aditya Srivastava

Evaluation of Pheromone Update in Min-Max Ant System Algorithm to Optimizing QoS for Multimedia Services in NGNs

Next Generation Networks (NGN) aim to provide seamless Quality of Service (QoS) for converged mobile and fixed multimedia services, anywhere and anytime, to users through different access network technologies. Factors affecting QoS in NGN are speech encoders, delay, jitter, packet loss and echo. The negotiation and dynamic adaptation of QoS is currently considered one of the key features of the NGN concept. Achieving optimal QoS for multimedia services in NGNs under multi-flow, resource-constrained utility maximization is a complex problem. In this paper, we propose a novel Min-Max Ant System (MMAS) algorithm with a pheromone updating scheme to solve it. We compared the effectiveness of MMAS with SMMAS (Smoothed MMAS), MLAS (Multi-level Ant System) and 3-LAS (Three-level Ant System). The computational results show that our approach is currently among the best performing algorithms for this problem.
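The distinguishing feature of MMAS is its bounded pheromone update. A generic sketch of that update (parameter values are illustrative, not the paper's):

```python
def mmas_update(pheromone, best_path, rho=0.1, q=1.0,
                tau_min=0.01, tau_max=5.0):
    """Min-Max Ant System pheromone update: evaporate on every edge,
    reinforce only the best ant's path, then clamp all trails into
    [tau_min, tau_max] to avoid premature convergence."""
    for edge in pheromone:
        pheromone[edge] *= (1 - rho)              # evaporation
    for edge in best_path:
        pheromone[edge] += q                      # best-ant deposit
    for edge in pheromone:
        pheromone[edge] = min(max(pheromone[edge], tau_min), tau_max)
    return pheromone
```

The clamping step is what separates MMAS from the basic Ant System, and is the part the paper's smoothed and multi-level variants modify.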

Dac-Nhuong Le

Combining Classifiers for Offline Malayalam Character Recognition

Offline character recognition is one of the most challenging areas in the domain of pattern recognition and computer vision. Here we propose a novel method for offline Malayalam character recognition using a multiple classifier combination technique. From the preprocessed character images, we extract two features: Chain Code Histogram and Fourier Descriptors. These features are fed as input to two feedforward neural networks. Finally, the results of both neural networks are combined using a weighted majority technique. The proposed system is tested under two schemes: writer-independent and writer-dependent. It achieves an accuracy of 92.84% and 96.24% for the writer-independent and writer-dependent schemes respectively, considering the top 3 choices.
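The combination step can be sketched in a few lines. This is a generic weighted combination of two classifiers' per-class scores, assuming each network outputs one score per class (the weights here are placeholders, not the paper's):

```python
def weighted_majority(scores_a, scores_b, w_a=0.5, w_b=0.5):
    """Combine two classifiers' per-class score vectors by weighted
    sum and return the index of the winning class."""
    combined = [w_a * a + w_b * b for a, b in zip(scores_a, scores_b)]
    return max(range(len(combined)), key=combined.__getitem__)
```

Raising one weight shifts the decision toward the corresponding classifier, which is how the relative reliability of the two feature sets can be encoded.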

Anitha Mary M. O. Chacko, P. M. Dhanya

A Probabilistic Based Multi-label Classification Method Using Partial Information

Recent studies offer many approaches to multi-label classification, which is used in applications such as protein function classification, music categorization and semantic scene classification. These approaches employ evaluation metrics such as Hamming loss and subset loss, but they are deterministic in nature. In this paper, we concentrate on probabilistic models and develop a new probabilistic approach, based on logistic regression, to solve multi-label classification. A second approach is based on the idea of grouping related labels: it trains one classifier for each group, whose label is called the group representative, and predicts the remaining labels from the predicted labels of the group representatives. The relations between labels are found using association rule mining.
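The grouping idea reduces to a simple inference step once the representatives are predicted. A hypothetical sketch (the rule format is illustrative; the paper mines rules with association rule mining):

```python
def predict_labels(rep_predictions, rules):
    """Grouped multi-label prediction: classifiers decide only the
    group representatives; the remaining labels are implied by
    mined rules mapping a representative to its associated labels."""
    labels = {lab for lab, on in rep_predictions.items() if on}
    for rep in list(labels):
        labels.update(rules.get(rep, []))   # expand via association rules
    return labels
```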

Gangadhara Rao Kommu, Suresh Pabboju

Sensor Based Radio Controlled Multi-utility Solar Vehicle to Facilitate Plough in Agriculture

This paper proposes an approach to implement a computer-driven multi-utility engineering vehicle that can be operated from any location within radio-frequency range, reducing human effort. Based on supervised autonomy, the tractor is able to plough the field with a defined pressure on the soil. A simple desktop computer can operate this radio-controlled tractor through the arrow keys, viewing the scene from a camera mounted on its front. This is a prototype model which can be implemented in a real-world environment. The experiments demonstrate a significant and positive effect on ploughing a field without visiting it, using solar-charged batteries only. Looking ahead, estimates are that farmers will be favoured.

Vishu, Prateek Agrawal, Sanjay Kumar Singh, Kirandeep Kaur, Karamjeet Kaur

Analysis of Performance of MIMO Ad Hoc Network in Terms of Information Efficiency

Real-time channel models are employed to account for path loss in terms of frequency and transmission range in a Rayleigh fading environment. Different interference levels are encountered across a packet transmission, creating interference diversity that mitigates multiple-access interference effects and improves spectral efficiency. The information efficiency of the network is evaluated as the product of spectral efficiency and distance. In this paper, different path losses are considered to optimize the transmission range and enhance information efficiency. The performance of the framework is evaluated for a real-time channel model.

Swati Chowdhuri, Nilanjan Dey, Sayan Chakraborty, P. K. Baneerjee

Handwritten Kannada Numerals Recognition Using Histogram of Oriented Gradient Descriptors and Support Vector Machines

The role of a good feature extractor is to represent an object using numerical measurements. A good feature extractor should generate features that help the classifier place similar objects into one category and distinct objects into separate categories. In this paper, we present a method based on HOG (Histogram of Oriented Gradients) for the recognition of handwritten Kannada numerals. HOG descriptors have proven largely invariant to geometric transformations and are hence among the best descriptors for character recognition. We use multi-class Support Vector Machines (SVM) for classification. The proposed algorithm is evaluated on 4,000 images of isolated handwritten Kannada numerals and achieves an average accuracy of 95%.
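The core statistic behind HOG is a magnitude-weighted histogram of gradient orientations. A minimal single-cell sketch (full HOG adds cell grids and block normalisation, which are omitted here):

```python
import math

def orientation_histogram(img, bins=9):
    """Gradient-orientation histogram of a grayscale image given as a
    2-D list: central-difference gradients, unsigned orientation,
    magnitude-weighted votes, L1-normalised."""
    rows, cols = len(img), len(img[0])
    hist = [0.0] * bins
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = img[r][c + 1] - img[r][c - 1]       # central differences
            gy = img[r + 1][c] - img[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180   # unsigned angle
            b = min(int(ang / (180 / bins)), bins - 1)
            hist[b] += mag                            # magnitude-weighted vote
    total = sum(hist) or 1.0
    return [v / total for v in hist]
```

A vertical edge, for example, puts all its votes into the 0-degree bin; the resulting fixed-length vector is what feeds the SVM.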

S. Karthik, K. Srikanta Murthy

Design and Utilization of Bounding Box in Human Detection and Activity Identification

Every video surveillance system uses a few key steps to separate moving objects from the background and thus distinguish the objects in video image frames. The main endeavor of this paper is to use an object bounding box concept for human blob detection and human activity detection. The images are captured using a single static camera to keep the system cost-effective. The bounding box that bounds each object in the image frames can be utilized to track the moving objects and detect their activities in subsequent frames, including detecting humans and human activities such as crowds and crowd estimation. This paper gives the implementation results of the bounding box approach for activity detection.
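The bounding box construction itself is simple enough to state directly. A minimal sketch, assuming a blob is given as a list of (x, y) pixel coordinates:

```python
def bounding_box(pixels):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) of a
    detected blob's pixel coordinates."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return min(xs), min(ys), max(xs), max(ys)

def overlap(a, b):
    """True when two boxes intersect; a cheap cue that objects may be
    merging into a crowd in a later frame."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
```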

P. Deepak, Smitha Suresh

Performance Analysis of PBlock Algorithm Implemented Using SIMD Model to Attain Parallelism

Software applications are driving the growth of the cloud computing era. The security of data over networks is essential to enable cloud computing applications, and it is ensured through various types of encryption algorithms. In the current cloud computing era, multi-core processors have enabled us to run security applications simultaneously at both the client and the server end. The encryption and decryption processes of security algorithms are compute-intensive and can benefit significantly from parallel implementations that run on these multi-core processors. Moreover, these algorithms consume more energy on uniprocessor systems due to the massive calculations they perform, because of the non-linear relationship between core frequency and power consumption. This paper introduces a parallel version of the Blowfish algorithm, named PBlock, using the Single Instruction Multiple Data model, along with its implementation on a Symmetric Multi-Processor machine and the performance gains obtained on a number of benchmark examples.
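The data-parallel structure being exploited can be sketched independently of the cipher. Here a toy XOR transform stands in for Blowfish's block function (it is NOT secure, and this is not the paper's code); the point is that independent blocks can be processed concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK = 8  # bytes per cipher block

def toy_block_cipher(block, key=0x5A):
    """Stand-in for a real block cipher round: XOR with a key byte.
    Self-inverse, so applying it twice restores the input."""
    return bytes(b ^ key for b in block)

def parallel_encrypt(data, workers=4):
    """ECB-style data parallelism: pad, split into fixed-size blocks,
    and transform the independent blocks concurrently."""
    data += bytes((-len(data)) % BLOCK)            # zero-pad to a block multiple
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return b"".join(ex.map(toy_block_cipher, blocks))
```

In CPython the threads illustrate the structure rather than true SIMD speedup; the paper's gains come from running the real cipher on multiple cores.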

Disha Handa, Bhanu Kapoor

Blind Watermarking Technique for Grey Scale Image Using Block Level Discrete Cosine Transform (DCT)

This paper presents a robust watermarking technique to copyright an image. The proposed technique is entirely based on DCT and differs from most available techniques. We embed a blockwise watermark designed to withstand noise, filtering and cropping attacks. Before embedding the watermark in any host image, we must calculate the gain factor; in our approach, the gain factor varies between different host images. The experimental results show that, in addition to invisibility and security, the scheme is also robust against signal processing attacks.
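The blockwise DCT embedding idea can be illustrated on a single block. This sketch nudges one mid-frequency coefficient by the gain factor; the coefficient position and gain are illustrative, and the extraction shown compares against the original (the paper's scheme is blind, which this simplification is not):

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of an N x N block."""
    n = len(block)
    a = lambda k: math.sqrt((1.0 if k == 0 else 2.0) / n)
    return [[a(u) * a(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n) for y in range(n))
             for v in range(n)] for u in range(n)]

def idct2(coeffs):
    """Inverse of dct2 (2-D DCT-III)."""
    n = len(coeffs)
    a = lambda k: math.sqrt((1.0 if k == 0 else 2.0) / n)
    return [[sum(a(u) * a(v) * coeffs[u][v]
                 * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                 * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                 for u in range(n) for v in range(n))
             for y in range(n)] for x in range(n)]

def embed_bit(block, bit, gain=5.0, pos=(3, 2)):
    """Embed one watermark bit by shifting a mid-band DCT coefficient
    up or down by the gain factor."""
    c = dct2(block)
    u, v = pos
    c[u][v] += gain if bit else -gain
    return idct2(c)

def extract_bit(marked, original, pos=(3, 2)):
    u, v = pos
    return 1 if dct2(marked)[u][v] > dct2(original)[u][v] else 0
```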

Ravi Tomar, J. C. Patni, Ankur Dumka, Abhineet Anand

Recognition and Tracking of Occluded Objects in Dynamic Scenes

Object tracking in video sequences is required in high-level applications such as video communication and compression, object recognition and robotics. However, such systems are challenged by occlusions caused by interactions of multiple objects. A visual tracking system must track objects that are partially or even fully occluded. The proposed system uses a robust object tracking algorithm to handle partial and full occlusions. Occlusion handling is based on a patch-based framework that identifies the parts about to be occluded. An adaptive Gaussian Mixture Model is used to extract the moving objects, followed by noise removal steps. This paper describes a bounding box system for tracking the moving objects. The main objective of the proposed system is to track human objects and detect occlusion during tracking using the patch concept.

Suresh Smitha, K. Chitra, P. Deepak

Playlist Generation Using Reactive Tabu Search

This paper proposes a reactive tabu search algorithm to solve the automatic playlist generation problem effectively. The conventional tabu search, though it produces promising results, has the disadvantage of a fixed tabu size: the tabu size is set by the user and determines the capacity of the memory. A preset parameter does not yield appropriate results in a heuristic algorithm, so there must be mechanisms to make the tabu list size variable in order to obtain optimal results. The reactive tabu search has such mechanisms and thereby avoids frequently repeated solutions. The proposed method for the playlist generation problem yields better results than the conventional tabu search. The concrete implementation of the method is described and supported by experimental analysis.
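The reactive mechanism described above, growing the tabu list when previously visited solutions recur, can be sketched generically (the neighbourhood, limits and growth rule here are illustrative, not the paper's playlist encoding):

```python
def reactive_tabu_search(f, start, neighbors, iters=100):
    """Minimise f with tabu search whose tabu-list length adapts:
    it grows whenever an already-visited solution recurs."""
    current = best = start
    tabu_len, tabu, seen = 3, [], {start}
    for _ in range(iters):
        cand = [n for n in neighbors(current) if n not in tabu]
        if not cand:
            break
        current = min(cand, key=f)       # best admissible neighbour
        if current in seen:              # repetition -> enlarge memory
            tabu_len = min(tabu_len + 1, 20)
        seen.add(current)
        tabu.append(current)
        while len(tabu) > tabu_len:      # keep list at adaptive length
            tabu.pop(0)
        if f(current) < f(best):
            best = current
    return best
```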

Sneha Antony, J. N. Jayarajan

Design of Fuzzy Logic Controller to Drive Autopilot Altitude in Landing Phase

Nowadays automatic control science is an important field. Its importance has grown with equipment evolution and increased control requirements across many fields of science, such as medical and industrial applications. Hence, it was necessary to find ways to study and develop control systems. This paper studies an aircraft as a control object in the longitudinal channel during the landing phase, focusing on altitude due to its importance during this phase. The paper introduces the possibility of implementing the automated control process using fuzzy controllers. Finally, a comparison of the results of fuzzy and traditional controllers is discussed.

Adel Rawea, Shabana Urooj

Video Shot Boundary Detection: A Review

Video image processing is a technique for handling video data in an effective and efficient way, and is one of the most popular aspects of video and image based technologies such as surveillance. Shot change boundary detection is one of the major research areas in video signal processing, and previous works have developed various algorithms in this domain. In this paper, a brief literature survey is presented that establishes an overview of the work done previously. We discuss several previously proposed algorithms, including histogram based, DCT based and motion vector based approaches, along with their advantages and limitations.
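The histogram-based family the survey covers reduces to a short rule: flag a cut when consecutive frames' histograms differ too much. A minimal sketch, assuming frames are flat lists of 8-bit grey values and the bin count and threshold are illustrative:

```python
def shot_boundaries(frames, bins=8, threshold=0.5):
    """Flag a cut between consecutive frames whenever the L1 distance
    between their normalised grey-level histograms exceeds threshold."""
    def hist(frame):
        h = [0] * bins
        for v in frame:
            h[min(v * bins // 256, bins - 1)] += 1
        n = len(frame)
        return [c / n for c in h]

    cuts = []
    prev = hist(frames[0])
    for i in range(1, len(frames)):
        cur = hist(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    return cuts
```

Its known limitation, also noted in the surveyed literature, is that two different shots with similar colour distributions go undetected, which motivates the DCT and motion-vector variants.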

Gautam Pal, Dwijen Rudrapaul, Suvojit Acharjee, Ruben Ray, Sayan Chakraborty, Nilanjan Dey

User Aided Approach for Shadow and Ghost Removal in Robust Video Analytics

In almost all computer vision applications, moving object detection is the crucial step for information extraction. Shadows and ghosts often introduce errors that affect the performance of computer vision algorithms such as object detection, tracking and scene understanding. This paper studies various methods for shadow and ghost detection and proposes a novel user-aided approach for texture-preserving shadow and ghost removal from surveillance video. The proposed algorithm addresses limitations in uneven shadow and ghost boundary processing and umbra recovery. The approach first identifies an initial shadow/ghost boundary by growing a user-specified shadow outline on an illumination-sensitive image. Interval-variable pixel intensity sampling is introduced to eliminate anomalies arising from unequal boundaries. The approach extracts the initial scale field by applying local group intensity spline fittings around the shadow boundary area. Bad intensity samples are substituted by their nearest intensities based on a log-normal probability distribution of fitting errors. Finally, a gradual colour transfer corrects post-processing anomalies such as gamma correction and lossy compression.

I. Lakshmi Narayana, S. Vasavi, V. Srinivasa Rao

A Discrete Particle Swarm Optimization Based Clustering Algorithm for Wireless Sensor Networks

Clustering is a widely used mechanism in wireless sensor networks to reduce the energy consumed by sensor nodes in data transmission. Partitioning the network into an optimal number of clusters and selecting an optimal set of nodes as cluster heads is an NP-hard problem, which makes it a suitable candidate for evolutionary algorithms and particle swarm optimization (PSO). In this paper, we propose a PSO based solution to the optimal clustering problem using the residual energy and transmission distance of sensor nodes. Simulation results show a considerable improvement in network lifetime as compared to existing PSO based algorithms and other clustering protocols such as LEACH and SEP.
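The PSO formulation can be sketched with each particle encoding a set of candidate cluster-head positions and fitness measured by within-cluster squared distance. This is a generic illustration with textbook PSO coefficients, not the paper's energy-aware fitness function:

```python
import random

def pso_cluster(points, k=2, particles=10, iters=50, seed=1):
    """PSO where each particle holds k centroids; fitness is the
    within-cluster sum of squared distances (lower is better)."""
    rng = random.Random(seed)
    dim = len(points[0])

    def sse(cents):
        return sum(min(sum((p[d] - c[d]) ** 2 for d in range(dim))
                       for c in cents) for p in points)

    swarm = [[list(rng.choice(points)) for _ in range(k)]
             for _ in range(particles)]
    vel = [[[0.0] * dim for _ in range(k)] for _ in range(particles)]
    pbest = [[list(c) for c in s] for s in swarm]
    gbest = min(pbest, key=sse)
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i, s in enumerate(swarm):
            for j in range(k):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][j][d] = (w * vel[i][j][d]
                                    + c1 * r1 * (pbest[i][j][d] - s[j][d])
                                    + c2 * r2 * (gbest[j][d] - s[j][d]))
                    s[j][d] += vel[i][j][d]
            if sse(s) < sse(pbest[i]):
                pbest[i] = [list(c) for c in s]
        gbest = min(pbest + [gbest], key=sse)
    return gbest
```

The paper's variant would fold residual energy and transmission distance into the fitness function in place of plain squared error.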

R. K. Yadav, Varun Kumar, Rahul Kumar

Natural Language Query Refinement Scheme for Indic Literature Information System on Mobiles

The concept and realization of ‘Information Pulling’ on handheld mobile devices facilitates easy and effective information access. SMS-driven English language information systems are available for various domains such as agriculture, health, education, banking and government; however, information systems that support Indic languages have not been reported. We have developed an information system that retrieves appropriate transliterated literature documents in Marathi and Hindi in response to SMS queries. The proposed system uses the Vector Space Model (VSM) to rank the documents according to similarity score, and we propose and use appropriate data structures to enhance the basic VSM for the transliterated document domain. Furthermore, we customize the ranking of results returned for user queries, with rank management based on relevance feedback given by users.

In this paper we describe the development of the information system for transliterated documents and share our experimental results on the relevance feedback mechanism. We have designed a probabilistic relevance model for rank management customization. The performance improvement of the system is demonstrated using the standard measures Discounted Cumulative Gain (DCG) and Normalized Discounted Cumulative Gain (NDCG).
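The baseline VSM ranking the system builds on can be sketched as tf-idf weighting plus cosine similarity. A minimal illustration with whitespace tokenisation (the paper's transliteration-aware data structures are not shown):

```python
import math
from collections import Counter

def rank_documents(docs, query):
    """Vector Space Model: tf-idf weights, cosine similarity,
    document indices returned best-first."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))                      # document frequency
    idf = {t: math.log(n / df[t]) for t in df}

    def vec(toks):
        tf = Counter(toks)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(query.lower().split())
    scored = [(cosine(q, vec(toks)), i) for i, toks in enumerate(tokenized)]
    return [i for _, i in sorted(scored, reverse=True)]
```

Relevance feedback then re-weights this base ranking, which is what the paper's probabilistic relevance model formalises.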

Varsha M. Pathak, Manish R. Joshi

Quality of Service Analysis of Fuzzy Based Resource Allocator for Wimax Networks

WiMAX is an upcoming technology gaining ground in recent times, with inherent support for real-time and non-real-time applications. The rise in the number of real-time applications, together with the popularity of mobile phones, constantly tests the scheduler performance of broadband wireless systems like WiMAX. Distribution of resources in such networks has always been challenging. This problem can be addressed if the scheduling decision is based on the conditions of incoming traffic. This paper proposes an application of fuzzy logic by which an intelligent system for the distribution of resources is defined. The system works as an adaptive approach, granting bandwidth to those traffic classes that have a relatively higher share of incoming traffic in their queues. The results demonstrate the significance of the proposed method.


A Hybrid Approach for Data Clustering Using K-Means and Teaching Learning Based Optimization

‘Teaching-Learning Based Optimization (TLBO)’, a new and efficient optimization method, has recently been proposed for addressing mechanical design problems, and it can also be used for clustering numerical data. In this paper, TLBO is used along with the k-means algorithm to cluster data into a user-specified number of clusters, showing how TLBO can find the centroids of the specified number of clusters. The hybrid algorithm has been implemented to attain better clustering results.
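A sketch of the hybrid idea: TLBO's teacher and learner phases refine a population of candidate centroid sets under the k-means objective. Population size, iteration count and encoding are illustrative, not the paper's:

```python
import random

def tlbo_kmeans(points, k=2, pop=8, iters=40, seed=3):
    """TLBO over flattened centroid vectors; fitness is the k-means
    within-cluster squared error. Returns the best k centroids."""
    rng = random.Random(seed)
    dim = len(points[0])

    def sse(flat):
        cents = [flat[i * dim:(i + 1) * dim] for i in range(k)]
        return sum(min(sum((p[d] - c[d]) ** 2 for d in range(dim))
                       for c in cents) for p in points)

    def rand_sol():
        out = []
        for _ in range(k):
            out.extend(rng.choice(points))        # seed from data points
        return out

    learners = [rand_sol() for _ in range(pop)]
    for _ in range(iters):
        teacher = min(learners, key=sse)
        mean = [sum(l[d] for l in learners) / pop for d in range(k * dim)]
        for i in range(pop):
            l = learners[i]
            tf = rng.choice((1, 2))               # teaching factor
            # teacher phase: move toward the teacher, away from the mean
            cand = [l[d] + rng.random() * (teacher[d] - tf * mean[d])
                    for d in range(k * dim)]
            if sse(cand) < sse(l):
                learners[i] = cand
            # learner phase: learn from (or move away from) a random peer
            peer = learners[rng.randrange(pop)]
            sign = 1 if sse(peer) < sse(learners[i]) else -1
            cand = [learners[i][d] + sign * rng.random() * (peer[d] - learners[i][d])
                    for d in range(k * dim)]
            if sse(cand) < sse(learners[i]):
                learners[i] = cand
    best = min(learners, key=sse)
    return [best[i * dim:(i + 1) * dim] for i in range(k)]
```

Greedy acceptance (keep a candidate only if it improves) ensures each learner's objective is non-increasing, which is what lets the hybrid escape poor k-means initialisations.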

Pavan Kumar Mummareddy, Suresh Chandra Satapaty

A Short Survey on Teaching Learning Based Optimization

Optimization is the process of finding the maximum or minimum of a function subject to a set of constraints; in other words, it finds the most suitable value of a function within a given domain. Global optimization means finding the minimum among all local minima, or the maximum among all local maxima; the procedure for finding such a global point is called global optimization. Teaching-Learning-Based Optimization (TLBO) has recently been used as a new, reliable, accurate and robust optimization scheme for global optimization over continuous spaces. In this paper we compare the performance of various versions of TLBO.

M. Sampath Kumar, G. V. Gayathri

Linear Array Optimization Using Teaching Learning Based Optimization

Optimization is a serious problem in antenna arrays, and many evolutionary and meta-heuristic algorithms are used to solve such array optimization problems. In this paper, the Teaching Learning Based Optimization (TLBO) technique is applied to side lobe level (SLL) reduction. The Non-Uniform Linear Array (NULA) considered has uniform spacing and a non-uniform amplitude distribution that is symmetric about the axis. The desired SLL is achieved by properly defining the amplitude distribution of the NULA.

V. V. S. S. S. Chakravarthy, K. Naveen Babu, S. Suresh, P. Chaya Devi, P. Mallikarjuna Rao

Partition Based Clustering Using Genetic Algorithm and Teaching Learning Based Optimization: Performance Analysis

Clustering is useful in several machine learning and data mining applications such as information retrieval, market analysis and web analysis. The most popular partitioning clustering algorithm is K-means, whose tendency to converge to local minima depends highly on the initial cluster centroids. To overcome local minima, evolutionary and population based methods such as Genetic Algorithms (GA) and Teaching Learning Based Optimization (TLBO) can be applied. Here, GA and TLBO are applied to address the initial centroid selection problem of K-means, and their results are compared. TLBO is the latest population based method; it uses a population of solutions to proceed towards the global solution and overcome the local minima problem, and is based on the influence of a teacher on the output of learners in a class.

Kannuri Lahari, M. Ramakrishna Murty, Suresh C. Satapathy

Multilevel Thresholding in Image Segmentation Using Swarm Algorithms

Image segmentation is an essential part of image processing and has a large impact on quantitative image analysis. Among the many segmentation methods, thresholding based segmentation is widely used, and the selection of optimum thresholds has remained a challenge for decades. To determine thresholds, most methods analyze the histogram of the image; the optimal thresholds are found by optimizing an objective function built on the histogram. Classical segmentation methods often fail to give good results for images whose histograms have multiple peaks. Since swarm algorithms have shown promising results on multimodal problems, they offer alternative methods for optimal image segmentation. This paper presents a comprehensive analysis of swarm algorithms for determining optimal thresholds on standard benchmark images. An exhaustive survey of various swarm algorithms for multilevel image thresholding was carried out, and a comprehensive performance comparison is presented in both numerical and pictorial form.
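The objective function being optimised is typically Otsu's between-class variance. A sketch using exhaustive search over threshold pairs, which is exactly the brute force that swarm algorithms replace when the number of thresholds and grey levels grows:

```python
from itertools import combinations

def otsu_multilevel(hist, nthresh=2):
    """Maximise Otsu's between-class variance for `nthresh` thresholds
    by exhaustive search over a grey-level histogram."""
    total = sum(hist)
    probs = [h / total for h in hist]
    mu_total = sum(i * p for i, p in enumerate(probs))

    def between_class_variance(thresholds):
        bounds = [0] + [t + 1 for t in thresholds] + [len(hist)]
        var = 0.0
        for lo, hi in zip(bounds, bounds[1:]):
            w = sum(probs[lo:hi])                 # class weight
            if w == 0:
                continue
            mu = sum(i * probs[i] for i in range(lo, hi)) / w
            var += w * (mu - mu_total) ** 2
        return var

    best = max(combinations(range(len(hist) - 1), nthresh),
               key=between_class_variance)
    return list(best)
```

For a 256-level histogram with several thresholds the candidate count explodes combinatorially, which is why the surveyed swarm methods search this space stochastically instead.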

Layak Ali

A Honey Bee Mating Optimization Based Gradient Descent Learning – FLANN (HBMO-GDL-FLANN) for Classification

Motivated by the successful use of Honey Bee Mating Optimization (HBMO) in many applications, this paper proposes an HBMO based Gradient Descent Learning for the FLANN classifier and compares it with FLANN, GA based FLANN and PSO based FLANN classifiers. The proposed method mimics the iterative mating process of honey bees, and the strategies for selecting eligible drones for mating, to select the best weights for the FLANN classifier. The classification accuracy of HBMO-GDL-FLANN is investigated and compared with FLANN, GA-based FLANN and PSO-based FLANN. These models have been implemented in MATLAB and the results are statistically analyzed with a one-way ANOVA test. To obtain generalized performance, the proposed method has been tested under 5-fold cross-validation.

Bighnaraj Naik, Janmenjoy Nayak, H. S. Behera

Optimal Sensor Nodes Deployment Method Using Bacteria Foraging Algorithm in Wireless Sensor Networks

The nodes in wireless sensor networks (WSN) need to be deployed optimally to cover the whole geographic area, with optimal communication links between them. For deployment in remote locations, simulation of the proposed algorithm can be used to place the sensor nodes optimally in the area. Bacteria foraging is a nature-inspired algorithm used to form clusters among similar entities. Here the nodes to be deployed are taken as bacteria in search of food, where the food is the best possible communication link. This paper presents a novel algorithm for optimal sensor node deployment leading to optimal clustering of WSN nodes. It exploits the fact that an area can be covered fully using regular hexagons: using the bacteria foraging algorithm, node locations are adjusted so that all nodes move to the vertices of regular hexagons connected to each other, leading to complete coverage of the area with all nodes equidistant.
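The target geometry, the vertices of a regular hexagonal tiling, can be generated directly; the optimisation then only has to move nodes toward these positions. A sketch of the vertex generator (grid dimensions and ordering are illustrative):

```python
import math

def honeycomb_targets(rows, cols, side=1.0):
    """Vertices of a honeycomb (regular-hexagon tiling), built from two
    interleaved triangular sublattices. Every vertex ends up exactly
    `side` away from each of its nearest neighbours."""
    pts = []
    for j in range(rows):
        for i in range(cols):
            ax = math.sqrt(3) * side * (i + j / 2.0)
            ay = 1.5 * side * j
            pts.append((ax, ay))          # sublattice A
            pts.append((ax, ay + side))   # sublattice B
    return pts
```

These coordinates would serve as the attractor positions toward which the bacteria-foraging step drives the deployed nodes.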

Pooja Nagchoudhury, Saurabh Maheshwari, Kavita Choudhary

Efficient Data Aggregation Approaches over Cloud in Wireless Sensor Networks

In future wireless sensor network (WSN) scenarios, mobility is emerging as an important feature as the number of sensors increases. Multifarious obstacles are being encountered in this research as sensor network deployments grow; however, these issues can be shielded from the software developer by integrating the solutions into a layer of software services. Data volumes are ever growing, demanding efficient data handling algorithms. In this paper, we propose a technique in which sensed data is stored on the cloud, and data aggregation techniques such as clustering and classification are used to process this big data on the cloud. This reduces the computational load on the base station, as the data is stored and processed on the cloud itself. Clustering is used to omit abrupt values and group similar data together, while classification algorithms are used to reach a final conclusion. A predictive Markov chain model was also developed for predicting the overall weather outlook. A weather forecasting concept called Long Range Forecasting was then used to predict exact numeric values of future weather parameters.
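The Markov-chain outlook prediction reduces to one matrix-vector step. A generic sketch (the states and transition probabilities below are invented for illustration, not the paper's trained model):

```python
def markov_next(dist, transition):
    """One step of a discrete Markov chain: propagate the current
    state distribution through the transition matrix."""
    states = list(dist)
    return {s: sum(dist[t] * transition[t][s] for t in states)
            for s in states}
```

Iterating this step gives the multi-day outlook; the exact numeric parameter values are then handled by the separate Long Range Forecasting stage.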

Pritee Parwekar, Veni Goel, Aditi Gupta, Rajul Kukreja

Performance Analysis of Modified Bellman Ford Algorithm Using Cooperative Relays

Cooperative communication plays a vital role in modern wireless communication: it improves system performance and supports higher data rates. In this paper, the nodes in the network are classified into amplify-and-forward (AF) and decode-and-forward (DF) relays based on the Signal-to-Noise Ratio (SNR). A node whose received SNR is greater than the threshold SNR is considered a DF relay; otherwise it is considered an AF relay. Network lifetime can be increased by operating nodes in power save mode (PSM). The energy-efficient Modified Bellman Ford Algorithm (MBFA), using the power saving mechanism of 802.11g, is employed for routing, and all nodes acting as DF relays are considered for routing. The performance of MBFA can be improved by cooperative relays. Residual energy is used as the routing metric, and the optimal path is selected using MBFA. In a grid network, nodes are placed at uniform distances, and grid topology performs better than random topology. Mobility of nodes causes packet loss, and the performance of a mobile network degrades further with PSM; MBFA with PSM performs better for a static network using grid topology than for mobile networks and random topology. For various node densities, the residual battery capacity of static and mobile networks with grid and random topologies is investigated using MBFA. The bit-error-rate performance of MBFA using cooperative relays is analyzed for the BPSK, QPSK and 64-QAM modulation schemes.
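The two decision rules in the abstract, SNR-based relay classification and residual-energy-based next-hop selection, can be sketched directly (threshold and energy values are illustrative):

```python
def classify_relays(nodes, snr_threshold_db=10.0):
    """Split (name, received-SNR-dB) pairs into decode-and-forward
    relays, which are eligible for routing, and amplify-and-forward
    relays, by comparison with the threshold SNR."""
    df = [n for n, snr in nodes if snr >= snr_threshold_db]
    af = [n for n, snr in nodes if snr < snr_threshold_db]
    return df, af

def next_hop(candidates, residual_energy):
    """Among DF candidates, pick the node with the most residual
    energy, the routing metric used by the modified Bellman-Ford
    scheme."""
    return max(candidates, key=residual_energy.get)
```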

Rama Devi Boddu, K. Kishan Rao, M. Asha Rani

Selection of Reliable Channel by CRAODV-RC Routing Protocol in Cognitive Radio Ad Hoc Network

The Cognitive Radio Ad Hoc NETwork (CRAHN) is an emerging technology in the wireless communication era, meeting requirements such as self-configuration, self-healing and robustness with low deployment cost. In this communication environment, routing plays a vital role in establishing the path from source to destination and transmitting data between them. An efficient and effective route can be formed by selecting a reliable channel for transmitting the data. In this paper, such a route is established using threshold values of the TIME-ON of primary users and the channel capacity, which are important constraints for routing packets.

Sangita Pal, Srinivas Sethi

Sensor Controlled Sanitizer Door Knob with Scan Technique

Hand sanitizers were developed for cleansing hands without soap and water. They are made of gels that contain alcohol to kill germs present on the skin; the alcohol works immediately and effectively against bacteria and many viruses. Sanitizer is low-cost and cheaply available for hand cleansing, as well as convenient, portable, easy to use and less time-consuming. In this project a hand sanitizer is attached to the door handle and is released automatically when a person touches the handle, saving time in our fast-paced lives by removing the need for washbasins. Other options already exist in which a wet tissue is attached to the door handle, but their drawback is that the tissues dry up, become useless and cause wastage; sanitizers, on the other hand, are liquid-based. The project has been implemented on an FPGA, which offers advantages in speed, number of input/output ports and performance. The system has been successfully implemented and tested in hardware using the Xilinx 14.5 software package and the Very High Speed Integrated Circuit Hardware Description Language (VHDL). RTL and technology schematics are included to validate the simulation results.

Tamish Verma, Palak Gupta, Pawan Whig

Impact of Propagation Models on QoS Issues of Routing Protocols in Mobile Ad Hoc Networks

Wireless and mobile communication networks are becoming more and more popular in today's mobile world, and Mobile Ad hoc Networks (MANETs) play a crucial role in wireless communications. An ad hoc network is an infrastructure-less, self-organized collection of autonomous nodes. In mobile ad hoc networks, routing is one of the thrust areas: there are no dedicated routers in MANETs, so every network node must contribute to routing. Simulation tools are used to analyze the performance of routing protocols in MANETs, and propagation models have a strong impact on that performance during simulation. This paper investigates the impact of the familiar Two-ray and Shadowing propagation models on the performance of the AODV and DSR routing protocols using NS-2. Experimental results show that both AODV and DSR achieve better throughput under the two-ray ground model.
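The two-ray ground model examined here has a simple closed form: beyond the crossover distance, received power falls off with the fourth power of distance. A sketch of that formula (linear units; antenna gains and heights are illustrative inputs):

```python
def two_ray_rx_power(pt, gt, gr, ht, hr, d):
    """Two-ray ground reflection model, valid beyond the crossover
    distance: Pr = Pt * Gt * Gr * ht^2 * hr^2 / d^4."""
    return pt * gt * gr * (ht ** 2) * (hr ** 2) / d ** 4
```

The d^4 falloff (versus d^2 for free space) is what changes effective transmission range in the NS-2 simulations and hence the protocols' measured throughput.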

Attada Venkataramana, J. Vasudeva Rao, S. Pallam Setty

Implementation of Scatternet in an Intelligent IoT Gateway

Anything and everything will be connected in the world of IoT, allowing ubiquitous communication around the world. The communication can be sensed data from the physical world, a control signal for a device, or usual internet data traffic. There are several ways a real-world device can communicate with the IoT platform; Bluetooth is one such technology, allowing communication between a real-world device and the IoT network. The Bluetooth scatternet concept gives a Bluetooth network the capability to support multiple concurrent Bluetooth device communications over a wide area. This paper discusses a custom-built intra- and inter-piconet scheduling model and its real-time validation on our own IoT platform, which includes the gateway AGway (Amrita Gateway) and the middleware AIoTm (Amrita Internet of Things Middleware). The paper emphasizes scatternet building and scatternet maintenance. The entire implementation is based on the Linux Bluetooth stack BlueZ.

Lakshmi Mohan, M. K. Jinesh, K. Bipin, P. Harikrishnan, Shiju Sathyadevan

Reactive Energy Aware Routing Selection Based on Knapsack Algorithm (RER-SK)

Wireless mobile ad hoc network technology is designed to establish a network anywhere and anytime. Such networks are characterized by a lack of infrastructure; clients are free to move and organize themselves in an arbitrary fashion; communication may traverse multiple links and heterogeneous radios; and the network can operate in a stand-alone manner. These characteristics, together with the energy constraints of mobile nodes, demand a dynamic routing protocol that is efficient in terms of energy. We propose a novel energy-aware routing protocol for mobile ad hoc networks, called Reactive Energy-aware Routing Selection based on Knapsack (RER-SK). It addresses two important characteristics of MANETs: energy efficiency and improving network lifetime. It considers each node’s optimistic information-processing capacity with respect to current energy and traffic conditions with the help of the knapsack algorithm. Simulation results using NS2 show that this work outperforms the existing MBCR and MRPC protocols in terms of network lifetime.
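
The abstract names the knapsack algorithm as the selection primitive but does not give the paper's exact formulation. As an illustration only, the usual building block for such a selection is a standard 0/1 knapsack dynamic program; the `values`/`costs`/`budget` framing below (capacity scores vs. energy drain) is a hypothetical reading, not taken from the paper.

```python
def knapsack(values, costs, budget):
    """Best total value from items whose total cost fits within budget."""
    # dp[c] = best value achievable with total cost at most c
    dp = [0] * (budget + 1)
    for v, c in zip(values, costs):
        # iterate capacities backwards so each item is used at most once (0/1)
        for cap in range(budget, c - 1, -1):
            dp[cap] = max(dp[cap], dp[cap - c] + v)
    return dp[budget]

# e.g. candidate next-hop nodes scored by capacity, costed by energy drain
print(knapsack(values=[6, 10, 12], costs=[1, 2, 3], budget=5))  # -> 22
```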

Arshad Ahmad Khan Mohammad, Ali Mirza, Mohammed Abdul Razzak

A Novel Approach for Enhancing the Security of User Authentication in VANET Using Biometrics

Vehicular Ad Hoc Networks (VANETs) offer various services to users, and misuse of such a network could cause destructive consequences. A robust user authentication scheme is necessary to secure a VANET system. The use of biometrics in authentication can overcome the limitations of existing random-key-based authentication techniques, and a combination of face and fingerprint biometrics provides more accurate recognition of users. Here we propose a novel approach for enhancing the security of user authentication in VANETs based on biometrics, concentrating on the security of Vehicle-to-Infrastructure (V2I) communication.

P. Remyakrishnan, C. Tripti

Detection of Black Hole Attack Using Code Division Security Method

A mobile ad hoc network (MANET) is a collection of mobile nodes that form a temporary network without any predefined infrastructure. Direct communication is possible only when two nodes lie within each other’s sensing range; otherwise communication passes through intermediate nodes until the destination is reached. Such networks allow any node to join or leave the network at any instant, so any node can act as a host or a router, which leads to security issues in MANETs. A well-known attack in MANETs is the black hole attack, in which a malicious node misleads a normal node into forwarding data through it and then corrupts the data, degrading the performance of the network. In this paper, we present a simple but effective method called Code Division Security Method (CDSM) to prevent black hole attacks in MANETs. We validate our approach using a network simulator with an example.

Syed Jalal Ahmad, V. S. K. Reddy, A. Damodaram, P. Radha Krishna

Cluster Based Data Replication Technique Based on Mobility Prediction in Mobile Ad Hoc Networks

The mobile database system in a MANET is a dynamic distributed database system composed of mobile hosts (MHs). The key mobile database issues in MANETs are how to optimize mobile queries, cache and replicate data, manage transactions, and route. In this proposal, we address data replication as a means of solving these issues. Replication of data in a MANET environment aims to improve the reliability and availability of data to mobile clients (nodes). Many issues surround replication in such a scenario, such as power, server and node mobility, network partitioning, and frequent disconnection. We propose a cluster-based data replication technique to replicate data and to overcome the issues related to node mobility and disconnection in the MANET environment. Our approach has two phases: the initial phase forms clusters and elects cluster heads, and the second phase distributes the replicated data to the respective cluster heads. Through NS2 simulation, we show that our proposed technique attains better data consistency and accuracy with reduced delay and overhead.

Mohammed Qayyum, Khaleel Ur Rahman Khan, Mohammed Nazeer

Mitigating FRI-Attack Using Virtual Co-ordinate System in Multi-hop Wireless Mesh Networks

Routing in Wireless Mesh Networks (WMNs) has become an active field of research in recent days, and most existing protocols are vulnerable to many attacks. One approach that causes serious impact on a WMN is the Fraudulent Routing Information (FRI) attack, with which an external attacker can drop all data packets using a single compromised mesh node. In this paper, we present a virtual-coordinates-based solution to mitigate the FRI attack in WMNs. A Virtual Coordinate System uses the topological structure of a network to derive virtual coordinates rather than obtaining real coordinates via GPS. The proposed mechanism is designed for on-demand routing protocols such as HWMP and relies on digital signatures to mitigate the FRI attack during route discovery. It is a software-based solution and does not require each node to be equipped with specialized hardware such as GPS.

P. Subhash, S. Ramachandram

Cluster Oriented ID Based Multi-signature Scheme for Traffic Congestion Warning in Vehicular Ad Hoc Networks

To report safety messages such as traffic congestion warnings in a Vehicular Ad hoc Network (VANET), vehicles communicate with each other and with fixed Road Side Units (RSUs). Message authentication plays a vital role in ensuring the communicated messages are true. Since the number of incoming messages received by an RSU grows rapidly with time, authentication delay increases. Existing batch and priority-based verification schemes suffer from issues such as re-verification when there is a single false message, which adds further delay. In this paper, a cluster-oriented ID-based multi-signature scheme is proposed to overcome this delay: the network is clustered, and each cluster head is responsible for generating a multi-signature. Experimental analysis shows that this scheme incurs less authentication delay, communication overhead, and loss ratio than existing approaches.

Bevish Jinila, Komathy

Source Node Position Confidentiality (SNPC) Conserving Position Monitoring System for Wireless Networks

A Position Monitoring System (PMS) is a wireless networking application that monitors and tracks objects’ current positions in a wireless area. For example, a position monitoring system can be set up to keep an eye on the movement of an enemy in military applications; battlefield surveillance, in which soldiers need protection from attackers, is the classic example of this category. The issue, consequently, is how to conceal the position of a wireless node from an attacker. In this paper, we present a novel scheme for preserving the confidentiality of the source node position in wireless networks. The main objectives are to maximize the accuracy of aggregate position information, to minimize communication and computational cost, and to offer source node position confidentiality by achieving numerous aspects of security: anonymity, traceability, revocation, and data unlinkability, with the help of fake data sources. Finally, the experimental work and simulation results depict the effectiveness of the proposed scheme.

Darshan V. Medhane, Arun Kumar Sangaiah

Encryption/Decryption of X-Ray Images Using Elliptical Curve Cryptography with Issues and Applications

Elliptic Curve Cryptography (ECC) is a public-key cryptography scheme that is prominent nowadays because of its great advantages for handheld, low-memory, portable, and small devices: small key size, fast encryption, no efficient solution to the discrete logarithm problem, and an infeasible time requirement for a brute-force attack. Applying ECC to text and to images gives almost equal performance. In this paper we summarize how ECC encrypts and decrypts text data and images. The operations responsible for encryption, namely point multiplication, point addition, point doubling, and point subtraction, are explained in detail. The Elliptic Curve Discrete Logarithm Problem, which underlies the security of ECC decryption, is also explained briefly. We further present a comparative study of ECC’s advantages, disadvantages, attacks, and applications against other encryption algorithms.
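
The point operations the abstract lists can be sketched on a toy curve. The curve below (y² = x³ + 2x + 2 over F₁₇, a common textbook example) is illustrative only, not a curve from the paper.

```python
P_MOD, A = 17, 2  # toy prime field and curve coefficient a (curve: y^2 = x^3 + 2x + 2)

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)  # modular inverse via Fermat's little theorem

def point_add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:  # point doubling: slope = (3x^2 + a) / 2y
        s = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:       # point addition: slope = (y2 - y1) / (x2 - x1)
        s = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Point multiplication k*P by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

G = (5, 1)                      # a generator point on the toy curve
print(scalar_mult(2, G))        # doubling G
```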

Vikas Pardesi, Aditya Khamparia

Network Quality Estimation - Error Protection and Fault Localization in Router Based Network

Network devices serve many purposes; huge data transmission, ease of access, and time saving are applications of a digitized communication system. Wireless communication systems consist of a number of routers and links. Processing speed, link failure, the control of one router over another, and extended delay cause serious problems in transmission. The proposed method is based on a three-phase technique. The network parameter detection phase includes a protocol-oriented technique for network parameter consideration and mutual node communication. To secure communication, the second phase adds security to correlated network nodes; here this paper proposes principal and credential sharing among neighboring nodes in the network. The last phase covers detection of errors present in the network.


For an organized presentation, this paper first gives an introduction to security in UMA networks in Section 1. Section 2 is a literature study of related previous work. A well-structured, modularized proposal is given in Section 3, followed by simulation results in Section 4 and an overall conclusion at the end.

M. HemaLatha, P. Padmanabham, A. Govardhan

Cryptanalysis of Image Encryption Algorithm Based on Fractional-Order Lorenz-Like Chaotic System

This paper presents a break of an image encryption algorithm recently suggested by Xu et al. [Commun Nonlinear Sci Numer Simulat 19 (10) 3735–3744, 2014]. The authors realized a Laplace-transform-based synchronization between two fractional-order chaotic systems to perform error-free encryption and decryption of digital images, and their statistical analyses show consistent encryption strength. However, a careful probe of the algorithm uncovers underlying security shortcomings which make it vulnerable to cryptanalysis. In this paper, we analyze its security and propose a chosen-plaintext/known-plaintext attack to break the algorithm completely: the plain-image can be successfully recovered without knowing the secret key. Simulation of the proposed cryptanalysis evidences that the Xu et al. algorithm is not secure enough for practical utilization.

Musheer Ahmad, Imran Raza Khan, Shahzad Alam

The Probabilistic Encryption Algorithm Using Linear Transformation

Probabilistic encryption produces more than one ciphertext for the same plaintext. In this paper an attempt is made to propose a probabilistic encryption algorithm based on a simple linear transformation. Variable-length sub-key groups are generated using a random sequence, and each element of the plaintext is replaced by a randomly selected element from the correspondingly indexed sub-key group. As a result, a cryptanalyst cannot encrypt a chosen plaintext and search for the matching ciphertext. The security analysis and performance of the method are studied and presented.
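
The core idea (each plaintext symbol owns a sub-key group, and encryption substitutes a randomly chosen member of that group) can be sketched as below. The fixed groups here are illustrative; the paper derives variable-length groups from a random sequence.

```python
import random

# Each plaintext symbol maps to a disjoint group of ciphertext symbols
# (hypothetical groups for a 4-symbol alphabet; the paper's are key-derived).
GROUPS = {0: [10, 11], 1: [12], 2: [13, 14, 15], 3: [16]}
REVERSE = {c: p for p, grp in GROUPS.items() for c in grp}

def encrypt(plain):
    # probabilistic: the same plaintext can yield many different ciphertexts
    return [random.choice(GROUPS[p]) for p in plain]

def decrypt(cipher):
    # deterministic: every group member maps back to one plaintext symbol
    return [REVERSE[c] for c in cipher]

msg = [0, 2, 2, 3, 1]
assert decrypt(encrypt(msg)) == msg
```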

K. Adi Narayana Reddy, B. Vishnuvardhan

Network Management Framework for Network Forensic Analysis

Tracing malicious packets back to their respective sources is important to defend the internet against attacks. Content-based trace-back techniques have been proposed to solve the source identification problem, but due to resource limitations in network devices it is not feasible to store and query all of their data for extended periods of time.

In this paper, we propose a management framework for network packet trace-back with optimal utilization of device storage capacity. We aim to manage the devices remotely and to store large amounts of forensic data, so that the source of even older attacks can be identified.

Ankita Bhondele, Shatrunjay Rawat, Shesha Shila Bharadwaj Renukuntla

Fluctuating Pattern Size Handling for the Extracted Base Verb Forms from Participles and Tenses for Intelligent NLP

Natural Language Processing (NLP) is a very important part of a conversation, including conversation with a chatterbot. Building a complete vocabulary for a chatterbot is a cumbersome and time-intensive process, so a self-learning chatterbot is a much more efficient alternative. The learning process can be initiated by a multidimensional approach spanning individual words, phrases, and whole concepts. Verbs are a constant feature, since they occur in different forms, namely participles and tenses. We discuss an algorithm to derive the base verb from any participle or tense; however, storing such data items requires a special kind of data handling.

In this regard we discuss and propose a fluctuating data handling strategy, which helps us understand not just fluctuating data handling itself but also its correlation with data handling for smaller databases.

P. S. Banerjee, G. Sahoo, Baisakhi Chakraborty, Jaya Banerjee

Enhanced Encryption and Decryption Gateway Model for Cloud Data Security in Cloud Storage

Cloud computing provides multiple online resources in a scalable and reliable manner. Users access cloud services according to their required level of computing, with no capital expenditure on computing resources: they pay according to their storage, network, and service usage. Cloud computing has some prominent issues, such as compliance, cross-border data storage, multi-tenancy, and downtime, but the most important issues relate to the storage of data in the cloud. Data confidentiality, data integrity, data authentication, and regulations on data protection are major problems that affect users’ business. This paper discusses encryption and decryption security issues for data in the cloud and their safety measures, and presents a novel cloud data security model.

D. Boopathy, M. Sundaresan

A New Approach for Data Hiding with LSB Steganography

Steganography is the art of hiding data in a medium in such a way that the presence of the data cannot be detected while communication is taking place. This paper reviews recent achievements in LSB-based spatial-domain steganography that improve on steganography’s ultimate objectives: undetectability, robustness, and capacity of hidden data. These techniques can help researchers understand image steganography and the various techniques for hiding data in an image. In addition, two new methods are proposed: one for hiding a secret message in a cover image, and a second for hiding a grey-scale secret image in another grey-scale image. These methods use a 4-state #table that generates pseudo-random numbers used for hiding the secret information. They provide higher security because the secret information is hidden at different LSB positions of the image, depending on the pseudo-random numbers generated by the #table.
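
To make the LSB idea concrete, here is a minimal sequential embed/extract sketch. It omits the paper's #table-driven pseudo-random positioning, which would scatter the bits instead of writing them in order.

```python
def embed(cover, message):
    """Hide message bytes in the least significant bits of cover pixels."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(cover), "cover image too small"
    stego = list(cover)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b  # overwrite only the LSB of each pixel
    return stego

def extract(stego, n_bytes):
    """Read n_bytes back out of the LSBs."""
    out = bytearray()
    for k in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (stego[8 * k + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = list(range(100, 180))   # stand-in for grey-scale pixel values
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
```

Because only the LSB of each pixel changes, the stego image differs from the cover by at most 1 per pixel value, which is what makes the embedding hard to notice visually.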

G. Prashanti, K. Sandhyarani

Generation of 128-Bit Blended Key for AES Algorithm

The AES algorithm is the most widely used algorithm for various security-based applications, and its security can be increased by using biometrics to generate the key. To increase security further, in this paper a 128-bit blended key is generated from an IRIS image and an arbitrary key: a 128-bit key is generated from IRIS features and then concealed with the arbitrary key to form a blended key using the fuzzy commitment scheme. A brute-force attack, which searches the entire key space, is widely used against encrypted data; the more random the key, the lower the chances of a successful attack. In this paper the randomness of the generated key is verified and compared with that of the biometric key. The blended key is 10% more random than the IRIS-based biometric key.
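
A common way to conceal a biometric key with an arbitrary key in fuzzy-commitment-style schemes is a bitwise XOR; whether the paper uses a plain XOR or adds an error-correcting code is not stated in the abstract, so the sketch below is an assumption.

```python
import secrets

def blend(key_a: bytes, key_b: bytes) -> bytes:
    """XOR two 128-bit keys; XOR is its own inverse, so blending is reversible."""
    assert len(key_a) == len(key_b) == 16  # 128 bits each
    return bytes(a ^ b for a, b in zip(key_a, key_b))

iris_key = secrets.token_bytes(16)   # stand-in for the IRIS-derived key
arbitrary = secrets.token_bytes(16)  # the arbitrary (random) key
blended = blend(iris_key, arbitrary)

# holding the arbitrary key recovers the IRIS key from the blended key
assert blend(blended, arbitrary) == iris_key
```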

S. Sridevi Sathya Priya, P. Karthigaikumar, N. M. SivaMangai

Designing Energy-Aware Adaptive Routing for Wireless Sensor Networks

Many energy-aware routing protocols take into account the residual battery power of sensor nodes and/or the energy required for transmission along the path. In deploying an environmental sensor network, we observed that applications may also impose requirements on routing, placing higher demands on protocol design. We demonstrate our approach to this issue through FloodNet, a flood warning system, which uses a predictor model developed by environmental experts to make flood predictions based on water-level readings collected by a set of sensor nodes. Because the model influences the node reporting frequency, we propose FloodNet Adaptive Routing (FAR), which transmits data across nodes with ample energy and light reporting tasks while conserving energy for nodes low on battery power and heavily required by the monitoring task. As a reactive protocol, FAR is robust to topology changes due to moving obstacles and transient node failure. We evaluate FAR’s performance through simulation, and the results correspond with its anticipated behavior and improvements.

K. Seena Naik, G. A. Ramachandra, M. V. Brahmananda Reddy

A Dynamic and Optimal Approach of Load Balancing in Heterogeneous Grid Computing Environment

Grid computing is a form of distributed computing that shares resources, processors, and networks within or between organizations to accomplish tasks. It involves huge computational tasks that require resource sharing across multiple computing domains. Resource sharing needs an optimal algorithm; to enhance performance we should focus on increasing the global throughput of the computational grid. Load balancing in a grid distributes workloads across computing nodes to achieve optimal resource utilization, reduce latency, maximize throughput, and prevent any node from being overloaded or underloaded. Several existing load balancing methods and techniques address only distributed systems with interconnections between homogeneous resources and fast networks; in grid computing, these methods and techniques are not feasible due to the nature of the grid computing environment, such as its resource selection characteristics. To address this problem, we develop an algorithm that optimally balances loads between heterogeneous nodes. It is based on a tree structure in which load is managed at different levels, using neighbor-based and cluster-based load balancing algorithms, which reduces complexity and requires fewer nodes for communication during load balancing.

Jagdish Chandra Patni, M. S. Aswal, Aman Agarwal, Parag Rastogi

Analysis on Range Enhancing Energy Harvester (REACH) Mote Passive Wake-Up Radios for Wireless Networks

Today, wireless sensor networks are used increasingly in many application fields, and scientific researchers continue to improve and accelerate their capabilities. Designing such networks is very challenging because these energy-dependent sensors are relied upon to run for long periods, yet sensor nodes are usually battery-powered and thus have very short lifetimes. In this paper, we introduce a novel passive wake-up radio device called the REACH (Range Enhancing Energy Harvester) Mote for a traditional sensor node, which uses an energy harvester circuit combined with an ultra-low-power pulse generator to trigger the wake-up of the mote. This approach aims to reduce latency without increasing energy consumption.

N. Shyam Sunder Sagar, P. Chandrasekar Reddy, Chavali Sumana, Naga Snigdha, A. Charitha, Abhuday Saxena

UOSHR: UnObservable Secure Hybrid Routing Protocol for Fast Transmission in MANET

A mobile ad hoc network (MANET) is a collection of independent mobile nodes that are self-organized and self-configured. Routing in a MANET is a critical issue due to node mobility and the openness of the network, so an efficient routing protocol is needed to make the MANET secure and reliable. UOSPR deals with the concepts of anonymity, unlinkability, and unobservability, and uses BATMAN proactive routing, in which every node keeps best-next-hop details instead of maintaining the entire network topology. In UOSPR there is latency in data transmission if a route is not found in the routing table at a particular time interval. An ad hoc network has limited battery power, and since half of that power is already used for security in USOR and UOSPR, the remaining power has to be used efficiently for route calculation and data transmission. For this purpose, UOSHR is introduced by combining reactive (USOR) and proactive (UOSPR) routing protocols. This technique eliminates the latency in finding routes and achieves fast and secure transmission of data. NS2 is used to implement and validate the effectiveness of the design.

Gaini Sujatha, Md. Abdul Azeem

An ANN Approach for Fault Tolerant Wireless Sensor Networks

Wireless Sensor Network (WSN) is an emerging technology that has revolutionized the whole world. This paper proposes an artificial neural network model for a reliable and fault-tolerant WSN based on an exponential Bi-directional Associative Memory (eBAM). An eBAM has higher capacity for pattern pair storage than the conventional BAMs. The proposed model strives to improve the fault tolerance and reliability of packet delivery in WSN by transmitting small-sized packets called vectors. The vectors are associated with the original large-sized packets after encoding the associations between the hetero-associative vectors for the given problem space. This encoding is done by the application of the evolution equations of an eBAM. The performance characteristics of the proposed model are compared with other BAM models through simulation.

Sasmita Acharya, C. R. Tripathy

An Intelligent Stock Forecasting System Using a Unify Model of CEFLANN, HMM and GA for Stock Time Series Phenomena

The aim of this work is to propose and apply a unified model combining Computationally Efficient Functional Link Artificial Neural Networks (CEFLANN), Hidden Markov Models (HMM), and Genetic Algorithms (GA) to predict future trends from highly uncertain stock time series. We present a framework for an intelligent stock forecasting system that uses complete features to predict stock trading estimations that may result in better profits. Using the CEFLANN architecture, stock prices are transformed into independent sets of values that become inputs to the HMM. The trained and tested HMM output is used to identify trends in the stock time series data. We apply different methods to generate complete features that drive trading decisions from stock price indices, and we use genetic algorithms (GAs), a population-based optimization tool, to optimize the initial parameters of CEFLANN and the HMM. Finally, the results achieved by the unified model are compared with CEFLANN and other conventional forecasting methods using performance assessment techniques.

Dwiti Krishna Bebarta, T. Eswari Sudha, Ranjeeta Bisoyi

Subset K-Means Approach for Handling Imbalanced-Distributed Data

The effectiveness of clustering analysis relies not only on the assumed number of clusters but also on the class distribution of the data employed. This paper represents another step toward overcoming a drawback of K-means: its lack of defense against imbalanced data distributions. K-means is a partitional clustering technique that is well known and widely used for its low computational cost. However, its performance tends to be affected by skewed data distributions, i.e., imbalanced data: it often produces clusters of relatively uniform sizes even if the input data have varied cluster sizes, which is called the “uniform effect.” In this paper, we analyze the causes of this effect and illustrate that it occurs readily in the K-means clustering process; as the minority class decreases in size, the uniform effect becomes more evident. To prevent it, we revisit the well-known K-means algorithm and provide a general method to properly cluster imbalance-distributed data.

The proposed algorithm consists of a novel random subset generation technique, implemented by defining the number of subsets based on the unique properties of the dataset. We conduct experiments using ten UCI datasets from various application domains, comparing five algorithms on eight evaluation metrics. Experiment results show that our proposed approach has several distinctive advantages over the original K-means and other clustering methods.
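
For reference, the baseline being improved upon is plain K-means; a bare-bones NumPy version of its assignment/update loop, run on a deliberately imbalanced input, is sketched below (this is the standard algorithm, not the paper's subset method).

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal K-means: alternate nearest-center assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: attach each point to its nearest center
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # update step: recompute each center as its cluster mean
        # (keep the old center if a cluster happens to be empty)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# 95 points near 0 and only 5 near 10: a heavily imbalanced input of the
# kind on which plain K-means tends to exhibit the "uniform effect"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (95, 1)), rng.normal(10.0, 1.0, (5, 1))])
labels, centers = kmeans(X, 2)
```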

Ch. N. Santhosh Kumar, K. Nageswara Rao, A. Govardhan, N. Sandhya

Dynamic Recurrent FLANN Based Adaptive Model for Forecasting of Stock Indices

Prediction of future trends in financial time-series data is very important for decision making in the share market. Financial time-series data are usually non-linear, volatile, and subject to many other factors such as local and global issues, making them difficult to predict consistently and efficiently. This paper presents an improved Dynamic Recurrent FLANN (DRFLANN) based adaptive model for forecasting the stock indices of the Indian stock market. The proposed DRFLANN-based model employs the least mean squares (LMS) algorithm to train the weights of the networks. The Mean Absolute Percentage Error (MAPE), the Average Mean Absolute Percentage Error (AMAPE), and the variance of forecast errors (VFE) are used to determine the accuracy of the model. To further improve the forecasting results, we introduce three technical indicators: the Relative Strength Indicator (RSI), the Price Volume Change Indicator (PVCI), and the Moving Average Volume Indicator (MAVI). These three indicators were chosen because they focus on the important attributes of price, volume, and the combination of both price and volume of stock data. The results show the potential of the model as a tool for stock price prediction.
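
The LMS weight update named in the abstract can be sketched on a toy linear prediction task; the functional expansion and recurrence of the DRFLANN itself are omitted here, so this shows only the update rule.

```python
import numpy as np

def lms_train(X, d, mu=0.01, epochs=50):
    """Train weights w so that w @ x approximates target d, sample by sample."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x   # instantaneous prediction error
            w += mu * e * x      # LMS update: step along the error gradient
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([0.5, -1.0, 2.0])
w = lms_train(X, X @ true_w)     # noiseless targets, so w should recover true_w
```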

Dwiti Krishna Bebarta, Ajit Kumar Rout, Birendra Biswal, P. K. Dash

MPI Implementation of Expectation Maximization Algorithm for Gaussian Mixture Models

A Gaussian Mixture Model (GMM) is a mathematical model represented by a mixture of Gaussian distributions, each with its own mean and variance parameters. In applications like speaker recognition, where a GMM must be trained on hundreds of thousands of data points for several different speakers, training becomes computationally intensive and requires a long time. One way to solve this problem is to implement a parallel training algorithm. In this paper, we present a parallel algorithm for Expectation Maximization using the message passing paradigm, laying the emphasis on data parallelism. Our results show that on an input of 200,000 points, for an 8-dimensional Gaussian mixture model with 128 components, our parallel algorithm takes 81.86 seconds on 128 cores, about 92 times faster than the serial implementation.
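
The computation being parallelized is the standard EM loop for a GMM; a serial 1-D, two-component version is sketched below (the paper's MPI distribution of the E-step over ranks is not reproduced here).

```python
import numpy as np

def em_gmm(x, iters=100):
    """Fit a two-component 1-D GMM to x by Expectation-Maximization."""
    # crude initialization: split the data around its quartiles
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        n = r.sum(0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(0) / n
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-4, 1, 500), rng.normal(4, 1, 500)])
pi, mu, var = em_gmm(x)   # should recover components near -4 and +4
```

The E-step dominates the cost (it touches every point), which is why it is the natural target for data-parallel distribution across cores.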

Ayush Kapoor, Harsh Hemani, N. Sakthivel, S. Chaturvedi

Effect of Mahalanobis Distance on Time Series Classification Using Shapelets

Time series data is a sequence of values measured at equally spaced time intervals. Finding shapelets within a data set, and classifying the data based on those shapelets, is one of the most recent approaches to time series classification. In shapelet-based classification, the Euclidean distance measure is adopted to find the dissimilarity between two time series sequences. Though the Euclidean distance is known for its computational simplicity, it has some disadvantages: it requires the data to be standardized, it requires the two data objects being compared to be of the same length, and it is sensitive to noise. To overcome these problems, the Mahalanobis distance measure can be used. In the proposed work, time series data is classified using shapelets together with the Mahalanobis distance, which measures the distance between a point and a distribution; it accounts for correlations in the data set and does not depend on scale. Cost-complexity pruning is performed on the decision tree classifier. The Mahalanobis distance improves the accuracy of the algorithm, and cost-complexity pruning reduces the time complexity of testing and classifying unseen data. The experimental results show that the Mahalanobis distance measure leads to higher accuracy, and that due to decision tree pruning the algorithm is faster than the existing method.
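
The Mahalanobis distance between a point x and a distribution with mean μ and covariance S is √((x − μ)ᵀ S⁻¹ (x − μ)). A small NumPy sketch on synthetic data (not the paper's shapelet pipeline) shows the scale-invariance the abstract relies on:

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance of point x from the distribution estimated from data."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2)) * np.array([1.0, 10.0])  # unequal scales

d_a = mahalanobis(np.array([0.0, 10.0]), data)  # ~1 std dev along the wide axis
d_b = mahalanobis(np.array([3.0, 0.0]), data)   # ~3 std devs along the narrow axis
# Euclidean distance would rank (0, 10) far beyond (3, 0); Mahalanobis,
# normalizing by each axis's variance, correctly ranks (3, 0) as the outlier.
```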

M. Arathi, A. Govardhan

Text Summarization Basing on Font and Cue-Phrase Feature for a Single Document

In recent times, owing to the magnitude of data present digitally across networks and a wide range of databases, the need for text summarization has never been higher. This paper deals with summarization of text derived from the syntactic and semantic features of the words in a document. We calculate a threshold value from both the attributes and the semantic structure of individual words; the algorithm uses this threshold to weight each word in the document. The document first undergoes preprocessing, the resulting data is kept in a data set, and the proposed algorithm is then applied to that data to produce the summarized text.

S. V. S. S. Lakshmi, K. S. Deepthi, Ch. Suresh

A Novel Feature Selection and Attribute Reduction Based on Hybrid IG-RS Approach

Document preprocessing and feature selection are major problems in the fields of data mining, machine learning, and pattern recognition. Feature subset selection has become an important preprocessing step in data mining: to reduce the dimensionality of the feature space and to improve performance, document preprocessing, feature selection, and attribute reduction are essential. To address these problems, a theoretic framework based on a hybrid information gain-rough set (IG-RS) model is proposed. In this paper, document preprocessing is performed first; second, information gain is used to rank the importance of features; third, a neighborhood rough set model is used to evaluate the lower and upper approximation values; and fourth, an attribute reduction algorithm based on the rough set model is proposed. Experimental results show that the hybrid IG-RS model is more flexible for dealing with documents.

Leena H. Patil, Mohammed Atique

A Novel D&C Approach for Efficient Fuzzy Unsupervised Classification for Mixed Variety of Data

Clustering, or unsupervised classification, has a variety of requirements, the major one being the ability of the chosen clustering approach to scale and to handle mixed varieties of data. Data sets contain many types of variables: categorical/nominal, ordinal, binary (symmetric or asymmetric), ratio-scaled, and interval-scaled.

The latest approaches to unsupervised classification (Swarm Optimization based, customer segmentation based, soft computing methods such as GA based, entropy based and fuzzy based methods, and hierarchical approaches) suffer from two serious bottlenecks: they are either hybrid mathematical techniques or computationally demanding, which increases their complexity and hence compromises accuracy.

The proposed methodology addresses this problem with a newly and efficiently designed algorithm, yielding better, less complex and less computationally demanding pseudo code that may lead to a revolutionary approach in the future. We work on multivariate data sets. In the case of nominal variables, we quantify the data set by different methods to construct combined category quantifications and plot the object scores.

Here we propose an iterative procedure to calculate the cluster centers and the object memberships. To support our approach, a numerical experiment is demonstrated. For the full mixed variety of attributes (binary, interval and ratio scaled, along with ordinal attributes), an efficient methodology based on a Divide and Conquer (D&C) approach has been designed and is presented in this write-up.

We separately calculate the grouping criterion and the objective function and sum them to obtain a combined function; better groups may be obtained by maximizing this function.
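The paper's exact D&C procedure is not spelled out in the abstract, but the iterative alternation it mentions between cluster centers and object memberships is the classic fuzzy c-means scheme, sketched here for numeric attributes only (the mixed-type handling is the paper's own contribution):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Alternate membership updates and cluster-centre updates (fuzzy c-means)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships of each object sum to 1
    for _ in range(n_iter):
        W = U ** m                             # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # closer centre -> larger membership
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

The fuzzifier `m` controls how soft the partition is; `m → 1` recovers hard k-means style assignments.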

Rohit Rastogi, Saumya Agarwal, Palak Sharma, Uarvarshi Kaul, Shilpi Jain

Automatic Tag Recommendation for Journal Abstracts Using Statistical Topic Modeling

Topic modeling is a powerful technique for unsupervised analysis of large document collections. Topic models represent latent topics in text using hidden random variables and discover that structure with posterior inference. Topic models have a wide range of applications, such as tag recommendation, text categorization, keyword extraction and similarity search, in the broad fields of text mining, information retrieval and statistical language modeling.

In this work, a dataset of 200 abstracts falling under four topics is collected from journals in two different domains for tagging journal abstracts. The document model is built using LDA (Latent Dirichlet Allocation) with Collapsed Variational Bayes (CVB0) and Gibbs sampling. The built model is then used to find an appropriate tag for a given abstract, and an interface is designed to extract and recommend the tag.
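As a sketch of the Gibbs-sampling side of such a model (the CVB0 variant is omitted), the following is a minimal collapsed Gibbs sampler for LDA over tokenized documents; hyperparameters `alpha` and `beta` and the toy corpus are illustrative choices, not the paper's settings:

```python
import random

def gibbs_lda(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA; returns doc-topic and topic-word counts."""
    rng = random.Random(seed)
    vocab = sorted({w for doc in docs for w in doc})
    V, widx = len(vocab), {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * n_topics for _ in docs]       # document-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # tokens per topic
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] += 1; nkw[t][widx[w]] += 1; nk[t] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t, wi = z[d][i], widx[w]
                ndk[d][t] -= 1; nkw[t][wi] -= 1; nk[t] -= 1
                # full conditional p(z_i = k | all other assignments)
                p = [(ndk[d][k] + alpha) * (nkw[k][wi] + beta) / (nk[k] + V * beta)
                     for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=p)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][wi] += 1; nk[t] += 1
    return ndk, nkw, vocab
```

For tag recommendation, the dominant topic of an abstract (the argmax of its row in `ndk`) can be mapped to a human-readable tag.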

P. Anupriya, S. Karpagavalli

Diabetic Retinal Exudates Detection Using Extreme Learning Machine

Diabetic Retinopathy is a disorder of the retina resulting from the impact of diabetes on the retinal blood vessels. It is the major cause of blindness in people aged between 20 and 60. As the diabetic disorder progresses, the eyesight of a patient may begin to deteriorate, leading to blindness. In this work, the presence or absence of retinal exudates is identified using an Extreme Learning Machine (ELM). To discover the occurrence of exudates, features such as mean, standard deviation, centroid and edge strength are extracted from the Luv color space after segmenting the retinal image. A total of 100 images were used, of which 80 were used for training and 20 for testing. The classification task is carried out with the ELM classifier. Experimental results show that the model built using the Extreme Learning Machine outperforms the other models compared and effectively detects the presence of exudates in the retina.
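The core ELM idea is simple enough to sketch: hidden-layer weights are drawn at random and never trained, and only the output weights are solved analytically by least squares. The toy feature vectors below merely stand in for the paper's mean/standard-deviation/centroid/edge-strength features:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Random sigmoid hidden layer + pseudoinverse output weights (core ELM)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, untrained input weights
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden activations
    beta = np.linalg.pinv(H) @ y                  # analytic least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta > 0.5).astype(int)           # threshold for binary labels
```

Because training is a single pseudoinverse rather than iterative backpropagation, ELM training is very fast, which is part of its appeal for this kind of screening task.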

P. R. Asha, S. Karpagavalli

Spatial Data Mining Approaches for GIS – A Brief Review

Spatial Data Mining (SDM) has emerged as a new area for spatial data analysis. A Geographical Information System (GIS) stores data collected from heterogeneous sources in varied formats in the form of geodatabases representing spatial features with respect to latitude and longitude. Geodatabases are growing day by day, generating huge volumes of data from satellite images that provide orbit-related details and from other sources representing natural resources such as water bodies, forest cover and soil quality monitoring. Recently, GIS has been used in the analysis of traffic monitoring, tourist monitoring, health management and biodiversity conservation, and inferring information from geodatabases using computational algorithms has gained importance. The objective of this survey is to provide a brief overview of GIS data formats, data representation models, data sources, data mining algorithmic approaches, SDM tools, and the associated issues and challenges. Based on an analysis of the literature, this paper outlines the issues and challenges of GIS data, proposes an architecture to meet them, and views GIS as a Big Data problem.

Mousi Perumal, Bhuvaneswari Velumani, Ananthi Sadhasivam, Kalpana Ramaswamy

Privacy during Data Mining

The amount of data stored in computer files is increasing at a remarkable rate; the volume of data in the world is growing like water pouring into the ocean. At the same time, the users who operate on these data expect more sophisticated knowledge, and languages such as Structured Query Language (SQL) are not adequate to support this increasing demand for information. Data mining attempts to solve this problem. We propose a new approach for maintaining privacy while mining knowledge from data stores. Our approach is based on vector quantization: it quantizes the data to its nearest-neighbour values, using two algorithms, LBG and Modified LBG, in the codebook generation process.
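The LBG codebook generation the abstract mentions can be sketched as split-and-refine: start from the global mean, split every codeword into a perturbed pair, and refine with k-means style iterations. This is only the standard LBG skeleton (assuming the target size is a power of two), not the authors' Modified LBG:

```python
import numpy as np

def lbg_codebook(data, size, eps=0.01, n_iter=20):
    """Split-and-refine LBG: start from the global mean, double until `size`."""
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        # split each codeword into a slightly perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):                  # k-means style refinement
            d = np.linalg.norm(data[:, None] - codebook[None], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                pts = data[nearest == k]
                if len(pts):
                    codebook[k] = pts.mean(axis=0)
    return codebook
```

Privacy comes from the quantization itself: each record is replaced by its nearest codeword, so only the codebook-level aggregate survives into the mined data.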

Aruna Kumari, K. Rajasekhara Rao, M. Suman

A Memory Efficient Algorithm with Enhance Preprocessing Technique for Web Usage Mining

A huge amount of data is generated daily by billions of web users. The usage patterns of web data can be very valuable to a company for understanding consumer behavior. Web usage mining includes three phases, namely preprocessing, pattern discovery and pattern analysis. The focus of this paper is to establish an algorithm for pattern discovery based on the associations between the web pages users access. We propose a complete preprocessing methodology to identify distinct users; the foundation of the algorithm is finding the frequently accessed web pages. The biggest constraints in mining web usage patterns are computation overhead and memory overhead. The performance evaluation shows that our algorithm is efficient and scalable.
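The paper's own algorithm is not given in the abstract; as a minimal illustration of the underlying support-counting step, the sketch below counts single pages and page pairs across user sessions and keeps those meeting a minimum support:

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(sessions, min_support):
    """Count pages and page pairs per session; keep those meeting min_support."""
    pages = Counter(p for s in sessions for p in set(s))
    frequent = {(p,): c for p, c in pages.items() if c >= min_support}
    pairs = Counter(pair for s in sessions
                    for pair in combinations(sorted(set(s)), 2))
    frequent.update({pair: c for pair, c in pairs.items() if c >= min_support})
    return frequent
```

Counting over `set(s)` means a page visited twice in one session still contributes a support of one, which is the usual convention for session-level support.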

Nisarg Pathak, Viral Shah, Chandramohan Ajmeera

A Systematic Literature Review on Ontology Based Context Management System

In ubiquitous environments, context modeling components and personalization engines make systems adaptable to user behavior. Current research in context management has focused on specific domains or environments; to the best of our knowledge, no open standardized interfaces or standard representations of context models exist. Recently, ontology based Context Management Systems (CMS) have been proposed for the design of context aware systems in ubiquitous computing. The objective of this research is to systematically identify relevant work from various electronic data sources using the systematic literature review (SLR) method. Our searches identified 188 papers, of which 31 were included in this study. The identified CMS have been analyzed in order to identify research gaps in the ontology based Context Management (CM) research domain. The findings show that most existing ontology based CMS focus predominantly on specialized or specific application domains. Considering the findings in the context of previously proposed research agendas, key challenges in ontology based CMS include the lack of a consistency checker component for verifying sensed or gathered context information against the context model, and unsecured means of transferring context information between components in a distributed environment.

Rajarajeswari Subbaraj, Neelanarayanan Venkatraman

Improving Classification by Outlier Detection and Removal

Most existing state-of-the-art techniques for outlier detection and removal are based on density based clustering of the given dataset. In this paper we suggest a novel approach for iterative pruning of outliers based on non-alignment with a model created in an n-dimensional hyperspace. The technique can be used as a preprocessing step for any classification problem, regardless of the classifier used. We tested our hypothesis with Support Vector and Random Forest classifiers and obtained significant improvements for both: when pruned with our method, these standard classifiers showed an improvement of up to 4 percent.
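The abstract does not define the model-alignment score, so the sketch below substitutes a deliberately simple one (distance from the point's own class centroid) just to show the iterative prune-and-refit loop; the paper's actual hyperspace model would replace that scoring function:

```python
import numpy as np

def prune_outliers(X, y, frac=0.05, rounds=3):
    """Iteratively drop the points farthest from their own class centroid."""
    X, y = np.asarray(X, float), np.asarray(y)
    for _ in range(rounds):
        centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        # misalignment score: distance to the centroid of the point's class
        scores = np.array([np.linalg.norm(x - centroids[c]) for x, c in zip(X, y)])
        keep = scores <= np.quantile(scores, 1 - frac)
        X, y = X[keep], y[keep]
    return X, y
```

Because the centroids are recomputed after each pruning round, a gross outlier that initially drags its class centroid toward itself is removed first, letting later rounds score the remaining points against a cleaner model.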

Pankaj Kumar Sharma, Hammad Haleem, Tanvir Ahmad

An Efficient Approach to Book Review Mining Using Data Classification

With the growing usage of the internet and the popularity of opinion-rich resources such as online reviews, people are actively using information technology to form their opinions. These reviews give vital information about a product and can influence its demand in cyberspace. In this paper we present a hybrid technique that combines the TF-IDF method with opinion analysis using the multinomial Naïve Bayes classification algorithm to mine online book reviews, thereby improving the review results. The TF-IDF method uses a weighting technique to compute the weight of a word in a document, while opinion analysis gives the polarity towards a particular product. We amalgamate both techniques to improve the efficiency of the results, which may consequently be used by recommender systems for better recommendations.
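Both ingredients of such a hybrid are standard and can be sketched compactly: TF-IDF weights of the form tf(t, d) · log(N / df(t)), and a multinomial Naïve Bayes polarity classifier with add-one smoothing. How the paper combines the two is its own contribution and is not reproduced here:

```python
import math
from collections import Counter, defaultdict

def tf_idf(docs):
    """Weight of term t in doc d: tf(t, d) * log(N / df(t))."""
    N = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return [{t: c * math.log(N / df[t]) for t, c in Counter(d).items()} for d in docs]

def train_nb(docs, labels):
    """Multinomial Naive Bayes with add-one smoothing on raw term counts."""
    vocab = {t for d in docs for t in d}
    counts = defaultdict(Counter)
    prior = Counter(labels)
    for d, l in zip(docs, labels):
        counts[l].update(d)
    return vocab, counts, prior, len(docs)

def predict_nb(doc, vocab, counts, prior, n):
    best, best_lp = None, -math.inf
    for l in prior:
        total = sum(counts[l].values())
        lp = math.log(prior[l] / n) + sum(
            math.log((counts[l][t] + 1) / (total + len(vocab))) for t in doc)
        if lp > best_lp:
            best, best_lp = l, lp
    return best
```

A term that appears in every review gets a TF-IDF weight of zero, which is what lets the weighting suppress uninformative words before the polarity step.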

Harvinder, Devpriya Soni, Shipra Madan

Classification of Tweets Using Text Classifier to Detect Cyber Bullying

Cyber bullying and internet predation threaten minors, particularly teens who do not have adequate supervision when they use the computer. The enormous amount of information stored in unstructured texts cannot simply be used for further processing by computers, which typically handle text as simple sequences of character strings; specific (pre)processing methods and algorithms are therefore required to extract useful patterns. Text mining is the discovery of valuable, yet hidden, information in text documents, and text classification is one of the important research issues in this field. We propose an effective approach to detect cyber bullying messages on Twitter through a weighting scheme for feature selection.

K. Nalini, L. Jaba Sheela

An Overview on Web Usage Mining

The prolific growth of web-based applications and the enormous amount of data involved have led to the development of techniques for identifying patterns in web data. Web mining refers to the application of data mining techniques to the World Wide Web. Web usage mining is the process of extracting useful information from web server logs based on the browsing and access patterns of users; this information is especially valuable for business sites seeking improved customer satisfaction. Web usage mining discovers hidden, interesting usage patterns from web data in order to understand and better serve the needs of web based applications. It consists of three phases: preprocessing, pattern discovery and pattern analysis. In this paper, we present each phase in detail, describe the process of extracting useful information from server log files, and discuss application areas of web usage mining such as education, health, human-computer interaction and social media.

G. Neelima, Sireesha Rodda

