
2015 | Book

Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2014

Volume 2

Edited by: Suresh Chandra Satapathy, Bhabendra Narayan Biswal, Siba K. Udgata, J. K. Mandal

Publisher: Springer International Publishing

Book series: Advances in Intelligent Systems and Computing


About this book

This volume contains 87 papers presented at FICTA 2014: Third International Conference on Frontiers of Intelligent Computing: Theory and Applications. The conference was held during 14-15 November 2014 at Bhubaneswar, Odisha, India. The papers mainly focus on Network and Information Security, Grid Computing and Cloud Computing, Cyber Security and Digital Forensics, Computer Vision, Signal, Image & Video Processing, Software Engineering in Multidisciplinary Domains, and Ad-hoc and Wireless Sensor Networks.

Table of contents

Frontmatter
Security and Privacy in Cloud Computing: A Survey

Cloud computing is continuously evolving and showing consistent growth in the field of computing. It is gaining popularity by providing different computing services such as cloud storage, cloud hosting, and cloud servers for different types of industries as well as academia. On the other side, there are many issues related to cloud security and privacy. Security is still a critical challenge in the cloud computing paradigm. These challenges include loss of users' secret data, data leakage, and disclosure of personal data. Considering security and privacy within the cloud, there are various threats to users' sensitive data in cloud storage. This paper is a survey of the security and privacy issues and available solutions. It also presents different opportunities in security and privacy in the cloud environment.

Mahesh U. Shankarwar, Ambika V. Pawar
Analysis of Secret Key Revealing Trojan Using Path Delay Analysis for Some Cryptocores

The outsourcing of IC design across the globe has been a major trend of the semiconductor industry in the recent era, boosted by increasing profit margins. However, the vulnerability to the introduction of malicious circuitry (Hardware Trojan Horses) in the untrusted phases of chip development has been a major deterrent to this cost-effective design methodology. Analysis, detection, and correction of such Trojan Horses have been a point of focus among researchers in recent years. In this work, analysis of a secret key revealing Hardware Trojan Horse is performed. This Trojan Horse creates a conditional path delay at the output of the cryptocore according to the stolen bit of the secret key per iteration. The work has been extended from the RTL design stage to the pre-fabrication stage of the ASIC platform, where area and power analyses have been made to distinguish the affected core from a normal core at the 180 nm technology node.

Krishnendu Guha, Romio Rosan Sahani, Moumita Chakraborty, Amlan Chakrabarti, Debasri Saha
Generating Digital Signature Using DNA Coding

This work focuses on signing data using DNA coding for limited-bandwidth or low-computation systems. The proposed process has two modules. In the first module, the sender generates a digital signature by signing the message using the DNA coding sequence. The second module is a hybrid of public key cryptography: a DNA symmetric key is generated, encrypted with a DNA public key, and shared with the intended recipient. Messages are then exchanged using the shared symmetric key. In both modules, the work uses the simple non-linear XOR function to encrypt and decrypt the DNA coding sequence. The computation time required to perform the XOR operation matches the capabilities of limited-bandwidth systems and suits our work. In addition, the work achieves high security at two levels: one is the secret matching of plaintext letters to the DNA codon sequence, and the second is an increase in the complexity of breaking the algorithm by brute force to the square of the complexity achieved with a 128-bit binary key, for the same length of DNA key.

Gadang Madhulika, Chinta Seshadri Rao
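The XOR-over-DNA idea in the abstract above can be sketched in a few lines. This is an illustrative sketch, not the paper's actual codon table: it assumes the common 2-bit base encoding A=00, C=01, G=10, T=11 and a repeating symmetric key.

```python
# Hypothetical DNA-style XOR encryption sketch (assumed 2-bit base encoding).
BASE_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_BASE = {v: k for k, v in BASE_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four DNA bases (2 bits per base, MSB first)."""
    bases = []
    for b in data:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_BASE[(b >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(dna: str) -> bytes:
    """Inverse of bytes_to_dna."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        b = 0
        for base in dna[i:i + 4]:
            b = (b << 2) | BASE_BITS[base]
        out.append(b)
    return bytes(out)

def xor_encrypt(plain: bytes, key: bytes) -> bytes:
    """Simple XOR cipher; the key repeats to cover the message."""
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plain))

msg = b"HELLO"
key = dna_to_bytes("ACGTACGT")  # a DNA-encoded symmetric key (toy value)
cipher_dna = bytes_to_dna(xor_encrypt(msg, key))
recovered = xor_encrypt(dna_to_bytes(cipher_dna), key)
```

Because XOR is its own inverse, applying the same key twice recovers the plaintext, which is what keeps the scheme cheap enough for low-computation devices.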
DNA Encryption Based Dual Server Password Authentication

Security authentication is a crucial issue in networking for establishing communication between clients and servers, or between servers. Authentication is required whenever a secure exchange of information is sought between two computers. In normal password-authenticated key exchange, all clients' passwords are stored on a single server. If the server is compromised by hacking or even an insider attack, all passwords on the server are disclosed. This paper proposes a two-server password-authenticated key exchange which authenticates a single client across two servers, thereby making the loss of passwords to hackers much more difficult. The paper proposes DNA-based encryption and decryption along with the ElGamal encryption technique. This prevents an intruder from using information obtained from one server to access vital login information.

P. V. S. N. Raju, Pritee Parwekar
Dynamic Cost-Aware Re-replication and Rebalancing Strategy in Cloud System

Cloud computing is a “pay per use” model, where users or clients pay for the computational resources they use. Furthermore, in the cloud, failures are normal. Therefore, cost is an important factor to be considered along with availability, performance, and reliability. It is also not guaranteed that the benefits accrued from replication will be greater than the cost incurred. Thus, this paper proposes an algorithm named Dynamic Cost-aware Re-replication and Re-balancing Strategy (DCR2S). This algorithm optimizes the cost of replication using the concept of the knapsack problem. The proposed algorithm is evaluated using CloudSim, and experimental results demonstrate its effectiveness.

Navneet Kaur Gill, Sarbjeet Singh
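The knapsack formulation mentioned above can be illustrated with the standard 0/1 dynamic program: choose which data blocks to re-replicate so that total benefit is maximised within a cost budget. This is a generic sketch, not the paper's DCR2S algorithm; the costs and gains below are hypothetical.

```python
# Generic 0/1 knapsack sketch for cost-aware replica selection.
def knapsack(costs, benefits, budget):
    """Return the maximum total benefit achievable within the cost budget."""
    best = [0] * (budget + 1)
    for cost, benefit in zip(costs, benefits):
        # Iterate budgets backwards so each item is used at most once.
        for b in range(budget, cost - 1, -1):
            best[b] = max(best[b], best[b - cost] + benefit)
    return best[budget]

# Hypothetical replication costs and availability gains per data block.
costs = [3, 4, 2, 5]
gains = [5, 6, 3, 8]
print(knapsack(costs, gains, 8))  # best selection within a budget of 8
```

The backward inner loop is what distinguishes 0/1 semantics (replicate a block once or not at all) from the unbounded variant.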
Signalling Cost Analysis of Community Model

Data fusion is generally defined as the application of methods that combine data from multiple sources and gather that information in order to draw conclusions. This paper analyzes the signalling cost of the different data fusion filter models available in the literature against the new community model. The signalling cost of the community model has been mathematically formulated by incorporating the normalized signalling cost for each transmission. This process reduces the signalling burden on the master fusion filter and improves throughput. A comparison of the signalling costs of the existing data fusion models and the new community model is also presented in this paper. The results show that our community model improves on the existing models in terms of signalling cost.

Boudhayan Bhattacharya, Banani Saha
Steganography with Cryptography in Android

The paper presents work on developing a secure data communication system. It uses two cryptographic algorithms, RSA and AES, along with LSB steganography, all on the Android platform. Combining these three algorithms helps build a secure communication system on Android that is capable of withstanding multiple threats.

The input data is encrypted using AES with a user-defined key prior to being embedded in an image using the LSB algorithm. The key used for encryption is then wrapped with the receiver's public key (RSA), and the wrapped key is itself kept hidden so that it can be passed to the receiver securely, making this a reliable communication channel for sensitive data. All stages ensure that the secret data cannot be broken easily; the steganographic algorithm thus introduces an additional level of security.

Akshay Kandul, Ashwin More, Omkar Davalbhakta, Rushikesh Artamwar, Dinesh Kulkarni
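The LSB embedding step described above can be sketched on raw pixel bytes. This is a minimal illustration: a plain byte string stands in for the AES ciphertext, and the "image" is just a toy byte array.

```python
# Minimal LSB steganography sketch: hide one payload bit per pixel byte.
def embed_lsb(pixels: bytearray, payload: bytes) -> bytearray:
    """Write each payload bit (MSB first) into the LSB of one pixel byte."""
    out = bytearray(pixels)
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover too small for payload")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_lsb(pixels: bytes, n_bytes: int) -> bytes:
    """Read back n_bytes worth of LSBs."""
    out = bytearray()
    for i in range(n_bytes):
        b = 0
        for j in range(8):
            b = (b << 1) | (pixels[i * 8 + j] & 1)
        out.append(b)
    return bytes(out)

cover = bytearray(range(64))      # a toy 64-byte "image"
stego = embed_lsb(cover, b"key")  # 3 payload bytes touch 24 pixel bytes
```

Since only the least significant bit of each touched pixel changes, each pixel value moves by at most 1, which is why LSB embedding is visually imperceptible.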
A Cloud Based Architecture of Existing e-District Project in India towards Cost Effective e-Governance Implementation

The e-District project is a comprehensive, web-enabled service portal designed by the Government of India. It acts as an electronic gateway into the Government's portfolio of services for common citizens. The e-District portal has redefined public services by delivering them at the common citizen's doorstep at any time. The Government of India is implementing the e-District project for each state in India by hosting it in each State Data Center, which leads to huge costs, since an individual hosting environment has to be built for each state's e-District project. In this study, the authors propose a cost-effective private cloud based architecture for the e-District project with the help of multiple virtual machines created from high-end physical server machines. The configuration and number of physical machines are subject to the current capacity plan. Apart from cost, this proposed architecture would help share common services seamlessly among the different states' e-District projects. It would also provide better security management, better control of maintenance, and flexibility for disaster recovery planning.

Manas Kumar Sanyal, Sudhangsu Das, Sajal Bhadra
A Chaotic Substitution Based Image Encryption Using APA-transformation

In this paper, we propose a new chaotic substitution based image encryption algorithm. Our approach combines the merits of chaos, substitution boxes, APA-transformation, and a random Latin square to design a cryptographically effective and strong encryption algorithm. The chaotic logistic map is used to choose one of a thousand S-boxes as well as the row and column of the selected S-box. A keyed Latin square is generated using a 256-bit external key. The selected S-box value is transformed through the APA-transformation, which is utilized along with the Latin square to substitute the pixels of the image in cipher block chaining mode. Round operations are applied to achieve high security in the final encrypted content. Performance investigations through statistical results demonstrate the consistency and effectiveness of the proposed algorithm.

Musheer Ahmad, Akshay Chopra, Prakhar Jain, Shahzad Alam
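The chaotic logistic map used above for S-box selection is the standard recurrence x_{n+1} = r·x_n·(1−x_n). A minimal sketch of deriving key-dependent indices from it follows; the parameter r = 3.99 and the quantisation rule are assumptions for illustration, not the paper's exact scheme.

```python
# Logistic-map sketch: derive pseudo-random indices (e.g. S-box, row, column
# selections) from a key-dependent seed x0 in (0, 1).
def logistic_indices(x0: float, n: int, mod: int, r: float = 3.99):
    """Iterate x = r*x*(1-x) and quantise each state into [0, mod)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(int(x * mod) % mod)
    return xs

idx = logistic_indices(0.3141592, 5, 256)  # seed would come from the key
print(idx)
```

With r near 4 the map is chaotic: the sequence is fully determined by the seed (so both ends can reproduce it) yet extremely sensitive to it, which is the property these schemes exploit.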
Security, Trust and Implementation Limitations of Prominent IoT Platforms

The Internet of Things (IoT) is indeed a novel technology wave that is bound to make its mark, where anything and everything (physical objects) is able to communicate over an extended network using both wired and wireless protocols. Here, "physical objects" means any hardware device that can sense a real-world parameter and push output based on that reading. The number of such devices, the volume of data they generate, and the security concerns, not only from a communication perspective but also from their mere physical presence outside a secure/monitored vault, demand innovative architectural approaches, applications, and end-user systems. A middleware platform/framework for IoT should be able to handle communication between these heterogeneous devices, their discovery, and the services they offer in real time. The move from an internet of computers to an internet of anything and everything is increasing the span of security threats and risks. A comparative study of existing prominent IoT platforms will help in identifying the limitations and gaps, thereby acting as a benchmark for building an efficient solution.

Shiju Satyadevan, Boney S. Kalarickal, M. K. Jinesh
An Improved Image Steganography Method with SPIHT and Arithmetic Coding

The paper proposes a steganography scheme which focuses on enhancing embedding efficiency. There are only limited ways in which one can alter the contents of a cover image. So, to reach a high embedding capacity, the proposed method compresses the data using the SPIHT algorithm and arithmetic coding, after which the information is embedded into the cover medium. The proposed method offers an efficient strategy for hiding an image in a cover image of the same size without much distortion, such that it can be retrieved successfully. The advantage of the system is that the cover medium is reduced to the same size as the input image, where in normal cases it is twice as large or even more. The cover image can also be recovered from the original stego-image.

Lekha S. Nair, Lakshmi M. Joshy
Image Steganography – Least Significant Bit with Multiple Progressions

In this paper we propose a new technique for hiding a message in the least significant bits of a cover image using different progressions. We compare the cover image and stego images with histograms and compute the CPU time, Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) index, and Feature Similarity Index Measure (FSIM) of all images for our method and earlier approaches in an empirical study. Experimental results show that our method is more efficient and faster than classical LSB and LSB using primes.

Savita Goel, Shilpi Gupta, Nisha Kaushik
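The MSE and PSNR metrics used in the comparison above are standard; for 8-bit images, PSNR = 10·log10(255²/MSE). A minimal sketch on flattened pixel lists:

```python
import math

# Standard cover-vs-stego quality metrics for 8-bit pixel values (0..255).
def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(peak ** 2 / m)

cover = [10, 20, 30, 40]
stego = [10, 21, 30, 41]  # two LSBs flipped
print(round(psnr(cover, stego), 2))
```

Higher PSNR means the stego image is closer to the cover; LSB-only changes typically keep PSNR very high because each altered pixel differs by at most 1.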
On the Implementation of a Digital Watermarking Based on Phase Congruency

In this paper, a human visual system (HVS) model guided least significant bit (LSB) watermarking approach for copyright protection is proposed. The proposed algorithm can embed more information into less-featured areas within the host image, determined by phase congruency. Phase congruency offers a dimensionless quantity which is an excellent measure of feature points with high information and low redundancy within an image. Regions with fewer features correspond to the least visually significant aspects of an image, so alterations within these areas will be less noticeable to any viewer. Furthermore, the algorithm is tested for imperceptibility and robustness. Thus a new spatial domain image watermarking scheme with higher bit capacity is proposed.

Abhishek Basu, Arindam Saha, Jeet Das, Sandipta Roy, Sushavan Mitra, Indranil Mal, Subir Kumar Sarkar
Influence of Various Random Mobility Models on the Performance of AOMDV and DYMO

A Mobile Ad hoc Network (MANET) is a collection of autonomous, self-organizing mobile devices that communicate with each other by creating a network in a given area. The moving behavior of each mobile device in the MANET is determined by the mobility model, which is a crucial component in its performance evaluation. In the present work, we have investigated the influence of various random mobility models on the performance of the Ad hoc On-demand Multipath Distance Vector (AOMDV) routing protocol and the Dynamic MANET On-demand (DYMO) protocol. In order to validate our work, three different mobility scenarios are considered: Random Waypoint (RWP), Random Walk with Wrapping (RWP-WRP), and Random Walk with Reflection (RWP-REF). Experimental results establish that the performance of the routing protocols is significantly influenced by parameters such as the number of nodes, end-to-end delay, and packet delivery ratio.

Suryaday Sarkar, Meghdut Roychowdhury, Biswa Mohan Sahoo, Souvik Sarkar
Handling Data Integrity Issue in SaaS Cloud

Cloud computing is a technology that is being widely adopted by many organizations like Google, Microsoft, etc., in order to make resources available to multiple users at a time over the internet. Many issues have been identified due to which cloud computing has not yet been adopted by all users. The aim of this paper is to analyze the performance of encryption algorithms in order to improve data integrity in the SaaS cloud. The proposed modified algorithm encrypts the data from different users using the cryptographic algorithms RSA, Bcrypt, and AES. The algorithm is selected by the user based on the level of security that needs to be applied to the user's data. Performance analysis of the given framework and algorithm is done using CloudSim. From our results it can easily be seen that the time taken for encryption of data using the discussed framework and proposed algorithms is much less than that of various other techniques.

Anandita Singh Thakur, P. K. Gupta, Punit Gupta
Load and Fault Aware Honey Bee Scheduling Algorithm for Cloud Infrastructure

Cloud computing is a new paradigm in the field of distributed computing after grid computing. Cloud computing is more promising in terms of request failure, security, flexibility, and resource availability. Its main feature is maintaining the quality of service (QoS) provided to the end user in terms of processing power, failure rate, and more. Resource management and request scheduling are therefore important and complex problems in cloud computing, since maintaining resources while scheduling requests becomes complex due to the distributed nature of the cloud. Many algorithms have been proposed to solve this problem, such as ant colony based, cost based, and priority based algorithms, but all of them consider the cloud environment fault-free, which degrades the performance of the existing algorithms. So a load and fault aware Honey Bee scheduling algorithm is proposed for cloud Infrastructure as a Service (IaaS). This algorithm takes into consideration the fault rate and the load on a datacenter to improve performance and QoS in the cloud IaaS environment.

Punit Gupta, Satya Prakash Ghrera
Secure Cloud Data Computing with Third Party Auditor Control

Cloud computing has been targeted as the future on-demand architecture of the IT enterprise. Cloud computing can be used with a trustworthy mechanism to provide greater data resources in comparison to traditional limited-resource computing, but security challenges can block its adoption by IT enterprises. In this paper, our aim is to provide a trustworthy solution for cloud computing. Our proposed methodology provides secure centralized control and an alert system. We apply the same token with distributed verification under a centralized data scheme. Our approach achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving clients, which can be controlled by the servers. It can support data updating, deletion, and visualization on demand with restrictive tokenization. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even colluding attacks.

Apoorva Rathi, Nilesh Parmar
Effective Disaster Management to Enhance the Cloud Performance

Cloud computing is one of today’s most exciting technologies because of its capacity to reduce the cost associated with computing while increasing flexibility and scalability for computer processes. IT organizations have expressed concerns about critical issues such as security that accompany the widespread implementation of cloud computing. Security, in particular, is one of the most debated issues in the field, and several enterprises look at cloud computing warily due to projected security risks. Two other critical features must also be taken care of: availability and reliability, which are the most important factors for cloud computing resources in maintaining high user satisfaction and business continuity. This paper presents a survey of a novel approach to disaster management through an efficient scheduling mechanism and an efficient load balancing technique to control disaster, thereby enhancing the performance of cloud computing.

Chintureena, V. Suma
Implementation of Technology in Indian Agricultural Scenario

Agriculture is a pillar industry and a key component of a nation's economy all over the world. Hence, innovation in agricultural science and the application of technology have become an important force for supporting the development of modern agriculture.

The existence and continuity of any technology depends on customer satisfaction. Satisfaction with any service or product can be achieved by applying the basic principles of software engineering. Customer satisfaction depends on parameters such as the resultant service quality provided. This paper introduces a model built on a hierarchical bottom-up approach for Indian agriculture in which different aspects of agriculture are localized. The paper presents the design and implementation of the model, and finally the results are analyzed by considering the agricultural market with and without the middleman as a parameter.

Phuritshabam Robert, B. Naveen Kumar, U. S. Poornima, V. Suma
JPEG Steganography and Steganalysis – A Review

Steganography and steganalysis are important topics in information hiding. Steganography refers to the technology of hiding data in digital media without causing any visible distortion of the media. On the other hand, steganalysis is the art of detecting the presence of steganography in the media. This paper provides a detailed survey of steganography and steganalysis for digital images, mainly covering the fundamental concepts, the progress of steganographic methods for images in JPEG format, and the development of the corresponding steganalytic schemes. Finally, a comparative study is done on the strengths and weaknesses of these different methods.

Siddhartha Banerjee, Bibek Ranjan Ghosh, Pratik Roy
Enhanced Privacy and Surveillance for Online Social Networks

An Online Social Network (OSN) is a platform to build social networks or social relations among people. OSNs allow users to share interests, activities, and social and professional details. Some OSNs currently in use are Facebook, Twitter, Orkut, etc. The major problem with social networks is providing privacy to users. Social privacy, institutional privacy, and surveillance are the key problems faced by OSN users. We have developed a novel method to provide institutional privacy and surveillance to OSN users. We introduce a new algorithm, HSurveillance, which effectively implements surveillance in an OSN. Institutional privacy is provided to users through a locking mechanism. We believe that the proposed method will resolve the key security and privacy problems experienced by OSN users.

Teja Yaramasa, G. Krishna Kishore
Neuro-key Generation Based on HEBB Network for Wireless Communication

In this paper, a key generation technique for encryption/decryption, based on a single-layer perceptron network (Hebb network), is proposed for the wireless communication of information. Two Hebb neural networks are used, one at the sender end and one at the receiver end. Both networks have a random number generator (RNG) that generates identical inputs at both ends. As the networks are synchronized, they generate the same output pair for the same input pair, which is used as the secret key to encrypt the plaintext through a reversible computation, forming the ciphertext. The receiver recovers the plaintext by performing the identical operation. The key is never transmitted across the network during encoding. This process ensures the integrity and confidentiality of a message transmitted via any medium, as the secret key is unknown to any intruder, thus offering a potential solution to the man-in-the-middle attack.

Arindam Sarkar, J. K. Mandal, Pritha Mondal
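The core idea above, two synchronised generators producing the same key so that the key itself never crosses the channel, can be sketched as follows. Here an identically seeded `random.Random` stands in for the synchronised Hebb networks; it is a placeholder for illustration, not the paper's neural learning rule.

```python
import random

# Sketch: both endpoints derive the same key from shared synchronised state,
# so only ciphertext travels over the wireless channel.
def derive_key(seed: int, length: int) -> bytes:
    """Placeholder key derivation; stands in for the synchronised networks."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(length))

def xor_stream(data: bytes, key: bytes) -> bytes:
    """Reversible XOR computation used for both encryption and decryption."""
    return bytes(d ^ k for d, k in zip(data, key))

shared_state = 2014                 # held by both ends, never transmitted
msg = b"wireless"
cipher = xor_stream(msg, derive_key(shared_state, len(msg)))   # sender
plain = xor_stream(cipher, derive_key(shared_state, len(msg))) # receiver
```

An eavesdropper sees only `cipher`; without the shared state there is nothing on the channel from which to reconstruct the key, which is the property the paper relies on against man-in-the-middle attacks.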
KSOFM Network Based Neural Key Generation for Wireless Communication

In this paper, a single layer perceptron (KSOFM network) based neural key generation scheme for encryption/decryption in wireless communication is proposed. Identical KSOFM networks are used on the sender and receiver sides, and the final output weight is used as the secret key for encryption and decryption. The final output matrix of the KSOFM network is taken, and the minimum-value neuron in the matrix is considered the secret key. Depending on the input and output neurons, different keys are generated for each session, forming secret session keys. On the sender side, the plaintext is encrypted with the secret key to form the ciphertext via an XOR operation between the secret key and the plaintext. The receiver uses the same secret key to decrypt the ciphertext back to plaintext. The secret key is not sent via any medium, which mitigates the man-in-the-middle attack. Moreover, various tests are performed, including the chi-square test, which shows comparable results for the proposed system.

Madhumita Sengupta, J. K. Mandal, Arindam Sarkar, Tamal Bhattacharjee
Hopfield Network Based Neural Key Generation for Wireless Communication (HNBNKG)

In this paper, a key generation and encryption/decryption technique based on the Hopfield neural network is proposed for wireless communication. Hopfield neural networks at both ends form identical input and weight vectors, which in turn produce an identical output vector used to form the secret key for encryption/decryption. Using this secret key, the plaintext is encrypted to form the ciphertext. Encryption is performed by an exclusive-OR operation between the plaintext and the secret key; decryption is performed at the receiver through an exclusive-OR operation between the ciphertext and the identically generated secret key. The receiver thus regenerates the original message sent by the sender as an encrypted stream. In the HNBNKG technique, sender and receiver never exchange the secret key. This ensures that while a message is in transit between sender and receiver, nobody can regenerate it, as no key is exchanged.

J. K. Mandal, Debdyuti Datta, Arindam Sarkar
Automatic Video Scene Segmentation to Separate Script and Recognition

Text or character detection in images or videos is a challenging problem in video content retrieval. In this paper we propose an improved VTDAR (Video Text Detection and Recognition) template matching algorithm applied to the automatic extraction of text from images and video frames. Video optical character recognition using template matching is a system model that recognizes characters, upper- and lower-case alphabets, digits, and special characters by comparing two images of the alphabet. The objectives are to develop a model for the video text detection and recognition system and to implement the template matching algorithm in the system model. Template matching techniques are more sensitive to font and size variations of the characters than feature classification methods. The system was tested on 50 videos with 1250 video key-frames and 1530 text lines; 92.15% of the characters were recognized successfully using texture-based approaches to the automatic detection, segmentation, and recognition of visual text occurrences in images and video frames.

Bharatratna P. Gaikwad, Ramesh R. Manza, Ganesh R. Manza
Gesture: A New Communicator

This paper is an illustrative approach to developing a visual interface for a gesture recognition system using color based blob detection. The software developed using the prescribed framework serves as a gesture interpretation system and is used to emulate the computer mouse with finger gestures. The objective is to develop an intuitive way to interact with computers and other digital devices, yet make it easy to use and cost effective at the same time. Although vision interfaces for gesture recognition have been researched and developed for some time, this approach has its own uniqueness and is more effective in many circumstances. The prescribed framework minimizes hardware requirements, as it only requires a webcam beyond the computer itself, making it cost effective and easy to obtain. The predefined gestures are simple yet intuitive, as they are inspired by everyday gestures and movements used to interact with tools and equipment in daily life.

Saikat Basak, Arundhuti Chowdhury
A Novel Fragile Medical Image Watermarking Technique for Tamper Detection and Recovery Using Variance

In this paper, we propose a novel fragile block based medical image watermarking technique to produce high quality watermarked medical images, verify the integrity of the ROI, accurately detect tampered blocks inside the ROI using both average and variance, and recover the original ROI without loss. In the proposed technique, the medical image is segmented into three sets of pixels: ROI, Region of Non-Interest (RONI), and border pixels. Authentication data along with ROI information is then embedded inside the border, and ROI recovery data is embedded inside the RONI. Experimental results reveal that the proposed method produces high quality watermarked medical images, identifies the presence of tampering inside the ROI with 100% accuracy, and recovers the original ROI without any loss.

R. Eswaraiah, E. Sreenivasa Reddy
MRI Skull Bone Lesion Segmentation Using Distance Based Watershed Segmentation

Separating touching objects in an image is a very difficult task. The task is all the more difficult when the touching objects are healthy and unhealthy tissues of lesions in the human brain.

A gray level MR image may be considered as a topographic relief and thus Watershed segmentation is used. Watershed refers to a ridge that divides areas drained by different river systems. A catchment basin is interpreted as a geographical area draining into a river or reservoir. The concept of watershed and catchment basins are used for analyzing biological tissues.

An MR image segmentation method is developed using Distance and Watershed Transforms.

Ankita Mitra, Arunava De, Anup Kumar Bhattacharjee
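The distance transform that seeds a distance-based watershed can be sketched with a multi-source BFS: each foreground pixel receives its Manhattan distance to the nearest background pixel, and local maxima of this map then serve as watershed markers. A toy sketch follows (real implementations typically use scipy/skimage):

```python
from collections import deque

# Multi-source BFS distance transform on a binary mask (Manhattan metric).
def distance_transform(grid):
    """Return per-cell distance to the nearest background (0) cell."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 0:      # every background cell is a BFS source
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(distance_transform(mask)[2][2])  # centre of the blob: distance 2
```

Flooding the negated distance map from its maxima is what splits two touching blobs along the "ridge" where their catchment basins meet.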
Extraction of Texture Based Features of Underwater Images Using RLBP Descriptor

In this paper, we present an approach to extracting texture features of underwater images using the Robust Local Binary Pattern (RLBP) descriptor. The literature survey reveals that texture parameters remain constant for a scene patch over a whole underwater image sequence; therefore, we propose a technique to extract texture features that can be used for object recognition and tracking. Underwater images suffer from blurring and low contrast, and the performance of feature extractors is very poor if they are employed directly. Thus, we propose a novel image enhancement technique which combines individual filters such as homomorphic filtering, curvelet denoising, and LBP based diffusion. We employ a DoG based feature detector, and for each detected interest point the texture description is extracted using the RLBP feature descriptor. The proposed feature extraction technique is compared and evaluated extensively against well known feature extractors using datasets acquired in an underwater environment.

S. Nagaraja, C. J. Prabhakar, P. U. Praveen Kumar
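RLBP builds on the basic 3×3 LBP code: the eight neighbours are thresholded against the centre pixel and the resulting bits are read as one byte. A minimal sketch of the plain LBP code follows (RLBP's robustness modification and the bit ordering here are illustrative assumptions):

```python
# Basic 3x3 Local Binary Pattern code for one pixel neighbourhood.
def lbp_code(patch):
    """patch: 3x3 list of intensities; returns the 8-bit LBP code."""
    c = patch[1][1]
    # Neighbours read clockwise starting from the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:        # threshold against the centre pixel
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

A histogram of these codes over a region is the texture descriptor; because the code depends only on relative intensities, it is robust to the monotonic illumination changes common underwater.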
Summary-Based Efficient Content Based Image Retrieval in P2P Network

The World Wide Web provides an enormous number of images, which are generally searched using text based methods. Searching for images using image content is necessary to overcome the limitations of text based search. Generally, in unstructured P2P systems like Gnutella, a completely blind search is used that floods the network with high query traffic. In this paper, we present a P2P system that uses "informed search", in which peers try to learn about the information maintained at their neighbours in order to minimise the query traffic. The images are first clustered using the K-means clustering technique, and then each peer exchanges its cluster information with its neighbouring peers using PROBE and ECHO. Typically one summary table per peer is maintained, in which the neighbouring peers' data information is stored. When processing queries, these summaries are used to choose the peer most likely to contain information relevant to the query. If none of the neighbours has a match, the standard random-walk algorithm is used for query propagation.

Mona, B. G. Prasad
Dynamic Texture Segmentation Using Texture Descriptors and Optical Flow Techniques

A texture in motion is known as a dynamic texture. As the texture can change in shape and direction over time, segmentation of dynamic textures is a challenging task. Furthermore, the spatial (i.e., appearance) and temporal (i.e., motion) features of a dynamic texture may differ from each other. However, studies in the current literature are mostly limited to the characterization of single dynamic textures. In this paper, the segmentation problem for image sequences consisting of cluttered dynamic textures is addressed. Two local texture descriptor based techniques and the Lucas-Kanade optical flow technique are combined to achieve accurate segmentation. The two texture descriptors are the Local Binary Pattern and the Weber Local Descriptor; they are used in the spatial as well as the temporal domain, and help to segment a video frame into distinct regions based on region histograms. The Lucas-Kanade optical flow technique is used in the temporal domain to determine the direction of motion of the dynamic texture in a sequence. These three features are computed for every section of each frame and the corresponding histograms are obtained. The histograms are concatenated and compared with a suitable threshold to obtain the dynamic texture segmentation.

Pratik Soygaonkar, Shilpa Paygude, Vibha Vyas
Design and Implementation of Brain Computer Interface Based Robot Motion Control

In this paper, a Brain Computer Interface (BCI) robot motion control system for patient assistance is designed and implemented. The proposed system acquires data from the patient's brain through a group of sensors in an Emotiv Epoc neuroheadset. The acquired signal is processed, and from the processed data the BCI system determines the patient's requirements and issues commands (output signals) accordingly. The processed data is translated into action by the robot as per the patient's requirement. A graphical user interface (GUI) has been developed for controlling the motion of the robot. The proposed system is designed to help persons with severe disabilities, such as spinal cord injuries or paralytic attacks, and is also helpful to all those who cannot move physically and find it difficult to express their needs verbally.

Devashree Tripathy, Jagdish Lal Raheja
Advanced Adaptive Algorithms for Double Talk Detection in Echo Cancellers: A Technical Review

An acoustic echo cancellation system is one of the most important breakthroughs in the field of adaptive systems. Today acoustic echo cancellers (AEC) are an integral part of full-duplex hands-free voice communication. Conventional echo cancellers use a linear model to represent the echo path. However, many consumer devices include loudspeakers and power amplifiers that generate non-linear distortions. Non-linearity arises from the use of low-cost loudspeakers, microphones and poorly designed enclosures in an AEC system; it causes vibration and harmonic distortion and degrades speech quality. The double talk detector (DTD) is a key component of an AEC: it senses when the far-end signal is corrupted by near-end speech and freezes the adaptation of the model filter to prevent divergence of the adaptive filter. Various authors have proposed different algorithms for double talk detection. Some of the most popular are the Geigel algorithm, cross-correlation based DTD, normalized cross-correlation based DTD, and variable impulse response DTD. In this paper, several double talk detection algorithms for the non-linear AEC setting are discussed.
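Of the algorithms surveyed, the Geigel detector is the simplest: it declares double talk whenever the near-end magnitude exceeds a fraction of the recent far-end peak. A minimal sketch (the threshold and window values are illustrative assumptions, a 0.5 threshold corresponding to roughly 6 dB of echo-path attenuation):

```python
import numpy as np

def geigel_dtd(near, far, window=64, threshold=0.5):
    """Geigel double-talk detector: flag sample n as double talk when
    |near[n]| exceeds `threshold` times the largest far-end magnitude
    over the last `window` samples."""
    flags = np.zeros(len(near), dtype=bool)
    for n in range(len(near)):
        recent_far = np.abs(far[max(0, n - window + 1):n + 1])
        flags[n] = abs(near[n]) > threshold * recent_far.max()
    return flags
```

While a flag is raised, the adaptive filter coefficients would be frozen to prevent divergence.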

Vineeta Das, Asutosh Kar, Mahesh Chandra
A Comparative Study of Iterative Solvers for Image De-noising

In this paper we propose and compare two iterative solvers, used with the Crank-Nicolson finite difference method, for image denoising via partial differential equation (PDE) models such as the bilateral-filter-based model. The solvers considered are Successive-over-Relaxation (SOR) and an advanced solver, the Hybrid Bi-Conjugate Gradient Stabilized (Hybrid BiCGStab) method. We demonstrate that the proposed Hybrid BiCGStab solver yields better denoising performance in terms of MSSIM and PSNR, and is more efficient than the existing SOR solver and a state-of-the-art approach.
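A Crank-Nicolson discretisation produces one linear system per time step; the SOR baseline can be sketched generically as follows (an illustrative solver for a diagonally dominant system, not the paper's code):

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10000):
    """Successive-over-Relaxation for Ax = b.
    omega in (0, 2) controls over-relaxation; omega = 1 is Gauss-Seidel."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            # Sum of off-diagonal contributions using latest values.
            sigma = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```

BiCGStab-type solvers avoid this per-row sweep and typically converge in far fewer matrix-vector products on the same systems.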

Subit K. Jain, Rajendra K. Ray, Arnav Bhavsar
Assessment of Urbanization of an Area with Hyperspectral Image Data

This study attempts to apply time-series hyperspectral data to detect land-cover change and assess the urbanization of a small town in West Bengal, India. The objective is to utilize the potential of hyperspectral data to extract spectral signatures of the urban components of the study area using an automated endmember extraction algorithm, classify the area using Linear Spectral Unmixing (LSU), and assess the rate of urbanization that has taken place in the region over a period of 2 years. The automated target generation algorithm has successfully identified the pure spectra of 9 urban features, after which their individual abundances in the hyperspectral imagery have been estimated. Post-classification, the classes have been compared on a pixel-by-pixel basis and the increase/decrease in pixels noted. The change thus detected indicates a significant depletion of green cover and water bodies in the study area, with an increase in concrete cover over the years, indicating rapid urbanization.
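At its core, Linear Spectral Unmixing solves a least-squares problem per pixel: the pixel spectrum is modelled as a mixture of endmember spectra, and the mixing coefficients are the abundances. A minimal unconstrained sketch (real LSU normally adds non-negativity and sum-to-one constraints, omitted here for brevity):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Unconstrained linear spectral unmixing by least squares.
    `endmembers` has shape (bands, materials); returns one abundance
    per material."""
    abund, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return abund
```

Applying this to every pixel yields one abundance map per urban feature, which is what the study compares across dates.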

Somdatta Chakravortty, Devadatta Sinha, Anil Bhondekar
Range Face Image Registration Using ERFI from 3D Images

In this paper, we present a novel and robust approach for 3D face registration based on the Energy Range Face Image (ERFI). The ERFI is a frontal face model for each individual in the database and can be considered a mean frontal range face image per person; it thus preserves the total energy of the frontal range face images. For registration, an interest point, or landmark, the nose tip (or 'pronasal'), is extracted from the face surface. This landmark is then exploited to correct oriented faces by applying a 3D geometrical rotation with respect to the ERFI model. During the error calculation phase, the Manhattan distance between the localized 'pronasal' landmark on the face image and that of the ERFI model is computed in Euclidean space. The accuracy is quantified by selecting cut-points 'T' on the measured Manhattan distances along yaw, pitch and roll. The proposed method has been tested on the Frav3D database and achieved 82.5% accurate pose registration.

Suranjan Ganguly, Debotosh Bhattacharjee, Mita Nasipuri
Emotion Recognition for Instantaneous Marathi Spoken Words

This paper explores emotion recognition from Marathi speech signals, using feature extraction techniques and a classifier to classify Marathi speech utterances according to their emotional content. Different types of speech feature vectors capture different emotions, owing to their corresponding natures. The emotions are categorized as Anger, Happy, Sad, Fear, Neutral and Surprise. Mel Frequency Cepstral Coefficient (MFCC) feature parameters extracted from Marathi speech signals depend on the speaker, the spoken word, and the emotion. Gaussian Mixture Models (GMM) are used to build the emotion classification model. Each subject/speaker spoke 7 Marathi words, namely Aathawan, Aayusha, Chamakdar, Iishara, Manav, Namaskar, and Uupay, with 6 different emotions. For the experimental work we created a database of 924 Marathi speech utterances in total, from which the overall emotion recognition accuracy obtained using MFCC and GMM is 84.61% for our Emotion Recognition for Marathi Spoken Words (ERFMSW) system. The average accuracies for male and female speakers are 86.20% and 83.03%, respectively.
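The GMM decision rule reduces, for a single-component model per emotion, to picking the emotion whose Gaussian gives the utterance's MFCC frames the highest total log-likelihood. A simplified sketch of that rule (feature dimensions and emotion labels here are hypothetical, not the ERFMSW code):

```python
import numpy as np

def fit_diag_gaussian(frames):
    """Per-emotion model: mean/variance of MFCC frames
    (i.e. a one-component diagonal GMM)."""
    return frames.mean(axis=0), frames.var(axis=0) + 1e-6

def log_likelihood(frames, model):
    """Total log-likelihood of an utterance's frames under one model."""
    mu, var = model
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (frames - mu) ** 2 / var)

def classify(frames, models):
    """GMM decision rule: the emotion with the highest log-likelihood wins."""
    return max(models, key=lambda e: log_likelihood(frames, models[e]))
```

A real GMM would use several mixture components per emotion trained with EM, but the maximum-likelihood decision at test time is the same.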

Vaibhav V. Kamble, Ratnadeep R. Deshmukh, Anil R. Karwankar, Varsha R. Ratnaparkhe, Suresh A. Annadate
Performance Evaluation of Bimodal Hindi Speech Recognition under Adverse Environment

Designing a robust Human-Computer Interaction (HCI) system is a challenging task, especially for automatic speech recognition (ASR) under unfriendly environments. This paper proposes an ASR system which uses bimodal information (i.e., speech along with visual input), resulting in improved robustness. In this research, static and dynamic (∆) audio features are extracted using Mel-Frequency Cepstral Coefficients (MFCC). The visual feature is extracted using the Two-Dimensional Discrete Wavelet Transform (2D-DWT). Audio-video recognition is performed over different combinations of visual features using HMM (Hidden Markov Model) under clean and noisy environmental conditions. The Aligarh Muslim University Audio Visual (AMUAV) Hindi database has been chosen as the baseline data. In addition, noisy speech signal performance is evaluated for different Signal-to-Noise Ratios (SNR: 30 dB to -20 dB). Finally, the addition of visual information to ASR is reported to increase accuracy when working in smart assistive environments, i.e., for applications which may not have noise-free background conditions.

Prashant Upadhyaya, Omar Farooq, M. R. Abidi, Priyanka Varshney
Extraction of Shape Features Using Multifractal Dimension for Recognition of Stem-Calyx of an Apple

In this paper, we introduce a novel approach to recognize the stem-calyx of an apple using multifractal dimension. Our method comprises preprocessing using a bilateral filter, segmentation of the apple using the grow-cut method, and multi-threshold segmentation to detect candidate objects such as the stem-calyx and small defects. The shape features of the detected candidate objects are extracted using multifractal dimension, and finally stem-calyx regions are recognized and differentiated from true defects using an SVM classifier. The proposed algorithm is evaluated in experiments conducted on an apple image dataset, and the results exhibit considerable improvement in recognition of the stem-calyx region compared to existing techniques.

S. H. Mohana, C. J. Prabhakar
An Approach to Design an Intelligent Parametric Synthesizer for Emotional Speech

A speech synthesizer is an artificial system that produces speech, but the generation of emotional speech is a difficult task. Though many researchers have worked in this area for a long period, it remains a challenging problem in terms of accuracy. The objective of our work is to design an intelligent model for emotional speech synthesis. An attempt is made to build such a system using a rule-based fuzzy model. Initially, the required parameters for the model are chosen and extracted as features, which are analyzed for each speech segment. At the synthesis level, the model is trained with these parameters and then tested. The test results show its performance.

Soumya Smruti, Jagyanseni Sahoo, Monalisa Dash, Mihir N. Mohanty
Removal of Defective Products Using Robots

This paper addresses the utility of an intelligent autonomous robotic arm for automatic removal of defective products in an industry. The task is performed in two steps: finding the defective product with digital image processing, and removing the defective part from the products. The image is obtained at regular intervals and compared with a standard image. The defective product is identified based on a threshold on the difference between the real image and the standard image. After detection, the defective product is sorted out with the help of the robotic arm and placed in the defective lot.
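The thresholded image-difference step can be sketched as follows (the tolerance values are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def is_defective(product_img, reference_img, pixel_tol=30, area_frac=0.01):
    """Flag a product when more than `area_frac` of its pixels deviate
    from the reference (standard) image by more than `pixel_tol` grey
    levels."""
    diff = np.abs(product_img.astype(int) - reference_img.astype(int))
    return (diff > pixel_tol).mean() > area_frac
```

A positive result would then trigger the robotic arm to pick the product and place it in the defective lot.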

Birender Singh, Mahesh Chandra, Nikitha Kandru
Contour Extraction and Segmentation of Cerebral Hemorrhage from MRI of Brain by Gamma Transformation Approach

Computer-aided diagnosis (CAD) systems have been the focus of several research endeavors; they are based on the idea of processing and analyzing images of different hemorrhages of the brain for quick and accurate diagnosis. We use a gamma transformation approach with a preprocessing step to segment MRI scans of the brain and detect whether a hemorrhage exists, along with its type and position. The implemented system consists of several stages that include artefact and skull elimination as preprocessing, image segmentation, and location identification. We compare the results of the conducted experiments with reference images, and they are very promising both visually and mathematically.
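The gamma transformation at the heart of the approach is the standard power-law intensity mapping; a minimal sketch for an 8-bit grey image:

```python
import numpy as np

def gamma_transform(img, gamma):
    """Power-law (gamma) mapping on a [0, 255] grey image:
    s = 255 * (r / 255) ** gamma.
    gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    return (255.0 * (img / 255.0) ** gamma).astype(np.uint8)
```

Choosing gamma appropriately stretches the intensity range in which the hemorrhage lies, making it easier to separate by segmentation.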

Sudipta Roy, Piue Ghosh, Samir Kumar Bandyopadhyay
An Innovative Approach to Show the Hidden Surface by Using Image Inpainting Technique

The research study presented in this paper focuses on the problems and lacunae of existing image inpainting techniques and shows how the proposed approach remedies them. The former part of the paper discusses what image inpainting means and where it finds implementations and applications in the real world. The purpose of image inpainting and the various existing algorithms/methods available to inpaint an image are then highlighted as part of the literature survey. The prime focus is the authors' approach to removing a key disadvantage of existing image inpainting techniques: if a small object is hidden behind a bigger object, available inpainting techniques cannot reconstruct the hidden object, because when the bigger front object is selected as the target region, the hidden object behind it is also removed during the object removal phase of inpainting. In the final phase of the paper, descriptive images and live examples, the methodology of the proposed technique, and the authors' algorithms for removing this lacuna are discussed. With the proposed concept, the resultant image renders the bigger front object transparent, leaving only the hidden smaller object visible on the background image.

Rajat Sharma, Amit Agarwal
Fast Mode Decision Algorithm for H.264/SVC

H.264/SVC is the extension of H.264/AVC applicable to environments that demand video streaming. This paper presents an algorithm that reduces computational complexity while maintaining coding efficiency by determining the mode quickly. Our algorithm terminates the mode search using a probability model for both intra-mode and inter-mode in the lower and higher level layers of a Macro Block (MB). The estimated Rate Distortion Cost (RDC) of modes among layers is used to determine the best mode of each MB. The algorithm saves about 26.9% of the encoding time compared with the JSVM reference software, with minimal degradation in PSNR.

L. Balaji, K. K. Thyagharajan
Recognizing Handwritten Devanagari Words Using Recurrent Neural Network

Recognizing lines of handwritten text is a difficult task. Most recent progress in the field has been made either through improved preprocessing or through advances in language modeling. Most systems rely on hidden Markov models, which have been used for decades in speech and handwriting recognition. This paper therefore proposes an approach based on a type of recurrent neural network specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies. Recurrent neural networks (RNN) have been successfully applied to the recognition of cursive handwritten documents in scripts like English and Arabic. Here, a regular recurrent neural network is extended to a bidirectional recurrent neural network (BRNN).

Sonali G. Oval, Sankirti Shirawale
Homomorphic Filtering for Radiographic Image Contrast Enhancement and Artifacts Elimination

The contrast of radiographic images is provided at the physical level by anti-scatter grids and is usually further improved at the image processing stage. Known contrast improvement methods are mainly based on non-linear image manipulations that may leave residual artifacts in processed images. In this paper an artifact-free approach is proposed for radiographic image filtering, which remains an open problem. The proposed algorithm is based on the design and application of a homomorphic equalizer for image contrast enhancement, sharpening and artifact elimination. Experimental results are discussed, concluding with a description of advantages over existing approaches.

Igor Belykh
A Framework for Human Recognition Based on Locomotive Object Extraction

Video-based moving object detection has of late gained momentum in research. It has extensive application areas: intelligent human-computer interaction, intelligent transportation, visual robot navigation, and clarity in steering systems. It is also used in various other fields such as diagnosis, image compression, 3D image reconstruction, video image retrieval and so on. Since surveillance of human movement is subjective, human objects must be precisely detected; the proposed framework performs human detection based on Locomotive Object Extraction. The issues of illumination changes and crowded human images are addressed. Detection relies on a feature that identifies the head and shoulders, which is the locus of the proposed framework. The detection of individual objects has improved appreciably over recent years, but environmental factors and crowd-scene detection still make moving object detection significantly difficult. The proposed framework subtracts the background using a Gaussian mixture model, and the area of significance is extracted and transformed to a black-and-white picture by binarization. Then a Wiener filter is employed to scale the background level and optimize the results for the object in motion, after which the object is finally identified. The performance at every stage is measured and evaluated; compared stage by stage against the existing system, the performance of the proposed framework proves satisfactory.

C. Sivasankar, A. Srinivasan
Abnormal Event Detection in Crowded Video Scenes

Intelligent video investigation is of great interest in industry applications because of the increasing demand to reduce the manpower needed to analyze large-scale video data. Detecting abnormal events in crowded video scenes poses several difficulties. First, a large number of moving persons can easily distract a local anomaly detector. Second, it is difficult to model abnormal events in real time. Third, the unavailability of sufficient training samples of abnormal events makes robust detection hard. Our proposed system provides a novel approach to detect anomalies in crowded video scenes. We first divide the video frame into patches and apply the Difference-of-Gaussian (DoG) filter to extract edges. Then we compute the Multiscale Histogram of Optical Flow (MHOF) and the Edge Orientation Histogram (EOH) for every patch. Using Normalized Cuts (NCuts) and Gaussian Expectation-Maximization (GEM) techniques, similar patches are clustered and assigned a motion context. Finally, a k-Nearest Neighbour (k-NN) search identifies abnormal activity within the crowded scenes. Our spatio-temporal anomaly search system improves the accuracy and computation time for detection of irregular patterns. The technique is useful for surveillance and industry-specific applications such as public transportation, law enforcement, etc.

V. K. Gnanavel, A. Srinivasan
Comparative Analysis and Bandwidth Enhancement with Direct Coupled C Slotted Microstrip Antenna for Dual Wide Band Applications

This paper presents a direct coupled C-slot loaded microstrip antenna which can yield wider bandwidth for WLAN/WiMax applications. The different antenna geometries are simulated using IE3D Zeland simulation software for a comparative analysis of bandwidth. The recent development of wireless internet access in the WLAN (Wireless Local Area Network) frequency bands in the range 2.40-2.50 GHz has driven demand for dual-band antennas that can be implemented in stationary and mobile devices. The proposed antenna has dual frequency bands with fractional bandwidths of 3.38% (1.392-1.44 GHz) and 69.5% (1.733-3.58 GHz), which are suitable for WLAN/WiMax applications. The gain has been improved up to 5.11 dBi, the directivity to 5.39 dBi and the efficiency to 97.216%. The proposed directly coupled microstrip antenna is fed by a 50 Ω microstrip feed line.

Rajat Srivastava, Vinod Kumar Singh, Shahanaz Ayub
Quality Assessment of Images Using SSIM Metric and CIEDE2000 Distance Methods in Lab Color Space

Advances in imaging and computing hardware have led to an explosion in the use of color images in image processing, graphics and computer vision applications across various domains such as medical imaging, satellite imagery, document analysis and biometrics, to name a few. However, these images are subjected to a wide variety of distortions during acquisition, subsequent compression, transmission, processing and reproduction, which degrade their visual quality. Hence, objective quality assessment of color images has emerged as one of the essential operations in image processing. During the last two decades, efforts have been made to design an image quality metric that can be computed simply yet accurately reflects the subjective quality of human perception. In this paper, we evaluate the quality assessment of color images using the CIE-proposed Lab color space, which is considered to be perceptually uniform. In addition, we use two different approaches to quality assessment, namely metric based (SSIM) and distance based (CIEDE2000).
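In its single-window (global statistics) form, the SSIM metric applied channel-wise reduces to the following sketch; production SSIM averages this quantity over local sliding windows rather than computing it once per image:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM between two same-shaped arrays with dynamic
    range L, using the standard constants C1 = (0.01 L)^2, C2 = (0.03 L)^2."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For Lab-space evaluation this would be computed on each channel (with the appropriate dynamic range per channel) and combined, alongside the CIEDE2000 color difference.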

T. Chandrakanth, B. Sandhya
Review Paper on Linear and Nonlinear Acoustic Echo Cancellation

In this paper a review of acoustic echo cancellation (AEC) systems based on linear and nonlinear AEC is presented. The paper covers advancements in previous research work related to the acoustic echo cancellation process, with the review organized around the adaptive algorithms used. In linear AEC systems, the non-linearity caused by loudspeakers, amplifiers and low-quality enclosures is not taken into consideration: the implementation of linear AEC is based on the assumption of linearity. The performance of echo cancellation algorithms is degraded by non-linearities in the acoustic path. Therefore, non-linear AEC systems are evaluated and reviewed along with linear AEC.

D. K. Gupta, V. K. Gupta, Mahesh Chandra
PCA Based Medical Image Fusion in Ridgelet Domain

Medical image fusion facilitates the retrieval of complementary information from medical images and has been employed diversely for computer-aided diagnosis of diseases. This paper presents a combination of Principal Component Analysis (PCA) and ridgelet transform as an improved fusion approach for MRI and CT-scan. The proposed fusion approach involves image decomposition using 2D-Ridgelet transform in order to achieve a compact representation of linear singularities. This is followed by application of PCA as a fusion rule to improve upon the spatial resolution. Fusion Factor (FF) and Structural Similarity Index (SSIM) are used as fusion metrics for performance evaluation of the proposed approach. Simulation results demonstrate an improvement in visual quality of the fused image supported by higher values of fusion metrics.
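The PCA fusion rule can be sketched as follows (an illustrative version operating directly on two flattened sub-bands, not the authors' code; in the paper it would be applied to corresponding ridgelet coefficients of the MRI and CT images):

```python
import numpy as np

def pca_fusion_weights(a, b):
    """PCA fusion rule: derive the two mixing weights from the dominant
    eigenvector of the 2x2 covariance of the flattened source sub-bands."""
    data = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()  # normalise so the weights sum to one

def pca_fuse(a, b):
    """Weighted combination of the two sources using the PCA weights."""
    w1, w2 = pca_fusion_weights(a, b)
    return w1 * a + w2 * b
```

The source carrying more variance (information) automatically receives the larger weight, which is what improves the spatial resolution of the fused image.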

Abhinav Krishn, Vikrant Bhateja, Himanshi, Akanksha Sahu
Swarm Optimization Based Dual Transform Algorithm for Secure Transaction of Medical Images

Modern healthcare systems are based on managing patients' diagnostic information through e-health. e-health refers to internet-enabled healthcare applications involving the transaction of personal health records and other internet-based services, including e-pharmacy. This paper introduces a hybrid algorithm which efficiently combines DWT-DCT-PSO for copyright protection and authentication of medical images. Particle swarm optimization is applied to the host image to find the intensities for embedding the watermark bits via the gbest solution from the objective function and the fitness function. Since the embedding strategy is tied to the intensity level, the technique falls under robust blind watermarking. The simulation results show that the proposed scheme yields good results when tested on different images and subjected to various attacks.

Anusudha Krishnamurthi, N. Venkateswaran, J. Valarmathi
Convolutional Neural Networks for the Recognition of Malayalam Characters

Optical Character Recognition (OCR), which converts scanned documents into machine-editable and searchable text formats, has an important role in information retrieval. This work focuses on the recognition part of OCR. LeNet-5, a Convolutional Neural Network (CNN) trained with gradient-based learning and the backpropagation algorithm, is used for classification of Malayalam character images. Results obtained for the multi-class classifier show that CNN performance drops when the number of classes exceeds 40. Accuracy is improved by grouping misclassified characters together: without grouping, the CNN gives an average accuracy of 75%, and after grouping, performance improves up to 92%. Inner-level classification is done using a multi-class SVM, which gives an average accuracy in the range of 99-100%.

R. Anil, K. Manjusha, S. Sachin Kumar, K. P. Soman
Modeling of Thorax for Volumetric Computation Using Rotachora Shapes

This paper presents the scope of mathematical modeling using uncommon geometric shapes for the computation of thoracic volume. The modeling has been done with Rotachora shapes for estimating and computing the fluid volume present in the thoracic area. The proposed extended model-based approach demonstrates its sensitivity in terms of volumetric variation with the act of breathing, which involves inspiration and expiration states. New models have been constructed to compute thoracic volumes, and their variations are shown with respect to thoracic impedances. Four-dimensional Rotachora shapes are taken into consideration: the human thorax is modeled as a cubinder in the first stage and as a duocylinder in the second. It is observed that the volumes vary rhythmically with the act of breathing for the considered thoracic area along with the varying thoracic impedances. The obtained results validate that the chosen models closely follow the act of breathing, and hence could be utilized for clinical purposes.

Shabana Urooj, Vikrant Bhateja, Pratiksha Saxena, Aime lay Ekuakille, Patrizia Vergalo
A Review of ROI Image Retrieval Techniques

Content based image retrieval involves extraction of global and region features of images to improve retrieval performance in large image databases. Region-based features have been shown to be more effective than global features, as they can reflect a user's specific interest with greater accuracy. However, the success of region-based methods largely depends on the segmentation technique used to automatically specify the region of interest (ROI) in the query; apart from this, the user can also specify ROIs in an image. ROI image retrieval involves the tasks of formulating a region-based query, feature extraction, indexing, and retrieval of images containing a region similar to that specified in the query. In this paper, state-of-the-art techniques for ROI image retrieval are discussed. A comparative study of these techniques, together with the pros and cons of each, is presented. The paper concludes with our views on the challenges faced by researchers and further scope for research in the area. The major goal of the paper is to provide a comprehensive reference source for researchers involved in ROI-based image retrieval.

Nishant Shrivastava, Vipin Tyagi
A Novel Algorithm for Suppression of Salt and Pepper Impulse Noise in Fingerprint Images Using B-Spline Interpolation

The quality of fingerprint images in image forensics plays a vital role in the accuracy of biometric identification and authentication systems. To suppress salt and pepper noise in fingerprint images, B-splines have been used for interpolation. In this paper, a novel and efficient two-stage algorithm is proposed for suppression of salt and pepper impulse noise at noise levels ranging from 15% to 95% using B-spline interpolation. The algorithm removes salt and pepper impulse noise from the image in the first stage; in the second stage, an edge-preserving algorithm regularizes the edges that were deformed during noise removal.

P. Syamala Jaya Sree, Prasanth Kumar Pattnaik, S. P. Ghrera
Spectral-Subtraction Based Features for Speaker Identification

Wavelet-based features in combination with Spectral Subtraction (SS) are proposed here for speaker identification in clean and noisy environments. Gaussian Mixture Models (GMMs) are used as the classifier. The identification performance of Linear Prediction Coefficient (LPC), Wavelet LPC (WLPC), and Spectral Subtraction WLPC (SS-WLPC) features is computed and compared. WLPC features show higher performance than the conventional methods in clean and noisy environments, and SS-WLPC features show further improvements over WLPC for speaker identification. A database of fifty speakers speaking ten Hindi digits is used.
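Magnitude spectral subtraction, the SS front-end, can be sketched per frame as follows (the spectral floor value is an illustrative assumption, used to avoid the negative magnitudes that plain subtraction would produce):

```python
import numpy as np

def spectral_subtract(noisy, noise_mag, floor=0.01):
    """Magnitude spectral subtraction on one frame: subtract an estimated
    noise magnitude spectrum, floor the result, and resynthesize with the
    noisy phase."""
    spec = np.fft.rfft(noisy)
    mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(noisy))
```

The cleaned frames would then be passed to the wavelet-LPC feature extraction stage before GMM scoring.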

Mahesh Chandra, Pratibha Nandi, Aparajita Kumari, Shipra Mishra
Automated System for Detection of Cerebral Aneurysms in Medical CTA Images

In the present scenario, accurate detection of cerebral aneurysms in medical images plays a crucial role in reducing the incidence of subarachnoid hemorrhage (SAH), which carries a high rate of mortality. Many non-traumatic SAH cases are caused by ruptured cerebral aneurysms, and accurate detection of these aneurysms can decrease a significant proportion of misdiagnosed cases. A scheme for automated detection of cerebral aneurysms is proposed in this study. The aneurysms are found by applying normalization and generating the Probability Density Function (PDF) for the input image; local thresholding is then used to identify appropriate aneurysm candidate regions. Feature vectors are calculated for the candidate regions based on gray-level, morphological and location-based features. A rule-based system is used to classify the candidate regions and detect cerebral aneurysms. The accuracy of the system is measured using the sensitivity parameter.

M. Vaseemahamed, M. Ravishankar
Contrast Enhancement of Mammograms Images Based on Hybrid Processing

This paper introduces a new enhancement algorithm that combines different processing techniques at different stages. The input image is a portable gray map image, and a Gaussian low-pass filter is used to decompose it into low and high frequency components. Mathematical morphological operations are applied to the low frequency components and an edge enhancement algorithm to the high frequency components; the processed components are then combined to obtain an enhanced image. The enhanced image has better contrast and edge visibility than the original, but contains noise, so a wavelet transform is used to denoise it. The denoised image is then processed using contrast limited adaptive histogram equalization (CLAHE) to improve the edge preservation index (EPI) and contrast improvement index (CII). The resulting image is finally smoothed by a guided image filter (GIF), whose edge-preserving capacity and preservation of naturalness allow better results.

The efficiency of any service or product, especially in the medical field, depends upon its applicability, which can be achieved by applying the basic principles of Software Engineering. The applicability of enhancement algorithms depends on parameters such as the peak signal-to-noise ratio (PSNR) and edge preservation index (EPI). This paper introduces a model based on a prototyping approach that highlights the details needed to aid radiologists in the earlier detection of breast cancer. It presents the design and implementation of the model, and the results are analyzed using quality metric values such as PSNR, EPI and CII.

Inam Ul Islam Wani, M. C. Hanumantharaju, M. T. Gopalkrishna
An Improved Handwritten Word Recognition Rate of South Indian Kannada Words Using Better Feature Extraction Approach

Ever since writing became part of everyday human communication, handwriting has had its own impact and popularity. Handwritten Word Recognition (HWR) is quite challenging due to heavy variations in writing style and in the size and shape of characters across writers; accuracy and efficiency are the major parameters in this field. With the progress of technology, human-computer interaction has become a mandatory part of fast, dynamic everyday activities. This paper therefore presents an effective process for handwritten word recognition, carried out in three stages. The first stage, pre-processing, removes unwanted data such as noise; the second stage extracts the best features, such as sharp corners, curves and loops; and the third stage classifies the image into the correct matching class using a Euclidean distance based classifier. The process is implemented, and the results indicate improved accuracy and an efficient recognition rate.

M. S. Patel, Sanjay Linga Reddy, Krupashankari S. Sandyal
An Efficient Way of Handwritten English Word Recognition

Handwriting recognition has been one of the most fascinating and challenging research areas in image processing and pattern recognition in recent years, motivated by the fact that for severely degraded documents a segmentation-based approach produces a very poor recognition rate: the quality of the original documents does not allow them to be recognized with high accuracy. The aim of this research is therefore to produce a system for successful recognition of handwritten words that remains feasible even in noisy environments. This paper presents a method that performs pre-processing steps on handwritten images, such as skew and slant correction, baseline estimation, and horizontal and vertical scaling, and uses structural features for feature extraction. A Euclidean distance method is then applied for classification, producing the single matching word with the minimum difference value. The sample data set encompasses the names of 30 districts of the Karnataka state of India. The method is useful for postal addresses, script recognition and systems that require handwriting data entry.
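The Euclidean-distance classification step used by both of the word-recognition papers above can be sketched as nearest-neighbor matching of feature vectors. The feature vectors and district names below are illustrative placeholders, not the paper's actual data.

```python
import math

# Minimal sketch of a Euclidean-distance classifier: each word image is
# reduced to a feature vector, and the test vector is matched to the
# stored template with minimum distance. Vectors here are made up.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(test_vec, templates):
    """templates maps a word label to its stored feature vector."""
    return min(templates, key=lambda w: euclidean(test_vec, templates[w]))

templates = {"Mysore": [0.2, 0.8, 0.1], "Mandya": [0.7, 0.3, 0.5]}
print(classify([0.25, 0.75, 0.2], templates))  # -> Mysore
```

In practice the templates would be structural features (corners, loops, baseline statistics) extracted from training images of each district name.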

M. S. Patel, Sanjay Linga Reddy, Anuja Jana Naik
Text Detection and Recognition Using Camera Based Images

The increasing availability of high-performance, low-priced, portable digital imaging devices has created an opportunity to supplement traditional scanning for document image acquisition. Cameras attached to cellular phones, wearable computers, and standalone image or video devices are highly mobile, easy to use and able to capture images on the spot, making them much more versatile than desktop scanners. Should robust solutions for analyzing documents captured with such devices become available, there will clearly be demand in many domains. Camera-captured images can suffer from low resolution, perspective distortion and blur, as well as a complex layout and interaction of content and background. In this paper, we propose an efficient text detection method based on the Maximally Stable Extremal Region (MSER) detector for detecting regions containing text in an image. This is a common task performed on unstructured scenes, for example when capturing video from a moving vehicle to alert a driver about a road sign. Segmenting out the text from a cluttered scene greatly helps with further tasks such as optical character recognition (OCR).

H. Y. Darshan, M. T. Gopalkrishna, M. C. Hanumantharaju
Retinal Based Image Enhancement Using Contourlet Transform

In medical image processing, retinal image enhancement is a challenging issue: it reveals unseen details of a retinal image, and in many applications it is used to address problems such as noise, blurring and degradation. Many alternative enhancement techniques exist to improve the visual quality of retinal images, each suited to a specific application. This paper presents an overview of various retinal image enhancement techniques that process an original retinal image to obtain an enhanced image suitable for a specific application. The method used in this paper is evaluated with the PSNR image quality measure over several retinal images obtained from datasets such as DRIVE and STARE, and a few others provided by local medical experts. The comparative experimental results indicate that the proposed enhancement method gives a better outcome.

P. Sharath Chandra, M. C. Hanumantharaju, M. T. Gopalakrishna
Detection and Classification of Microaneurysms Using DTCWT and Log Gabor Features in Retinal Images

Diabetic Retinopathy (DR) is one of the major causes of blindness in diabetic patients; early detection is required to reduce the visual impairment it causes. Microaneurysms are the first clinical sign of diabetic retinopathy, and their robust detection in retinal fundus images is critical for developing an automated system. In this paper we present a new technique for detection and localization of microaneurysms using the Dual-Tree Complex Wavelet Transform and log-Gabor features. Retinal blood vessels are eliminated using minor- and major-axis properties, and correlation with the Gabor features is performed on the images to detect microaneurysms. Feature vectors are extracted from candidate regions based on texture properties, and a support vector machine classifier determines whether the detected regions are microaneurysms. Accuracy of the algorithm is evaluated using the sensitivity and specificity parameters.

Sujay Angadi, M. Ravishankar
Classifying Juxta-Pleural Pulmonary Nodules

Lung cancer is a disease in which abnormal cells multiply and grow into a tumor in the human lung; it is the most dangerous and widespread cancer in the world. Because treatment depends on the stage at which cancer cells are discovered, early detection plays an essential role in avoiding serious advanced stages and reducing the disease's spread. Our lung cancer detection system detects and recognizes juxta-pleural pulmonary nodules, which are attached to the wall of the lung, in four stages: obtaining the ROI (Region Of Interest), segmentation, feature extraction and classification.

CT (Computed Tomography) is considered the best modality for the diagnosis of lung cancer. The ROI can be selected either manually or automatically; automated ROI retrieval is preferred, as manual selection is tedious and time consuming because the operator must go through the dataset slice by slice and frame by frame. A ray-casting algorithm is used to segment the nodule, and neural networks classify the nodules appropriately.

K. Sariya, M. Ravishankar
The Statistical Measurement of an Object-Oriented Programme Using an Object Oriented Metrics

Object-oriented design is more powerful than function-oriented design. Software was previously developed using a functional or structural approach, but under today's high quality demands, traditional metrics (cyclomatic complexity, lines of code, comment percentage) no longer suffice. Object-oriented metrics promise to reduce cost and maintenance effort by serving as early predictors of software faults, and object-oriented analysis and design brings benefits such as reusability and decomposition of problems into easily understandable objects. This paper presents the qualities of different object-oriented metrics along different dimensions (size, complexity, quality, reliability, etc.). Object-oriented metrics can be used to analyze the complexity of any object-oriented language (Java, C++, C#). We have taken different sets of programs written in C++ and Java and conclude that Java dominates C++ when measuring software complexity, quality and project size estimation.

Rasmita Panigrahi, Sarada Baboo, Neelamadhab Padhy
Applicability of Software Defined Networking in Campus Network

This research article focuses on the application of the OpenFlow protocol, a very useful milestone that lets researchers run experimental protocols in the networks they use every day. OpenFlow builds on the traditional Ethernet switch, with an internal flow table and a standardized interface for adding and removing flow entries. The primary focus of our research is to encourage networking vendors to include OpenFlow capability in their switch-like products for deployment in institute- or university-level campuses. We regard OpenFlow as a pragmatic compromise: on one side it allows researchers to execute their experiments, while on the other, vendors need not disclose the internal workings of their switches. In other words, OpenFlow allows researchers to evaluate their ideas in real-world traffic settings; hence it has emerged as a useful campus component in proposed large-scale test beds such as the Global Environment for Networking Innovations (GENI).

Singh Sandeep, R. A. Khan, Agrawal Alka
Author-Profile System Development Based on Software Reuse of Open Source Components

This paper demonstrates the contribution of simple open source tools to the development of a highly efficient author profiling system, which determines the age and gender of an author from the authored text itself. With the rapid growth of the Web, the number of social websites has increased manifold, so it becomes necessary for security agencies and intelligence experts to keep track of malicious activity by users on the Web (such as pedophiles or security attacks) by monitoring their profiles and flagging them if necessary. Rather than building the system from scratch, Software Engineering provides a Component Based Methodology (CBM) that permits the reuse of various components, helping us achieve better quality software in a short span of time and free of cost. Significant differences exist in the way males/females and younger/older people write, and we illustrate in detail how the system exploits these differences, building on the architecture of the CBM.

Derrick Nazareth, Kavita Asnani, Okstynn Rodrigues
Software and Graphical Approach for Understanding Friction on a Body

This paper aims to explain the basic physics concept of friction more clearly by highlighting the behavior of a body in motion under the effect of friction. It combines physics and mathematics through equations and graphs: the investigation examines the characteristics of a moving body with attention to friction, using both a graphical approach and a quantitative study. The research establishes certain facts and points about friction and derives equations that help in understanding the concept; a simple demonstration is also provided. MATLAB is used to draw the graphs, and an algorithm has been framed for step-by-step understanding and clarity.

Molla Ramizur Rahman
An Investigation on Coupling and Cohesion as Contributory Factors for Stable System Design and Hence the Influence on System Maintainability and Reusability

Complexity is an inherent property of software; measuring it and keeping it under control is more logical than practical, and since quality is directly tied to complexity, a quantitative measure is expected. In the software industry, software quality depends on the quality of each phase of development: as the size of the requirements increases, the complexity of the design phase increases, which has an adverse effect on software stability. The fundamental design need in Object Oriented Methodology (OOM) is well-defined modules and their inter-connectivity, namely cohesion and coupling. The structure of such an artefact is expected to be simple, since it influences stability and thereby module reusability and maintainability. This paper encompasses an investigation of coupling and cohesion, the major design decision factors, and their influence on maintainability and reusability through design stability. The paper provides hypothetical support for the influence of coupling and cohesion on maintainability and reusability, and as part of a thorough literature survey it also identifies further research interests in the same field. The work would contribute to designing high quality products by which industries sustain themselves in the competitive market.

U. S. Poornima, V. Suma
Application of Component-Based Software Engineering in Building a Surveillance Robot

In this paper, the application of the Component-Based Software Engineering (CBSE) methodology to the development of a robotic system is documented. The robot's movements can be controlled remotely with the help of a software application, and it is also capable of streaming live video while moving. CBSE emphasizes developing a new system from pre-built components, so it is suitable for robotic systems, where a large number of such components are used and there is wide scope for their reuse. This paper describes, in detail, each phase of the robot's development and demonstrates the suitability of CBSE for such systems. The surveillance robot was successfully built using this software development methodology and worked well, accepting movement instructions from the software application and capturing video of the environment.

Chaitali More, Louella Colaco, Razia Sardinha
Comprehension of Defect Pattern at Code Construction Phase during Software Development Process

Ever since the introduction of computers, technological advancement has taken an exponential form. Developing software that attains total customer satisfaction is thus a mandatory need of any software industry, and delivering defect-free software is one of the primary requisites for achieving it. In order to comprehend defect facets, it is essential to know the defect pattern at various phases of software development. This paper therefore provides a comprehensive analysis of defect-pattern occurrence, obtained through a case study carried out in one of the sampled leading software industries. This empirical investigation sheds light for project personnel on formulating effective strategies to reduce defect occurrences and thereby improve the quality, productivity and sustainability of software products.

Bhagavant Deshpande, Jawahar J. Rao, V. Suma
Pattern Analysis of Post Production Defects in Software Industry

Software has a strong influence on all occupations, and the key challenge for an IT industry is to engineer a software product with minimum post-deployment defects. Software Engineering approaches help engineers develop quality software within the scheduled time, cost and resources in a systematic manner, but incorporating effective defect management strategies using the software engineering discipline needs complete and widespread knowledge of the various aspects of defects. The purpose of this paper is to provide a pattern analysis of post-production defects based on empirical observations of several mainframe projects developed in one of the leading software industries. The inferences obtained from this investigation indicate the existence of show-stopper severity defects and their associated root causes. This awareness enables the development team to reduce residual defects and improve pre-production quality, further aiding the attainment of total customer satisfaction.

Divakar Harekal, Jawahar J. Rao, V. Suma
Secure Efficient Routing against Packet Dropping Attacks in Wireless Mesh Networks

Wireless Mesh Networks (WMNs) are susceptible to attacks and face various other issues, such as an open peer-to-peer network topology, a shared wireless medium, stringent resource constraints, and a highly dynamic environment; hence, it becomes critical to detect major attacks against the routing protocols of such networks while also providing good network performance. In this paper, we address two severe packet dropping attacks that cause serious performance degradation in wireless mesh networks: misrouting and power control attacks. To mitigate these attacks and enhance the performance of WMNs, we propose a new secure and efficient routing protocol called Secure Efficient Routing against Packet Dropping Attacks (SERPDA). An extended local monitoring system, based on observing behavior patterns of neighboring nodes and checking their capacity, is implemented to defend against packet dropping attacks. To improve performance, we use additional metrics besides the usual ones for route selection. With the help of a network simulator, we show that the proposed protocol efficiently mitigates the attacks and also provides a more optimal path by considering load balancing, link quality, successful transmission rates, and the number of hops.

T. M. Navamani, P. Yogesh
Key Management with Improved Location Aided Cluster Based Routing Protocol in MANETs

Security is the main challenge in MANETs, and key management is a crucial part of it; authentication with key generation and distribution is a complicated task. In this paper we introduce a key management scheme for ILCRP, the Improved Location aided Cluster based Routing Protocol, to make ILCRP secure. The paper aims to provide better security for ILCRP using Quantum Key Distribution, which is used to establish secure communication among the nodes. ILCRP is a stable clustering protocol, appropriate for large numbers of nodes where all nodes are GPS-enabled, and achieves a higher packet delivery ratio. Simulation results demonstrate ILCRP with ILCRP-IDS in terms of packet delivery ratio, end-to-end delay and energy consumption.

Yogita Wankhade, Vidya Dhamdhere, Pankaj Vidhate
Co-operative Shortest Path Relay Selection for Multihop MANETs

In this paper, we propose a simple Shortest Distance Path relay selection criterion employing the Decode and Forward (DF) cooperative protocol. We consider a network with a single source, a single destination and N candidate relays distributed uniformly within the coverage area, under a flat Rayleigh fading channel with a log-distance path loss model. In Shortest Distance Path relay selection, we select the relay nearest to the assumed Line of Sight (LOS). In the Reactive Best Expectation criterion, relays that minimize total transmission time are selected for cooperation after the source transmission; in the Proactive Opportunistic criterion, the best relay, which maximizes mutual information capacity, is selected before the source transmission. The proposed criterion was compared with the Reactive Best Expectation and Proactive Opportunistic criteria, and we further analyzed the energy consumption, throughput and delay of the proposed system. The simulation results show that the proposed Shortest Distance Path relay selection consumes less energy and has the shortest delay compared to the Reactive Best Expectation and Proactive Opportunistic methods.
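The Shortest Distance Path criterion can be sketched as a point-to-line distance computation: among the candidate relays, pick the one closest to the straight segment (assumed LOS) joining source and destination. The coordinates below are illustrative, not the paper's simulation setup.

```python
import math

# Sketch of Shortest Distance Path relay selection: choose the relay
# with minimum perpendicular distance to the line through source S and
# destination D (the assumed LOS). Coordinates are made-up examples.

def dist_to_line(p, s, d):
    (px, py), (sx, sy), (dx, dy) = p, s, d
    num = abs((dy - sy) * px - (dx - sx) * py + dx * sy - dy * sx)
    return num / math.hypot(dx - sx, dy - sy)

def select_relay(relays, s, d):
    return min(relays, key=lambda r: dist_to_line(r, s, d))

S, D = (0.0, 0.0), (100.0, 0.0)
relays = [(30.0, 12.0), (55.0, 3.0), (70.0, -20.0)]
print(select_relay(relays, S, D))  # -> (55.0, 3.0)
```

With S and D on the x-axis the perpendicular distances are simply |y|, so the relay at y = 3 wins.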

Rama Devi Boddu, K. Kishan Rao, M. Asha Rani
Intelligent Intrusion Detection System in Wireless Sensor Network

Wireless Sensor Networks (WSNs) are formed from small, tiny nodes that are sometimes densely deployed in open and unprotected environments. In many applications, particularly military ones, WSNs are of interest to adversaries and are susceptible to different types of attack. Although preventive measures are applied to protect against attacks, some attacks cannot be prevented using known measures. Besides keeping the intruder from damaging the network, an intrusion detection system (IDS) can acquire information about attack techniques and help develop prevention systems. In this paper we propose an intelligent IDS algorithm and simulate it in the Castalia simulator. Our simulation covers scenarios such as attack period vs. packets dropped and attack period vs. packets received, and shows that under attack our IDS substantially improves the performance of the network in both cases.

Abdur Rahaman Sardar, Rashmi Ranjan Sahoo, Moutushi Singh, Souvik Sarkar, Jamuna Kanta Singh, Koushik Majumder
Secure Routing in MANET through Crypt-Biometric Technique

A dynamic, self-configuring, multi-hop network without any fixed infrastructure is called a mobile ad-hoc network (MANET). The main drawback of such networks is the occurrence of various attacks, such as unauthorized data modification and impersonation, which affect their performance. Biometric perception is regarded as a novel method to protect security in various networks by involving exclusive identification features; its attainment depends upon image procurement and the biometric perception system. Simulation as well as experimental results signify that the proposed method achieves better performance parameter values for various mobile ad-hoc networks.

Zafar Sherin, M. K. Soni
Remote Login Password Authentication Scheme Using Tangent Theorem on Circle

A remote password authentication scheme based on a circle is proposed in this paper. In this scheme, we use simple tangent results, such as the secant-tangent theorem, to authenticate the user and the server. The security of the scheme depends on the tangent points located in a plane associated with the circle and tangent line. In our scheme, a legal user can freely choose and change his password using his smart card.
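The geometric relation the scheme builds on can be verified numerically. This is a worked example of the secant-tangent theorem itself, not the paper's protocol: for an external point P, the tangent length t satisfies t² = PA · PB, where the secant through P meets the circle at A and B. The circle and points below are illustrative.

```python
import math

# Worked check of the secant-tangent theorem: t^2 = PA * PB for an
# external point P, tangent length t, and secant intersection points
# A, B. Values are illustrative, not the scheme's actual parameters.

def tangent_length(p, center, r):
    d = math.dist(p, center)
    return math.sqrt(d * d - r * r)   # Pythagoras: tangent ⟂ radius

center, r = (0.0, 0.0), 5.0
p = (13.0, 0.0)                       # external point
a, b = (5.0, 0.0), (-5.0, 0.0)        # secant through P along the x-axis
t = tangent_length(p, center, r)
print(abs(t * t - math.dist(p, a) * math.dist(p, b)) < 1e-9)  # True
```

Here t² = 13² − 5² = 144 and PA · PB = 8 · 18 = 144, so the identity holds exactly.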

Shipra Kumari, Hari Om
A Survey of Security Protocols in WSN and Overhead Evaluation

There has been widespread growth in the area of wireless sensor networks, mainly because of the tremendous possibility of using them in a wide spectrum of applications such as home automation, wildlife monitoring, defense, medical applications and so on. However, due to the inherent limitations of sensor networks, commonly used security mechanisms are hard to implement in these networks. For this very reason, security becomes a crucial issue, and these networks face a wide variety of attacks from the physical layer right up to the application layer. This paper presents a survey that investigates the overhead of implementing some common security mechanisms, viz. SPINS, TinySec and MiniSec, and also the computational overhead of implementing three popular symmetric encryption algorithms, namely RC5, AES and Skipjack.

Shiju Sathyadevan, Subi Prabhakaranl, K. Bipin
DFDA: A Distributed Fault Detection Algorithm in Two Tier Wireless Sensor Networks

Detection of faulty relay nodes in a two-tier wireless sensor network (WSN) is an important issue. In this paper, we present a distributed fault detection algorithm for the upper tier of a cluster based WSN. Any faulty relay node is identified by its neighbors on the basis of the neighboring table associated with them. Time redundancy is used to tolerate transient faults and to minimize false alarms. The algorithm has O(m) message complexity in the worst case for a WSN with m relay nodes. Simulation results are presented and analyzed with various performance metrics, including detection accuracy and false alarm rate.
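The time-redundancy idea can be sketched as a majority vote over repeated observation rounds: a neighbor is flagged faulty only if it misbehaves in a majority of rounds, which masks one-off transient faults. The number of rounds and the round outcomes below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of time redundancy in distributed fault detection: flag a
# neighbor as faulty only if it misbehaves in a majority of q rounds,
# so transient glitches do not raise false alarms. Values are made up.

def is_faulty(observations, q):
    """observations: per-round booleans (True = misbehavior observed)."""
    return sum(observations[:q]) > q // 2

transient = [True, False, False, False, False]   # one-off glitch
permanent = [True, True, False, True, True]      # persistent failure
print(is_faulty(transient, 5), is_faulty(permanent, 5))  # False True
```

Raising q trades detection latency for a lower false-alarm rate, which matches the accuracy/false-alarm metrics the abstract evaluates.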

Kumar Nitesh, Prasanta K. Jana
Secured Categorization and Group Encounter Based Dissemination of Post Disaster Situational Data Using Peer-to-Peer Delay Tolerant Network

Despite concerted efforts for relaying crucial situational information, disaster relief volunteers experience significant communication challenges owing to failures of critical infrastructure and longstanding power outages in disaster affected areas. Researchers have proposed the use of smart-phones, working in delay tolerant mode, for setting up a peer-to-peer network enabling post disaster communication. In such a network, volunteers, belonging to different rescue groups, relay situational messages containing needs and requirements of different categories to their respective relief camps. Delivery of such messages containing heterogeneous requirements to appropriate relief camps calls for on-the-fly categorization of messages according to their content. But, due to possible presence of malicious and unscrupulous entities in the network, content of sensitive situational messages cannot be made accessible even if that helps in categorization. To address this issue, we, in this paper, propose a secured message categorization technique that enables forwarder nodes to categorize messages without compromising on their confidentiality. Moreover, due to group dynamics and interaction pattern among groups, volunteers of a particular group encounter other volunteers of their own group (or groups offering allied services) more often than volunteers of other groups. Therefore, we also propose a forwarding scheme that routes messages, destined to a particular relief camp, through volunteers of that group or who encounter members of that group most frequently. This expedites the delivery of categorized messages to their appropriate destinations.

Souvik Basu, Siuli Roy
Lifetime Maximization in Heterogeneous Wireless Sensor Network Based on Metaheuristic Approach

Increasing the lifetime of a heterogeneous wireless sensor network (WSN) while minimizing run time using an improved ACO is an important issue, since computational time matters greatly in such search algorithms. In this methodology, the maximum number of disjoint connected covers is found that fulfills coverage and connectivity of the network. A construction graph is designed in which each vertex denotes the assignment of a device to a subset. The ants find an optimal path on the construction graph that maximizes the number of connected covers, guided by pheromone and heuristic information: the pheromone encodes the connected covers built so far and is used in the search, while the heuristic information captures the desirability of a device assignment. The proposed metaheuristic approach maximizes network lifetime and minimizes the computational time of the search while satisfying both sensing coverage and network connectivity.
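The pheromone-plus-heuristic selection step common to ACO methods can be sketched as a roulette-wheel choice weighted by pheromone^α · heuristic^β. This is a generic ACO illustration under stated assumptions; the option names, values and exponents are not taken from the paper.

```python
import random

# Generic ACO selection sketch: an ant picks an option with probability
# proportional to pheromone^alpha * heuristic^beta. Names, values and
# exponents are illustrative assumptions.

def choose(options, alpha=1.0, beta=2.0, rng=random.Random(42)):
    """options maps a label to a (pheromone, heuristic) pair."""
    scores = {o: (p ** alpha) * (h ** beta) for o, (p, h) in options.items()}
    total = sum(scores.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for o, s in scores.items():
        acc += s
        if r <= acc:
            return o
    return o  # numerical fallback

options = {"cover1": (0.8, 0.9), "cover2": (0.3, 0.4)}
print(choose(options))
```

After each iteration, pheromone on the edges of good solutions would be reinforced and the rest evaporated, biasing later ants toward assignments that extend more connected covers.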

Manisha Bhende, Suvarna Patil, Sanjeev Wagh
Lightweight Trust Model for Clustered WSN

Sensor networks' safety measures are often built on an unrealistic trusted environment, because proposed trust models for WSNs are unsuited to their resource supplies and have high computation overhead. This paper proposes a lightweight and realistic trust model for clustered WSNs (LTM). The model has been designed around the dynamic nature of the actual trust building mechanism to meet the resource constraints of tiny sensor nodes. A trust metrics priority is introduced to emphasize the important tasks of a sensor node, and a dynamic trust updating algorithm that ensures brisk drop and sluggish rise of trust is also proposed. Additionally, a self-adaptive weighted method is defined for trust aggregation, to avoid misjudgment in the aggregated trust calculation. The proposed trust model also provides better resilience against vulnerabilities. We have tested the feasibility of our trust model with MATLAB.
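The "brisk drop, sluggish rise" property can be sketched as an asymmetric update rule: bad interactions pull trust down with a large weight, good interactions raise it with a small one. The weights alpha and beta below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of asymmetric trust updating ("brisk drop, sluggish rise"):
# a misbehavior drops trust quickly, a good interaction raises it
# slowly. alpha/beta values are illustrative assumptions.

def update_trust(trust, good, alpha=0.05, beta=0.4):
    if good:
        return min(1.0, trust + alpha * (1.0 - trust))   # sluggish rise
    return max(0.0, trust - beta * trust)                # brisk drop

t = 0.9
print(round(update_trust(t, good=False), 3),   # one bad interaction
      round(update_trust(t, good=True), 3))    # one good interaction
```

Starting from trust 0.9, a single bad interaction drops it to 0.54 while a good one only lifts it to 0.905, making a compromised node's recovery slow even after it resumes good behavior.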

Moutushi Singh, Abdur Rahaman Sardar, Rashmi Ranjan Sahoo, Koushik Majumder, Sudhabindu Ray, Subir Kumar Sarkar
An Efficient and Secured Routing Protocol for VANET

In recent years, Vehicular Ad hoc Networks (VANETs) have deserved much attention. Routing and security are the two most important concerns in this type of network; a number of routing protocols already exist for vehicular ad hoc networks, but none of them handles routing and security issues side by side. In this paper we propose a new junction-based geographical routing protocol capable of dealing with both. The protocol consists of two modules: (i) for routing, it dynamically selects the appropriate junctions through which a packet must travel to reach its destination; (ii) for security, it introduces the concept of mix-zones to prevent vehicle tracking by unauthorized users. Finally, the performance of the proposed work is compared with some well known existing routing protocols on parameters such as packet delivery ratio and normalized routing load in a simulated environment.

Indrajit Bhattacharya, Subhash Ghosh, Debashis Show
Authentication of the Message through Hop-by-Hop and Secure the Source Nodes in Wireless Sensor Networks

Message authentication is the most effective way to protect data from unauthorized access and from corrupted messages being forwarded in wireless sensor networks (WSNs). For this reason, many message authentication schemes have been developed, based on either symmetric-key or public-key cryptosystems; some suffer from high computational and communication overhead and lack scalability against node compromise attacks. To address these issues, a polynomial-based scheme was recently introduced. However, this scheme and its extensions all share the weakness of a built-in threshold determined by the degree of the polynomial: when the number of messages transmitted exceeds this threshold, the adversary can fully recover the polynomial. While enabling intermediate-node authentication, the proposed scheme allows any node to transmit an unlimited number of messages without suffering the threshold problem. In addition, the VGuard security framework is used to provide source privacy in the network. Both theoretical analysis and simulation results demonstrate that our proposed scheme is more efficient than the polynomial-based approach in terms of computational and communication overhead at various comparable security levels, while providing message source privacy.

B. Anil Kumar, N. Bhaskara Rao, M. S. Sunitha
Rank and Weight Based Protocol for Cluster Head Selection for WSN

With the evolution of wireless sensor networks, interest in their applications has increased considerably; the architecture of a system differs with the application's requirements and characteristics. Nowadays many applications demand hierarchy-based networks, whose key concept is clustering. Some of the most well-known hierarchical routing protocols, such as LEACH, SEP, TEEN, APTEEN and HEED, are discussed in brief; these conventional protocols use diverse strategies to select their cluster heads but still have limitations. Based on those limitations, a new rank- and weight-assignment based protocol, RWBP, has been proposed. This approach considers not only residual energy but also node degree and the distance of nodes from the base station; the node with the higher weight is chosen as cluster head. The objective is balanced distribution of clusters, enhanced lifetime and better efficiency than traditional protocols. The same approach is also applied to multi-hop clustering (multi-hop RWBP), in which the sensing field is divided into more areas and the areas lying farther from the base station send data indirectly via intermediate cluster heads. Simulations are done in MATLAB with a network size of 100x100 meters; the results show better lifetime and a longer stability region compared to LEACH and SEP.
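The weighted cluster-head selection idea can be sketched as a scalar score per node combining residual energy, node degree and (inverse) distance to the base station. The coefficients w1-w3 and the node values below are assumptions for illustration, not RWBP's actual weights.

```python
# Sketch of rank/weight based cluster-head selection: score each node
# on residual energy, normalized degree, and closeness to the base
# station, then pick the highest-weight node. Coefficients w1..w3 and
# node values are illustrative assumptions, not RWBP's parameters.

def weight(node, w1=0.5, w2=0.3, w3=0.2):
    return (w1 * node["energy"]
            + w2 * node["degree_norm"]
            + w3 * (1.0 - node["dist_to_bs_norm"]))  # closer is better

nodes = {
    "A": {"energy": 0.9, "degree_norm": 0.4, "dist_to_bs_norm": 0.8},
    "B": {"energy": 0.6, "degree_norm": 0.9, "dist_to_bs_norm": 0.3},
    "C": {"energy": 0.8, "degree_norm": 0.5, "dist_to_bs_norm": 0.5},
}
head = max(nodes, key=lambda n: weight(nodes[n]))
print(head)  # -> B
```

Node B wins despite lower residual energy because its high degree and short distance to the base station outweigh it, illustrating why a multi-factor weight can balance clusters better than energy alone.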

S. R. Biradar, Gunjan Jain
Backmatter
Metadata
Title
Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2014
edited by
Suresh Chandra Satapathy
Bhabendra Narayan Biswal
Siba K. Udgata
J. K. Mandal
Copyright year
2015
Electronic ISBN
978-3-319-12012-6
Print ISBN
978-3-319-12011-9
DOI
https://doi.org/10.1007/978-3-319-12012-6