
2016 | Book

Information Systems Design and Intelligent Applications

Proceedings of Third International Conference INDIA 2016, Volume 3

Edited by: Suresh Chandra Satapathy, Jyotsna Kumar Mandal, Siba K. Udgata, Vikrant Bhateja

Publisher: Springer India

Book series: Advances in Intelligent Systems and Computing


About this Book

The Third International Conference on INformation Systems Design and Intelligent Applications (INDIA 2016) was held in Visakhapatnam, India during January 8-9, 2016. The book covers all aspects of information system design, computer science and technology, general sciences, and educational research. Following a double-blind review process, a number of high-quality papers were selected and collected in the book, which is composed of three volumes and covers a variety of topics, including natural language processing, artificial intelligence, security and privacy, communications, wireless and sensor networks, microelectronics, circuits and systems, machine learning, soft computing, mobile computing and applications, cloud computing, software engineering, graphics and image processing, rural engineering, e-commerce, e-governance, business computing, molecular computing, nano-computing, chemical computing, intelligent computing for GIS and remote sensing, bio-informatics and bio-computing. These fields are not limited to computer researchers but also include mathematics, chemistry, biology, bio-chemistry, engineering, statistics, and all other disciplines in which computer techniques may assist.

Table of Contents

Frontmatter
A New Private Security Policy Approach for DDoS Attack Defense in NGNs

Nowadays, the Distributed Denial of Service (DDoS) attack is still one of the most common and devastating security threats on the internet. The problem is progressing quickly, and it is becoming more and more difficult to grasp a global view of it. In this paper, we propose a new defense method in which the per-second bandwidth that a server can use for UDP packets is set as a parameter for controlling a DDoS attack, limiting the number of UDP packets allowed. This limit is registered in the private security policy as a parameter for detecting a flood attack. The efficiency of the proposed method was demonstrated in experiments with NS2: the DDoS attack is controlled effectively by the private security policy, and the bandwidth of the regular traffic is maintained.

Dac-Nhuong Le, Vo Nhan Van, Trinh Thi Thuy Giang
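A minimal sketch of the per-second UDP budget the abstract describes might look as follows; the class name, interface and counter-reset realisation are illustrative assumptions, not taken from the paper:

```python
class UdpBandwidthPolicy:
    """Hypothetical sketch of the described policy: a per-second UDP byte
    budget is registered in the security policy, and packets beyond it
    are treated as flood traffic and dropped."""

    def __init__(self, bytes_per_second):
        self.budget = bytes_per_second
        self.used = 0
        self.window = 0  # index of the current one-second window

    def allow(self, timestamp, packet_bytes):
        window = int(timestamp)
        if window != self.window:          # new second: reset the counter
            self.window, self.used = window, 0
        if self.used + packet_bytes > self.budget:
            return False                   # over budget -> likely flood
        self.used += packet_bytes
        return True
```

Regular traffic under the budget passes; a burst exceeding the registered limit within one second is rejected until the next window.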
An Effective Approach for Providing Diverse and Serendipitous Recommendations

Over the years, recommendation systems have successfully suggested relevant items to their users, most popularly through collaborative filtering. However, current recommender systems fail to suggest relevant items that are unknown (novel) and surprising (serendipitous) to their users. We therefore propose a new approach that takes as input the positive ratings of the user, the positive ratings of similar users and the negative ratings of dissimilar users to construct a hybrid system capable of providing all possible information about its users. The major contribution of this paper is to diversify the item suggestions a user is provided with. The results show that, compared to general collaborative filtering, our algorithm achieves better catalogue coverage. The novelty and serendipity results also confirm the success of the proposed algorithm.

Ivy Jain, Hitesh Hasija
Envelope Fluctuation Reduction for WiMAX MIMO-OFDM Signals Using Adaptive Network Fuzzy Inference Systems

In this article, an envelope fluctuation, i.e., peak-to-average power ratio (PAPR), reduction technique is developed and analyzed using an Adaptive-Network-based Fuzzy Inference System (ANFIS) for a multiple-input multiple-output system combined with orthogonal frequency division multiplexing (MIMO-OFDM) under a fading environment. The proposed method involves training the ANFIS structure on MIMO-OFDM signals with low PAPR obtained from the active gradient project (AGP) method, combined with the partial transmit sequence (PTS) PAPR reduction technique. The method approximately reaches the PAPR reduction of the active partial sequence (APS) method, with significantly less computational complexity and convergence time. The results show that the proposed scheme performs better than the other conventional schemes.

Khushboo Pachori, Amit Mishra, Rahul Pachauri, Narendra Singh
Modeling and Performance Analysis of Free Space Quantum Key Distribution

With technological development, the demand for secure communication is growing exponentially. Global secure communication has become crucial with the increasing number of internet applications. Quantum Cryptography (QC), or Quantum Key Distribution (QKD), in that regime promises unconditional security based on the laws of quantum physics. Free space QKD allows longer communication distances with practical secure key rates, aiding secure global key distribution via satellites. This has encouraged many research groups to conduct QKD experiments, but such experiments are very complex and expensive. This paper therefore attempts to establish a model for analysis of free space QKD through simulation. The model is verified against experimental results available in the literature, showing that the simulation approach is effective for performance analysis of such complex systems. The developed model tests parameters like quantum bit error rate and secret key rate against the mean photon number of laser pulses and quantum channel loss, and proves to fit well with the experimental results.

Minal Lopes, Nisha Sarwade
Design of a Low-Delay-Write Model of a TMCAM

In this paper, a novel version of Ternary Magnetic Content-Addressable Memory (TMCAM) is proposed for a low-delay write operation. This is attained through the circuit connections and mainly through the exceptional operational features of a CAM integrated with MTJs. While the previous TMCAM required each MTJ to be written separately, this model removes that limitation. Consequently, the delay is reduced by almost a factor of two in comparison to the previous TMCAM, with a 22 nm CMOS technology used for simulation. The design can be effectively employed in adaptive biomedical signal processing applications, where writing is frequent and delay cannot be compromised.

N. S. Ranjan, Soumitra Pal, Aminul Islam
SLA Based e-Learning Service Provisioning in Cloud

Cloud services allow individuals and businesses to use software and hardware that are managed by third parties at remote locations. The cloud computing model allows access to information and computing resources from anywhere with a working network connection. Cloud computing provides Infrastructure, Software and Platform as a Service to its users, with huge data storage, networks, processing power, and applications. Compared to traditional computing, its resources and services are available and scalable according to consumers' needs. Hence, e-Learning is an interesting area where the cloud can be leveraged to bring online education across the globe. Provisioning of specific services always depends on relevant Service Level Agreements (SLAs) that need to be agreed by both parties: provider and consumer. The objective of this paper is to provision an e-Learning service on the cloud. To accomplish this goal, negotiation of the SLA parameters specific to the e-Learning service is necessary.

Mridul Paul, Ajanta Das
Estimating the Similarities of G7 Countries Using Economic Parameters

The contribution of this paper is to estimate the similarities and differences between and within the G7 countries: Canada, France, Germany, Italy, Japan, the United Kingdom and the United States. We used six parameters (GDP per capita, Employment, Population, General government revenue, General government total expenditure, Gross national savings) which correlate widely with economic growth in the G7 countries. For each parameter, the null hypothesis is that the means of the seven countries are identical, tested using the one-way analysis of variance (ANOVA) technique. Furthermore, the complete data set is evaluated to test the equivalence of the means between the G7 group and each of the seven countries. The results show significant differences within the group of G7 countries for the selected parameters.

Swati Hira, Anita Bai, P. S. Deshpande
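The one-way ANOVA test used above reduces to an F statistic comparing between-group and within-group variance; the following pure-Python function is a generic illustration of that computation, not the authors' code:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of samples, one list of
    observations per group (e.g. one per country)."""
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups)                # total observations
    grand = sum(sum(g) for g in groups) / n        # grand mean
    # between-group sum of squares (k - 1 degrees of freedom)
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares (n - k degrees of freedom)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

A large F (relative to the F distribution's critical value) leads to rejecting the hypothesis that all group means are equal.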
Offline Malayalam Character Recognition: A Comparative Study Using Multiple Classifier Combination Techniques

Malayalam character recognition has gained immense popularity in the past few years. The intrinsic challenges of this domain, along with the large character set of Malayalam, further complicate the recognition process. Here we present a comparative evaluation of different multiple classifier combination techniques for the offline recognition of Malayalam characters. We extract three different features from the preprocessed character images: Density features, Run-length count and Projection profiles. These features are fed as input to three different neural networks, and the results of the three networks are combined and evaluated using six classifier combination methods: the Max Rule, Sum Rule, Product Rule, Borda Count Rule, Majority Voting and Weighted Majority Voting schemes. The best recognition accuracy of 97.67 % was attained using the Weighted Majority scheme considering the top 3 results.

Anitha Mary M. O. Chacko, K. S. Anil Kumar
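Two of the combination rules named above are easy to state concretely. This sketch (a generic illustration, not the authors' implementation) combines per-class scores by the Sum Rule and label votes by Weighted Majority:

```python
def sum_rule(score_lists):
    """Combine per-class score vectors from several classifiers by
    summing; return the index of the winning class."""
    n_classes = len(score_lists[0])
    totals = [sum(s[c] for s in score_lists) for c in range(n_classes)]
    return totals.index(max(totals))

def weighted_majority(predictions, weights):
    """Each classifier votes for one label; votes are weighted (e.g. by
    validation accuracy) and the heaviest label wins."""
    tally = {}
    for label, w in zip(predictions, weights):
        tally[label] = tally.get(label, 0.0) + w
    return max(tally, key=tally.get)
```

With weighting, a single accurate classifier can outvote two weaker ones that agree with each other.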
A Comparative Study on Load Balancing Algorithms for SIP Servers

The widespread acceptance and usage of smartphones deployed with high-end operating systems have made Voice over IP (VoIP) applications extremely popular and prevalent globally. A large set of users use a plethora of Internet-based applications after configuring them on their devices. SIP proxy servers are predominantly used in VoIP networks for the routing challenges arising from the requirement of supporting millions of concurrent VoIP calls and for increasing the QoS (Quality of Service) of the routed calls. For intelligent load balancing, call dispatchers are used to achieve high throughput and minimum response times by balancing the calls amongst SIP proxy servers. Several load balancing algorithms are used, such as round robin, Call-Join-Shortest-Queue (CJSQ), Transaction-Join-Shortest-Queue (TJSQ) and Transaction-Least-Work-Left (TLWL). In this paper, we present a comparative analysis of load balancing algorithms for SIP servers with respect to call response time and server throughput.

Abdullah Akbar, S. Mahaboob Basha, Syed Abdul Sattar
Design of Wireless Sensor Network Based Embedded Systems for Aquaculture Environment Monitoring

This paper proposes a process/environmental parameter monitoring framework based on Wireless Sensor Networks (WSNs) using embedded systems. The developed system has many applications, particularly in aquaculture, tea plantations, vineyards, precision agriculture and greenhouse monitoring; the complexity of the system increases for applications involving aquaculture. The developed systems are tailored for sensing in wide farmlands with little or no human supervision. The fully designed and developed system consists of sensors for monitoring the parameters, sensor nodes with a wireless network topology, and a gateway for collecting the information from the sensor nodes and transmitting it to the central control unit. Database, real-time monitoring and visualization facilities, and control of sensor nodes and sensors are provided at the central control unit. Sensor nodes, sensor assemblies and gateways must withstand demanding environmental conditions like rain, dust, heat, moisture and relative humidity. With these systems deployed in large farms, scientists will have far more field data with very little human intervention. The acquired data can be used to provide quantitative indications about the farm's status, easing analysis and decision making. In this paper we focus our discussion on WSNs in the field of aquaculture.

Sai Krishna Vaddadi, Shashikant Sadistap
Touch-to-Learn: A Tangible Learning System for Hard-of-Hearing Children

Children’s learning styles can be categorized mainly into visual, auditory, tactile and kinesthetic learning. The lack of auditory learning capability deprives hard-of-hearing children of the traditional learning environment. To facilitate the learning of profoundly deaf children we propose “Touch-to-Learn”, a tangible learning system for elementary learners. A prototype of the system is presented focusing on children’s dental health and proper eating habits, which can be extended to other learning areas. The system makes effective use of technological development by embedding technology into learning. It also bridges the gap between physical and digital interaction by introducing technologies like RFID and the Wii Remote, providing a tangible learning environment and thus accelerating the learning process of hard-of-hearing children.

Mitali Sinha, Suman Deb, Sonia Nandi
Creating Low Cost Multi-gesture Device Control by Using Depth Sensing

In real life, most actions are performed in a gradient manner of interaction; not every action produces a binary result. Holding, touching and reacting require knowledge of weight, surface, friction, etc., and depending on these the user applies force and holds the object. In this paper we try to introduce a similar kind of sensation for interacting with an object. The goal is to build a natural way to interact with different devices. Natural interaction involves gradient control of the limbs, which we name “Gradient gesture interact of things”. Gesture control has long been practiced for devices such as TVs, electrical appliances and air conditioners, but with the introduction of gradient gesture interaction, switching on or off moves far beyond binary control: intermediate segments of operation can be controlled, which we term gradient control. For example, the intensity of a light, the speed of a fan or the air conditioning temperature can be controlled by the user's proximity to the device. Gradient gesture interaction is particularly important for gaming and interaction, where new gaming rules can be introduced, in a more natural way, depending on people's proximity to each other. It can be used for entertainment purposes as well.

Sonia Nandi, Suman Deb, Mitali Sinha
Evaluation of Glomerular Filtration Rate by Single Frame Method Applying GATES Formula

This paper aims to assess the utility of the single-frame method in the calculation of the Glomerular Filtration Rate (GFR) using the Gates equation, and to compare it with the GFR calculated by grouping frames. The DICOM image has a number of frames in which the activity of the tracer is recorded at successive instants. Here we take a single frame from an image collection of 60 frames, where the activity of the radiotracer is at its maximum. The activity is expected to be maximal at the 40th frame, the conjunction of the filtration phase and the excretion phase, which is confirmed by visual inspection of the image. The GFR is calculated using the Gates formula, and the counts are obtained with the semi-automatic segmentation tool Fiji (ImageJ). This renal uptake study provides the structural and functional information of the traditional methods, and proves efficient when compared with their mathematical complexity.

A. Shiva, Palla Sri Harsha, Kumar T. Rajamani, Siva Subramanyam, Siva Sankar Sai
Design and Development of Cost Effective Wearable Glove for Automotive Industry

In an automotive production unit, a considerable amount of time is spent on testing and observation, so a need arises to address this issue with an automation technique. In this paper, a cost-effective wearable glove is proposed for monitoring automotive parameters such as temperature, flaw detection, Electro Motive Force (EMF) leakage and Direct Current (DC) measurement. To perform these measurements, various sensor units, such as a temperature sensor for monitoring temperature and an ultrasonic sensor for flaw detection, are interfaced with an Arduino board, and the monitored parameters are displayed on a graphical Liquid Crystal Display (LCD). The proposed design for a wearable glove using an Arduino board enables time-efficient continual monitoring of the parameters.

Manoharan Sangeetha, Gautham Raj Vijayaragavan, R. L. Raghav, K. P. Phani
Tampering Localization in Digital Image Using First Two Digit Probability Features

In this paper, we use the first digit probability distribution to identify inconsistencies present in a tampered JPEG image. Our empirical analysis shows that the probabilities of the first two digits are significantly affected by tampering operations. Thus, prima facie tampering can be efficiently localized using this smaller feature set, effectively reducing localization time. We train an SVM classifier using the first two digit probabilities of single and double compressed images, which can then locate tampering present in a double compressed image. Comparison of the proposed algorithm with other state-of-the-art techniques shows very promising results.

Archana V. Mire, Sanjay B. Dhok, Narendra J. Mistry, Prakash D. Porey
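The feature extraction step described above amounts to estimating empirical leading-digit distributions (a Benford's-law-style statistic). The following is an assumption of how such features could be computed over integer-valued (e.g. quantized DCT) coefficients, not the authors' code:

```python
from collections import Counter

def first_two_digit_probs(coeffs):
    """Empirical probabilities of the leading digit (1-9) and of the
    first two digits (10-99) of the magnitudes of nonzero integer
    coefficients."""
    mags = [abs(c) for c in coeffs if c != 0]
    first = Counter(int(str(m)[0]) for m in mags)
    two = Counter(int(str(m)[:2]) for m in mags if m >= 10)
    n1 = sum(first.values())
    n2 = sum(two.values()) or 1            # guard: no two-digit magnitudes
    p1 = {d: first[d] / n1 for d in range(1, 10)}
    p2 = {d: two[d] / n2 for d in range(10, 100)}
    return p1, p2
```

The resulting probability vectors would then serve as the SVM feature set; double compression disturbs them in a detectable way.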
Classification of Bank Direct Marketing Data Using Subsets of Training Data

Nowadays, most business organizations practice direct marketing, and one of its promising application areas is the banking and financial industry. A classification technique using subsets of training data is proposed in this paper. We used real-world data from a direct marketing campaign, specifically a telemarketing campaign, for experimentation. The objective of our experiment is to forecast the probability of a term-deposit plan subscription. In our proposed method, we use a customer segmentation process to group individual customers according to their demographic features, employing the X-means clustering algorithm. On the basis of the demographic features of individual customers, a few appropriate groups of customers are extracted from the entire customer database. We tested our proposed method of classifier training using three of the most widely used classifiers, namely Naïve Bayes, Decision Tree and Support Vector Machine. The result obtained using our proposed method on the banking data is better than that reported in previous work on the same data.

Debaditya Barman, Kamal Kumar Shaw, Anil Tudu, Nirmalya Chowdhury
Offline Writer Identification and Verification—A State-of-the-Art

In forensic science, various unique biometric traits of humans, such as fingerprints, signatures and retina scans, are used to analyse forensic evidence. The same can be applied to handwriting analysis. Automatic Writer Identification and Verification (AWIV) is a study that combines the field of forensic analysis with computer vision and pattern recognition. This paper presents a survey of the literature on offline handwritten writer identification/verification, covering the types of data, features and classification approaches attempted to date in different languages and scripts. The analysis of these approaches is described with a view to further enhancement and adaptation of the techniques to other languages and scripts.

Chayan Halder, Sk. Md. Obaidullah, Kaushik Roy
Handwritten Oriya Digit Recognition Using Maximum Common Subgraph Based Similarity Measures

Optical Character Recognition has attracted the attention of many researchers lately. In the current work we propose a graph-based approach to the recognition of handwritten Oriya digits. Our proposal includes a procedure to convert handwritten digits into graphs, followed by computation of the maximum common subgraph. Finally, similarity measures between graphs are used to design a feature vector, and classification is performed using the K-nearest neighbor algorithm. After training the system on 5000 images, an accuracy of 97.64 % was achieved on a test set of 2200 images. The result obtained shows the robustness of our approach.

Swarnendu Ghosh, Nibaran Das, Mahantapas Kundu, Mita Nasipuri
Design of Non-volatile SRAM Cell Using Memristor

Emerging chip technologies employ a power-off mode to diminish the power dissipation of chips. Non-volatile SRAM (NvSRAM) enables a chip to retain its data during power-off modes. This non-volatility can be achieved through memristor memory technology, a promising emerging technology with unique properties like low power, high density and good scalability. This paper provides a detailed study of the memristor and proposes a memristor-based 7T2M NvSRAM cell. The cell incorporates two memristors, which store the bit held in the 6T SRAM latch, and a 1T switch, which helps to restore the previously written bit after power supply failures, thereby making the SRAM non-volatile.

Soumitra Pal, N. S. Ranjan
Evolutionary Algorithm Based LFC of Single Area Thermal Power System with Different Steam Configurations and Non-linearity

Load Frequency Control (LFC) of a single area thermal power system is presented in this work. The commonly used industrial Proportional-Integral-Derivative (PID) controller is considered as a supplementary controller, and its parameters are optimized using an evolutionary algorithm, Ant Colony Optimization (ACO). Three cost functions are considered to optimize the controller gain values: Integral Absolute Error (IAE), Integral Time Absolute Error (ITAE) and Integral Square Error (ISE). Three different steam configurations (non-reheat turbine, single stage reheat turbine and double stage reheat turbine) are also considered in this work. Further, the performance of the proposed algorithm is proved by adding non-linearities (Generation Rate Constraint, Governor Dead Band and Boiler Dynamics) and a Step Load Perturbation (SLP) to the same power system in all three steam configurations. Time domain analysis is used to study the performance of the power system under the different scenarios.

K. Jagatheesan, B. Anand, Nilanjan Dey
Game Theory and Its Applications in Machine Learning

Machine learning is a discipline that deals with the study of algorithms that can learn from data. Typically, these algorithms work by generating a model built from the observed data, and then employ the generated model to predict and make decisions. Many problems in machine learning can be translated into multi-objective optimization problems, in which two or more conflicting objectives have to be optimized at the same time. Mapping multi-objective optimization problems to game theory can give stable solutions. This paper presents an introduction to game theory and surveys how game theory is applied to some machine learning problems.

J. Ujwala Rekha, K. Shahu Chatrapati, A. Vinaya Babu
A Study on Speech Processing

Speech is the most natural means of communication in human-to-human interaction. Automatic Speech Recognition (ASR) is the application of technology to developing machines that can autonomously transcribe speech into text in real time. This paper presents a short review of ASR systems. Fundamentally, the design of a speech recognition system involves three major processes: feature extraction, acoustic modeling and classification. Accordingly, emphasis is laid on describing the essential principles of the various techniques employed in each of these processes. The paper also presents the milestones in speech processing research to date.

J. Ujwala Rekha, K. Shahu Chatrapati, A. Vinaya Babu
Forest Type Classification: A Hybrid NN-GA Model Based Approach

Recent research has used geographically weighted variables calculated for two tree species, Cryptomeria japonica (Sugi, or Japanese Cedar) and Chamaecyparis obtusa (Hinoki, or Japanese Cypress), to classify the two species and one mixed forest class. In a machine learning context it has been found difficult to predict that a pixel belongs to a specific class in a heterogeneous landscape image, especially in forest images, as ground features of nearby pixels of different classes have very similar spectral characteristics. In the present work the authors propose a GA-trained Neural Network classifier to tackle the task. Traditional local-search-based weight optimization algorithms may get trapped in local optima and train the network poorly; an NN trained with a GA (NN-GA) overcomes the problem by gradually optimizing the input weight vector of the NN. The performance of NN-GA has been compared with NN, SVM and Random Forest classifiers in terms of accuracy, precision, recall, F-Measure and the Kappa statistic. The results are satisfactory, and a reasonable improvement over the existing performances in the literature has been made using NN-GA.

Sankhadeep Chatterjee, Subhodeep Ghosh, Subham Dawn, Sirshendu Hore, Nilanjan Dey
Optimizing Technique to Improve the Speed of Data Throughput Through Pipeline

High speed data processing is a very important factor in supercomputers, and parallel computing is one of the key elements used in fast computers. Different methods are actively employed to realize parallel computing. In the present paper the pipeline method is discussed, along with its flaws and different clock schemes. Data propagation delay in different existing techniques is discussed, and a new method to optimize the data propagation delay is presented. The new method is compared with the existing methods, and designs of the new technique with simulation results are presented. The simulation results are obtained from a Virtual Digital Integrator.

Nandigam Suresh Kumar, D. V. Rama Koti Reddy
An Improved Data Hiding Scheme in Motion Vectors of Video Streams

An inter-frame data hiding scheme in the motion vectors of compressed video streams is proposed in this paper. The main objective is to achieve a relatively high embedding capacity while preserving the encoding and decoding schemes, where the data hiding is based on changing the motion vectors. Moreover, the approach tries to preserve the perceptual quality of the compressed video streams, making the algorithm suitable for real-time applications. The proposed approach is also compared against the conventional LSB-based approach, and both subjective and objective quality analyses were recorded for different experimental conditions.

K. Sridhar, Syed Abdul Sattar, M. Chandra Mohan
Hiding Sensitive Items Using Pearson’s Correlation Coefficient Weighing Mechanism

Data mining algorithms extract high level information from massive volumes of data. Along with the advantage of extracting useful patterns, this also poses the threat of revealing sensitive information. We can hide sensitive information by using privacy preserving data mining. As association rules are a key tool for finding patterns, certain rules can be categorized as sensitive if their disclosure risk is above a specific threshold. In the literature, different techniques exist for hiding sensitive information. Some of these techniques are based on the support and confidence framework, which suffers from limitations: choosing suitable values of these measures can cause loss of useful information, generation of a large number of association rules, and loss of database accuracy. We propose a correlation based approach which uses measures other than support and confidence, namely the correlation among items in sensitive item sets, to hide the sensitive items in the database.

K. Srinivasa Rao, Ch. Suresh Babu, A. Damodaram
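The correlation measure the approach relies on is the standard Pearson coefficient, here taken between item columns of a 0/1 transaction matrix. This sketch is a generic illustration of that measure, not the authors' weighing mechanism:

```python
import math

def pearson(x, y):
    """Pearson's correlation coefficient between two item columns of a
    0/1 transaction matrix (1 = item present in the transaction)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Items that always co-occur get a coefficient of +1, items that never co-occur get -1; such strongly correlated partners of a sensitive item are natural candidates for the weighing step.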
OpenCV Based Implementation of Zhang-Suen Thinning Algorithm Using Java for Arabic Text Recognition

The aim of this research work is to implement the Zhang-Suen thinning algorithm on an OpenCV-based Java platform. The novelty lies in the comparative study of the results obtained with the proposed implementation against existing implementations of the Zhang-Suen thinning algorithm in Matlab and C++, comparing the performance factor of computation time. The experimental results achieved by the OpenCV-based Java platform are faster when compared to Matlab and C++.

Abdul Khader Jilani Saudagar, Habeeb Vulla Mohammed
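The Zhang-Suen algorithm itself is language-independent: two alternating sub-iterations delete boundary pixels that satisfy neighbour-count, connectivity and direction conditions until the image stops changing. A compact reference sketch (in Python rather than the paper's Java, purely for illustration):

```python
def zhang_suen(img):
    """Thin a binary image (list of rows of 0/1) to a one-pixel-wide
    skeleton with the Zhang-Suen algorithm."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]          # work on a copy

    def neighbours(y, x):
        # P2..P9, clockwise from the pixel directly above P1
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):                # the two sub-iterations
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    p = neighbours(y, x)
                    b = sum(p)             # B(P1): nonzero neighbours
                    # A(P1): 0->1 transitions in the cycle P2..P9,P2
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))
                    p2, p4, p6, p8 = p[0], p[2], p[4], p[6]
                    if step == 0:
                        cond = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        cond = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:         # delete simultaneously
                img[y][x] = 0
                changed = True
    return img
```

Marking all deletable pixels before erasing any of them (per sub-iteration) is essential; deleting on the fly changes the neighbourhoods of pixels visited later.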
Numerical Modeling of Twin Band MIMO Antenna

Over the last decade, multiple-input multiple-output (MIMO) systems have received considerable attention. There are some limitations in obtaining the most from MIMO, such as mutual coupling between antenna elements. Mutual coupling, and therefore inter-element spacing, has important effects on the channel capacity of a MIMO communication system, its error rate and the ambiguity of a MIMO radar system. The effect of mutual coupling on the MIMO system has been studied, and different array configurations are considered; different configurations show different mutual coupling behaviour. After modelling and simulation, the array was designed, implemented and finally verified using a Vector Network Analyzer. In this paper, a compact twin-band MIMO antenna with low mutual coupling, operating over the range of 2.1–2.845 GHz, is proposed.

Vilas V. Mapare, G. G. Sarate
Performance Evaluation of Video-Based Face Recognition Approaches for Online Video Contextual Advertisement User-Oriented System

In this research, we propose an online video contextual advertisement user-oriented system. Our system combines video-based face recognition from the camera, using machine learning models, with a multimedia communications and networking streaming architecture that uses a meta-data structure for video data storage. The real images captured by the camera are analyzed based on a predefined set of conditions to determine the appropriate object classes. Based on the identified object class, the system accesses the multimedia advertising content database and automatically selects and plays the appropriate content. We analyse existing approaches to face recognition in videos and age estimation from face images. Our experiments analyze and evaluate the performance obtained when age analysis from faces is integrated with face identification, in order to select the optimal approach for our system.

Le Nguyen Bao, Dac-Nhuong Le, Le Van Chung, Gia Nhu Nguyen
A Survey on Power Gating Techniques in Low Power VLSI Design

The most effective technique to reduce dynamic power is supply voltage reduction through technology scaling, which reduces the threshold voltage. Under deep submicron technology, reduction in threshold voltage increases leakage currents, gate tunneling currents and leakage power in standby mode. Most handheld devices have long standby modes, during which leakage currents contribute to leakage power dissipation. In this paper, various leakage power reduction and charge recycling techniques, data retention of memories, and various power gating techniques are discussed in detail.

G. Srikanth, Bhanu M. Bhaskara, M. Asha Rani
Phase Based Mel Frequency Cepstral Coefficients for Speaker Identification

In this paper, new Phase-based Mel Frequency Cepstral Coefficients (PMFCC) are used for speaker identification. A GMM with VQ is used as the classifier for classifying speakers. The identification performance of the proposed features is compared with that of MFCC features and phase features, and the performance of the PMFCC features is found to be superior to both. A database of ten Hindi digits from fifty speakers is used for the simulations. The paper also explores the usefulness of phase information for speaker recognition.

Sumit Srivastava, Mahesh Chandra, G. Sahoo
A New Block Least Mean Square Algorithm for Improved Active Noise Cancellation

Acoustic noise is an undesired disturbance present in the information-carrying signal of telecommunication systems; it degrades speech quality and hampers communication. Adaptive noise reduction approximates signals corrupted by additive noise. Its advantage is that, with no prior estimate of the input or noise signal, it attains levels of noise reduction that are difficult or impossible to achieve with other noise-cancelling approaches. Adaptive filtering before subtraction also allows the treatment of inputs that are deterministic or stochastic, stationary or time-varying. This paper analyzes and compares several adaptive algorithms for noise cancellation, discussing the strengths, weaknesses, and practical effectiveness of each. Noise cancellation on speech signals is performed using three existing algorithms (the Least Mean Square, Normalized Least Mean Square, and Recursive Least Square algorithms) and a proposed advanced Block Least Mean Square algorithm. The algorithms are simulated in Simulink, and conclusions are drawn by identifying the algorithms that deliver efficient performance with low computational complexity.
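As an illustration of the block variant discussed in this abstract, here is a minimal NumPy sketch (not the authors' algorithm; the filter length, block size and step size are assumed values) in which the weights are updated once per block using the averaged gradient:

```python
import numpy as np

def block_lms(x, d, L=8, B=8, mu=0.05):
    """Block LMS: the weight vector is updated once per block of B
    samples using the averaged gradient, instead of every sample."""
    w = np.zeros(L)
    e = np.zeros(len(x))
    for start in range(L, len(x) - B + 1, B):
        grad = np.zeros(L)
        for n in range(start, start + B):
            u = x[n - L + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-L+1]
            e[n] = d[n] - w @ u            # error = desired - filter output
            grad += e[n] * u
        w += (mu / B) * grad               # single update per block
    return e, w
```

With a white-noise reference and a desired signal produced by an unknown FIR channel, the weights converge toward the channel coefficients while the residual error shrinks.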

Monalisha Ghosh, Monali Dhal, Pankaj Goel, Asutosh Kar, Shibalik Mohapatra, Mahesh Chandra
An Improved Feedback Filtered-X NLMS Algorithm for Noise Cancellation

Noise cancellation is an acute concern in the modern age, as noise creates hindrances in day-to-day communication. Traditional remedies for noise in a primary signal include noise barriers, noise absorbers, silencers, and the like. The modern approach instead suppresses noise by continuously adapting the filter weights of an adaptive filter, a ground-breaking shift that owes its success to adaptive algorithms. Premier noise cancellation algorithms include LMS and RLS, with later variants such as Normalized LMS, Fractional LMS, Differential Normalized LMS, and Filtered-x LMS emerging from active work in this field. This paper provides an improved approach to noise cancellation in noisy environments using a newly developed variant of the Filtered-x LMS (FxLMS) algorithm, the Feedback FxLMS (FB-FxLMS). A detailed analysis of the existing FxLMS and FB-FxLMS algorithms is first carried out, along with the mathematics of the newly proposed algorithm. The proposed algorithm is applied to noise cancellation, and results for each individual process are produced to compare the existing and proposed algorithms.

Bibhudatta Mishra, Ashok Behuria, Pankaj Goel, Asutosh Kar, Shibalik Mohapatra, Mahesh Chandra
Application of Internet of Things (IoT) for Smart Process Manufacturing in Indian Packaging Industry

Smart Manufacturing is the need of the hour in India, with growing concerns about environmental safety and energy conservation and the need for agile, efficient practices to help Indian firms remain competitive against low-cost mass manufacturing and imports. The twelfth five-year plan in India (2012–17) (Ministry of Commerce, 2012) has identified low technology intensity, inadequate costs and high transaction costs as major constraints. Smart Manufacturing can help companies gather and consolidate data on a near real-time basis at each step of their operations and draw meaningful insights from existing instrumentation, e.g. sensors in valves, motors, and pressure and energy meters, by connecting them to a digital network where the data they generate is constantly stored for proactive decision making. This paper critically examines the role of the Internet of Things in taking flexible packaging manufacturing to the next level in India, with relevant citations from an Indian company, and in alleviating manufacturing pitfalls due to infrastructural issues, be it energy, transportation or machine efficiency.

Ravi Ramakrishnan, Loveleen Gaur
Function Optimization Using Robust Simulated Annealing

In today's world, researchers spend more time fine-tuning algorithms than designing and implementing them. This is especially true when developing heuristics and metaheuristics, where the correct choice of search-parameter values has a considerable effect on the performance of the procedure. Determining optimal parameters is a continuous engineering task whose goals are to reduce production costs and achieve the desired product quality. In this research, the simulated annealing algorithm is applied to function optimization, and the Taguchi design method, a statistical analysis technique, is used to tune its parameters for optimum output. The outcomes of various input combinations are analyzed and the best combination among them is identified. Among all the factors considered during experimentation, those whose values have a significant effect on the output are discovered.
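A minimal sketch of the simulated annealing loop described above; its parameters (initial temperature, cooling rate, proposal width) are exactly the kind of factors a Taguchi design would tune, and all values here are illustrative assumptions, not the paper's settings:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=2000, seed=1):
    """Minimise f over the reals via simulated annealing. t0, cooling
    and the proposal width are tunable factors."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)          # neighbour proposal
        fc = f(cand)
        # accept improvements always; worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
        t *= cooling                           # geometric cooling schedule
    return best, fbest
```

On a convex test function the loop settles near the minimiser well within the step budget.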

Hari Mohan Pandey, Ahalya Gajendran
Analysis and Optimization of Feature Extraction Techniques for Content Based Image Retrieval

The need for improved image processing methods to index ever-growing image databases has created a pressing demand for content-based image retrieval systems, which act as search engines for images and as an indexing technique for large image collections. In this paper, an approach to improve the accuracy of content-based image retrieval is proposed that uses the genetic algorithm, a powerful global exploration method. Two classification techniques, Neural Network and Nearest Neighbor, are compared in the absence and presence of the genetic algorithm. The computational results show a significant increase in accuracy when the genetic algorithm is incorporated into either classification technique.
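To illustrate how a genetic algorithm can search over feature subsets for retrieval, here is a toy sketch in which a binary mask selects features and the fitness function is a stand-in for retrieval accuracy; population size, generation count and mutation rate are assumed values, not the paper's configuration:

```python
import random

def genetic_feature_search(fitness, n_features, pop=30, gens=40, pmut=0.02, seed=0):
    """Toy GA over binary feature masks. `fitness` scores a mask and
    stands in for the retrieval accuracy of the selected subset."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=fitness, reverse=True)
        P = scored[:pop // 2]                          # elitist selection
        while len(P) < pop:
            a, b = rng.sample(P[:pop // 2], 2)
            cut = rng.randrange(1, n_features)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < pmut) for g in child]  # bit-flip mutation
            P.append(child)
    return max(P, key=fitness)
```

With a OneMax-style fitness (count of selected bits) the GA quickly fills in nearly all ones, demonstrating the selection/crossover/mutation loop.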

Kavita Chauhan, Shanu Sharma
Parametric Curve Based Human Gait Recognition

In this paper we establish a baseline technique for identifying humans by their body structure and gait. A unique human identification system based on self-extracted gait biometric features is presented. Recurring gait analysis is performed to deduce key frames from the gait sequence. The extracted gait features are the height, hip, neck and knee trajectories of the human silhouette. We propose two new parametric-curve representations of the gait pattern, based on Bézier and Hermite curves. The proposed approach has been applied to the SOTON covariate database, which comprises eleven subjects. Testing samples are compared to training samples using normalized correlation, and subjects are classified by nearest-neighbor matching among the correlation scores. The experimental results show that the stated approach is successful in human identification.
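The Bézier representation mentioned above can be evaluated with de Casteljau's algorithm. This is a generic sketch, with the control points standing in for sampled trajectory coordinates such as the knee path (the paper's actual fitting procedure is not reproduced):

```python
def bezier(points, t):
    """Evaluate a Bézier curve at t in [0, 1] by de Casteljau's
    algorithm of repeated linear interpolation between control points."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

For a quadratic curve the midpoint lands at the average of the chord midpoints, and the curve interpolates its endpoints.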

Parul Arora, Smriti Srivastava, Rachit Jain, Prerna Tomar
An Energy Efficient Proposed Framework for Time Synchronization Problem of Wireless Sensor Network

Nowadays, wireless sensor networks (WSNs) are used in various applications where full or partial time synchronization plays a vital role. The basic aim of time synchronization is to equalize the local time across all nodes in the network. This paper proposes a new framework for the time synchronization problem in WSNs. An analysis using the proposed framework leads to the conclusion that it consumes less energy than the traditional time synchronization protocols Reference Broadcast Synchronization and the Timing-sync Protocol for Sensor Networks. The proposed framework does not require a Global Positioning System or any other external time reference, as a typical Network Time Protocol deployment for a wireless sensor system does. Based on the characteristics of WSNs, the proposed protocol is categorized as a peer-to-peer, clock-correcting, sender-receiver, network-wide synchronization protocol. Maximum likelihood theory is used to analyze the clock offset, and it was observed that a resynchronization interval is required to achieve a specific level of synchronization accuracy. Results obtained by simulating the WSN in NS2 demonstrate the energy-efficient nature of the proposed protocol.
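A sender-receiver protocol such as TPSN estimates clock offset and propagation delay from a two-way message exchange. This is a sketch of that standard computation, not of the proposed framework itself:

```python
def offset_and_delay(t1, t2, t3, t4):
    """TPSN-style two-way exchange: node A sends at t1 (A's clock),
    B receives at t2 and replies at t3 (both B's clock), and A
    receives the reply at t4 (A's clock). Assuming symmetric link
    delay, returns B's clock offset relative to A and the one-way
    propagation delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay
```

For example, with a true offset of 5 time units and a one-way delay of 2, the timestamps (0, 7, 8, 5) recover exactly those values.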

Divya Upadhyay, P. Banerjee
Epigenetic and Hybrid Intelligence in Mining Patterns

Epigenetics is part of a 'postgenomic' research paradigm that increasingly qualifies the theoretical model of a unidirectional causal link from DNA → RNA → protein → phenotype. Epigenetics literally means 'above' or 'on top of' genetics. It refers to modifications of DNA that turn genes 'on' or 'off.' These changes do not alter the DNA sequence; instead, they affect how cells 'read' genes. One example of an epigenetic change is DNA methylation: the addition of a methyl group, or 'chemical cap,' to part of the DNA molecule, which prevents certain genes from being expressed. In this paper, an algorithm named i-DNA-M is proposed to improve the mining of intelligent patterns in a dataset. The discovered patterns further help to reconstruct phylogenetic networks. The idea behind i-DNA-M is to rearrange the input sequences so that the new arrangement yields a better tree, since the patterns or motifs affect the outcome of the phylogenetic network.

Malik Shamita, Singh Richa
A Technical Review on LVRT of DFIG Systems

The most important issue with the doubly fed induction generator (DFIG) wind turbine is its low-voltage ride-through performance, and several techniques have been introduced to solve this problem. This paper discusses some of the most commonly used solutions for Low Voltage Ride Through (LVRT) of wind turbine generators, the most important capability to be attained according to grid codes. A technical survey with a comparison of these techniques is presented.

Pretty Mary Tom, J. Belwin Edward, Avagaddi Prasad, A. V. Soumya, K. Ravi
An AIS Based Approach for Extraction of PV Module Parameters

This article presents parameter extraction for a photovoltaic (PV) panel using an artificial immune system (AIS), compared with a genetic algorithm (GA), using MATLAB/Simulink under different environmental conditions (irradiations of 200–1000 W/m2). The proposed method plots Ipv versus Vpv curves and compares the obtained curves with ideal values to obtain the absolute error curve. The proposed method is useful for extracting the parameters of a PV cell because it can handle nonlinear functions. It is compared with manual data and validated on three different types of PV modules: multi-crystalline (SHELL S36), mono-crystalline (SHELL SP70) and thin-film (SHELL ST40). The data derived from these calculations are beneficial for deciding on a suitable computational technique to build accurate and efficient simulators for a PV system.
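The extracted parameters feed the implicit single-diode model of a PV module. As a hedged sketch of evaluating that model for an I-V curve, the following uses illustrative parameter values (Iph, I0, Rs, Rsh, ideality factor, cell count) and bisection as a simple stand-in solver, not the AIS or GA methods of the paper:

```python
import math

def pv_current(V, Iph=5.0, I0=1e-9, Rs=0.2, Rsh=300.0, n=1.3, Ns=36, Vt=0.0258):
    """Single-diode PV module model (illustrative parameters; Ns is the
    number of series cells, Vt the per-cell thermal voltage). Solves
        I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    for I by bisection, exploiting that the residual is decreasing in I."""
    a = n * Ns * Vt
    def g(I):
        return Iph - I0 * (math.exp((V + I * Rs) / a) - 1) - (V + I * Rs) / Rsh - I
    lo, hi = -1.0, Iph + 1.0
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2
```

Sweeping V produces the Ipv versus Vpv curve mentioned in the abstract; at V = 0 the solution is close to the photocurrent Iph, and the current falls as the voltage rises.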

R. Sarjila, K. Ravi, J. Belwin Edward, Avagaddi Prasad, K. Sathish Kumar
Word Sense Disambiguation in Bengali: An Auto-updated Learning Set Increases the Accuracy of the Result

This work is implemented using the Naïve Bayes probabilistic model, in two phases. First, the algorithm was tested on a dataset from the Bengali corpus developed in the TDIL (Technology Development for the Indian Languages) project of the Govt. of India; in this first execution, the accuracy of the result was nearly 80 %. In addition to the disambiguation task, the sense-evaluated sentences were inserted into the related learning sets to take part in subsequent executions. In the second phase, after a small manipulation of the learning sets, a new input dataset was tested using the same algorithm, which now produced a better result of around 83 %. The results were verified with the help of a standard Bengali dictionary.
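A minimal sketch of a Naïve Bayes sense classifier with an auto-updated learning set, using toy English data as a stand-in for the Bengali corpus; add-one smoothing and self-training on the predicted sense are assumptions about the general approach, not the paper's exact procedure:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    """Naive Bayes word-sense classifier whose learning set grows with
    its own tagged outputs (the 'auto-updated learning set')."""
    def __init__(self):
        self.sense_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def train(self, sentence, sense):
        self.sense_counts[sense] += 1
        for w in sentence.split():
            self.word_counts[sense][w] += 1
            self.vocab.add(w)

    def classify(self, sentence):
        total = sum(self.sense_counts.values())
        best, best_lp = None, float("-inf")
        for sense, c in self.sense_counts.items():
            lp = math.log(c / total)                 # log prior
            denom = sum(self.word_counts[sense].values()) + len(self.vocab)
            for w in sentence.split():
                # add-one smoothing over the shared vocabulary
                lp += math.log((self.word_counts[sense][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = sense, lp
        return best

    def classify_and_learn(self, sentence):
        sense = self.classify(sentence)
        self.train(sentence, sense)    # feed the result back into the learning set
        return sense
```

Each call to `classify_and_learn` both disambiguates a sentence and enlarges the learning set for later executions, mirroring the two-phase setup in the abstract.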

Alok Ranjan Pal, Diganta Saha
Efficient Methods to Generate Inverted Indexes for IR

Information retrieval systems developed during the last two to three decades have shaped today's web search engines, which have become key players in information seeking. The growing importance of search engines in information retrieval has compelled search-engine companies to do their best to improve search results, so measuring search efficiency has become an important issue. Information retrieval comprises the activities that enable us to extract the required documents from a document repository; today it handles numerous textual and geographical queries having both textual and spatial components. The textual queries of any IR system are resolved using indexes and an inversion list. This paper concentrates on the indexing part and the analysis of the associated algorithms. Several structures exist for implementing these indexes: hash tables, B-trees, sorted arrays and wavelet trees, to name a few. Choosing an efficient data structure involves several deciding parameters; the important ones considered in this paper are index creation time, storage required by the inverted file, and retrieval time. The paper provides a detailed comparative study of different data structures for the implementation of inverted files.
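A minimal sketch of the inverted file discussed above, mapping each term to a sorted postings list of document ids and resolving a conjunctive textual query by intersecting postings (a dictionary of sets stands in for the data structures the paper compares):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to a sorted postings list of document ids."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return {t: sorted(ids) for t, ids in index.items()}

def conjunctive_query(index, terms):
    """Documents containing all query terms: intersect the postings lists."""
    postings = [set(index.get(t, ())) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []
```

The same postings lists could equally be stored in a B-tree, sorted array or wavelet tree; the paper's comparison concerns exactly that choice.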

Arun Kumar Yadav, Divakar Yadav, Deepak Rai
Intelligent Mail Box

In the 21st century, email is one of the most effective means of written communication due to its easy and quick access. But nowadays, with each individual receiving a large number of emails, mostly promotional and unnecessary, organizing an individual's inbox is a tedious task. In the last decade, researchers and the scientific community have contributed a lot to inbox organization by classifying emails into different categories. In this paper, we propose an intelligent mail box in which email classification is carried out on the basis of labels created by users and needs only a few training mails for future classification, thus providing a more personalized mail box. The proposed system has been tested with various classifiers, viz. Support Vector Machine, Naïve Bayes, etc., and obtained the highest classification accuracy (66–100 %) with Naïve Bayes.

Hitesh Sethi, Ayushi Sirohi, Manish K. Thakur
Learning Based No Reference Algorithm for Dropped Frame Identification in Uncompressed Video

Video authentication and the detection of tampering in a video have long been major challenges in digital video forensics. This paper presents the detection of one form of temporal tampering (frame drop) under the no-reference mode of tampering detection. In the spirit of the scheme presented by Upadhyay and Singh, this paper extends the feature set used to train an SVM classifier, which classifies the frames of a given video as tampered or non-tampered, i.e., detects frame drops; the video itself is then classified as tampered or non-tampered accordingly. The results obtained with the enhanced features show a significant improvement in classification accuracy.

Manish K. Thakur, Vikas Saxena, J. P. Gupta
Multi-view Video Summarization

Video summarization is an important video content service that gives a short and condensed representation of the whole video content, and it also eases the browsing, mining, and storage of the original videos. Multi-view video summaries present only the most vital events, with more detail than less salient ones; this allows the user to obtain the important information from the different perspectives of the multi-view videos without watching them in full. In this paper, we focus on a series of approaches for summarizing video content into a compact and succinct visual summary that encapsulates the key components of the video. The main advantage is that summarization can turn hours of video into a short summary that a viewer can watch in just a few seconds.

Chintureena Thingom, Guydeuk Yeon
Comprehensive Review of Video Enhancement Algorithms for Low Lighting Conditions

Video enhancement becomes a very challenging problem under low lighting conditions. Numerous techniques have been proposed for enhancing the visual quality of videos and images captured in different environments, especially at night and in dark, foggy or rainy conditions. This paper briefly reviews existing video enhancement algorithms for various lighting conditions, such as de-hazing based enhancement, a novel integrated algorithm, gradient-based fusion, and the dark channel prior, and also presents the advantages and disadvantages of these algorithms.

G. R. Vishalakshi, M. T. Gopalakrishna, M. C. Hanumantharaju
A Detailed Review of Color Image Contrast Enhancement Techniques for Real Time Applications

Real-time video surveillance, medical imaging, industrial automation and oceanography applications use image enhancement as a preprocessing step for image analysis. Contrast enhancement is one method of enhancing low-contrast images obtained under poor lighting and fog conditions. In this paper, various variants of histogram equalisation, homomorphic filtering and dark channel prior techniques used for image enhancement are reviewed. Real-time processing of images is implemented on a Field Programmable Gate Array (FPGA) to increase computing speed. The paper further reviews contrast enhancement techniques implemented on FPGAs in terms of device utilization and processing time.

P. Agalya, M. C. Hanumantharaju, M. T. Gopalakrishna
A Conceptual Model for Acquisition of Morphological Features of Highly Agglutinative Tamil Language Using Unsupervised Approach

Building computer systems powerful enough to understand human or natural languages and capture information about various domains demands appropriately architected morphological models at their core. Morphological analysis is a crucial step that plays a predominant role in natural language processing. It includes the study of the structure, formation and functional units of words, and the identification of morphemes, in order to formulate the rules of the language. Since natural language processing applications such as machine translation, speech recognition and information retrieval rely on large text corpora, analysis based on linguistic expertise alone is not viable. To overcome this issue, morphological analysis in an unsupervised setting is adopted: an alternative procedure that works independently to uncover the morphological structure of a language. This paper gives a theoretical model for analysing the morphological structure of the highly agglutinative Tamil language in an unsupervised way.

Ananthi Sheshasaayee, V. R. Angela Deepa
A Fuzzy Approach for the Maintainability Assessment of Aspect Oriented Systems

Maintainability is considered one of the vital qualities that software should possess according to ISO standards. Software Maintainability Assessment (SMA) of aspect-oriented software has been a focus of research for some time, with statistical and machine-learning approaches commonly used. Fuzzy logic offers an alternative approach to the SMA of aspect-oriented systems. It has emerged as an important tool in a variety of applications ranging from control-system engineering to the design of automated intelligence systems, and it can deal with uncertainty and multivalued data without relying on historical data. This characteristic of data-free model building enhances the prospects of using fuzzy logic for software metrics. The paper presents a fuzzy-logic based algorithm for the SMA of aspect-oriented systems.

Ananthi Sheshasaayee, Roby Jose
Split and Merge Multi-scale Retinex Enhancement of Magnetic Resonance Medical Images

Image contrast enhancement and the highlighting of prominent details with edge preservation are fundamental requirements of medical image enhancement research. This paper presents Multi-Scale Retinex (MSR) enhancement of Magnetic Resonance (MR) medical images using a split-and-merge technique. The main limitations of retinex-based image enhancement schemes are computational complexity, halo artifacts, gray-world violation and over-enhancement. Although numerous extensions of the retinex method have been developed, most are computationally complex, because image details are improved at the cost of increased computation. The proposed method is computationally efficient, since the input image is split into blocks of size 8 × 8 and a gray-level Fast Fourier Transform (FFT) based retinex algorithm is applied to improve the quality of each sub-image. The reconstructed image is produced by merging the enhanced blocks back to the original image resolution. The scheme is validated on MR medical images, with the retinex parameters adjusted to improve contrast, detail and overall image quality. The experimental results presented confirm that the proposed method outperforms existing methods.
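A hedged sketch of the underlying retinex computation: each single-scale output is log(image) minus log(Gaussian-smoothed image), and the scales are averaged with equal weights. The sigma values, edge padding and log1p form are illustrative assumptions, and the paper's 8 × 8 FFT split is not reproduced here:

```python
import numpy as np

def single_scale_retinex(img, sigma):
    """log1p(image) - log1p(Gaussian blur of image), with the separable
    Gaussian built directly in NumPy to stay self-contained."""
    k = int(3 * sigma) * 2 + 1                 # odd kernel covering ~3 sigma
    x = np.arange(k) - k // 2
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # separable convolution: rows first, then columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode="valid"), 1, padded)
    blur = np.apply_along_axis(lambda c: np.convolve(c, g, mode="valid"), 0, rows)
    return np.log1p(img) - np.log1p(blur)

def multi_scale_retinex(img, sigmas=(2, 8, 32)):
    """Equal-weight average of single-scale retinex outputs."""
    return sum(single_scale_retinex(img, s) for s in sigmas) / len(sigmas)
```

On a synthetic image with a bright square on a dark background, the bright region keeps a higher response than the background after the dynamic-range compression.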

Sreenivasa Setty, N. K. Srinath, M. C. Hanumantharaju
Review on Secured Medical Image Processing

Medical image processing techniques require continuous improvement to the quality of service in the health-care industry. In the real world, huge amounts of information must be processed and transmitted in digital form. Before transmission, an image is compressed to save bandwidth; this is achieved by representing image or video coefficients in an alternative domain. Processing images in the transform domain takes comparatively less computation by avoiding inverse and re-transform operations; the fundamental idea is to convert spatial-domain operations into their transform-domain equivalents. This paper reviews work in this area of medical image processing.

B. Santhosh, K. Viswanath
Identification of Stages of Malignant Tumor in MRM Images Using Level Set Algorithm and Textural Analysis

An efficient computer-aided detection and classification system has been developed to detect, classify and identify the stages of malignant tumors. The preprocessing stage, involving image enhancement and noise removal, is carried out using histogram equalization and morphological operators. As a novel initiative, the level set method is used for segmentation, attaining high accuracy at low computation time. The feature extraction stage extracts both wavelet and textural features; wavelet analysis is well suited to the task as it enables analysis of the images at various resolution levels. A neural network is first trained on the wavelet features to classify a tumor as normal, benign or malignant. Textural features are then extracted from the detected malignant tumors, and of these, the energy values are used to train a second neural network that identifies the stage of the malignant tumor. The performance of the system has been tested on data from 24 patients obtained from a hospital. An overall accuracy of 97 % has been achieved on MIAS database images and 85 % on the patients' dataset.

M. Varalatchoumy, M. Ravishankar
Reviews Based Mobile Application Development Model (RBMAD)

Mobile development is an area of heavy activity in the software industry. The exponential growth in smartphone users has opened doors of opportunity for e-commerce, m-commerce, banking, health and almost all walks of business, and software engineering models have shifted toward device-centric models. Open-source technologies like Android, and the large number of handset manufacturers under the umbrella of the OHA (Open Handset Alliance), have changed the way the software industry looks at mobile application development. The industry suffers from the lack of a model and hence depends on best practices. This paper presents a model in which an application is first developed using one of the best practices, and review-based modelling is then applied to enhance the overall development of the mobile application. The model assumes even more importance given additional concerns common in mobile applications, such as code efficiency, interaction with device resources, low reusability and lack of portability.

Manish Kumar Thakur, K. S. Prasanna Kumar
Three Phase Security System for Vehicles Using Face Recognition on Distributed Systems

Automobile theft is a continuing problem and a great challenge that needs to be solved. Traditional security systems need many sensors and are costly, so a modern security system using biometric verification technology in vehicles needs to be implemented. In the proposed method, a GSM module controlled by a Renesas microcontroller is used to find the location of the vehicle. The vehicle ignition turns on only when the person accessing the vehicle is authorized by the three-phase security system, and an SMS alert is sent to the authorized person if an unauthorized person who fails the three-phase security system accesses the vehicle. Haar features are used to detect faces, and an AdaBoost classifier combines weak classifiers into a strong classifier to decide whether an image captured from the video contains a face. The Principal Component Analysis method is then used to recognize the detected face. The proposed system provides an inexpensive and efficient security system for automobiles, using face detection and recognition to authenticate the person accessing the vehicle.
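The PCA recognition step can be sketched as eigenfaces with nearest-neighbor matching in the projected space. In this hedged sketch, synthetic vectors stand in for flattened face images, and the component count is an assumed value (the detection stage with Haar features and AdaBoost is not reproduced):

```python
import numpy as np

def pca_fit(faces, n_components=2):
    """Eigenfaces: `faces` is (n_samples, n_pixels); returns the mean
    face and the top principal axes from an SVD of the centred data."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_project(x, mean, axes):
    return axes @ (x - mean)

def nearest_face(probe, gallery, labels, mean, axes):
    """1-NN in eigenface space: label of the closest gallery projection."""
    p = pca_project(probe, mean, axes)
    dists = [np.linalg.norm(pca_project(g, mean, axes) - p) for g in gallery]
    return labels[int(np.argmin(dists))]
```

With two well-separated synthetic "identities", a noisy probe from the first cluster is matched to the correct label.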

J. Rajeshwari, K. Karibasappa, M. T. Gopalakrishna
SLA Based Utility Analysis for Improving QoS in Cloud Computing

A service rented or leased from cloud resource providers follows a systematic procedure for working with the resources and returning them in the same condition; in other words, there are policies a user must adhere to in order to utilize the resources. These policies are declared in an agreement that defines the service-level policies for cloud users, and this agreement acts as a legal document between the user and the resource provider. The most important role of an SLA is to provide quality-assured service to users as stated in the agreement. Negotiating the quality agreement among the participants helps define the Quality of Service requirements of critical resource-based processes. However, the negotiation process is a demanding job for users, particularly when there are numerous SaaS providers in the cloud market. Consequently, this paper proposes a negotiation framework in which a SaaS broker acts as a resource provider for customers, obtaining the required service efficiently while negotiating with multiple providers. The framework enables intelligent mutual negotiation of SLAs between a SaaS broker and multiple providers to achieve the different objectives of the different participants. To maximize revenue and improve customer satisfaction levels for the broker, the paper also proposes the design of strategy-based counter-offer generation techniques.

Ananthi Sheshasaayee, T. A. Swetha Margaret
A Descriptive Study on Resource Provisioning Approaches in Cloud Computing Environment

Cloud computing has become a promising technology in many organizations. A huge number of applications are accessed through the cloud at any time and from anywhere, so provisioning resources at the right time is a challenging task in a cloud computing environment. Cloud consumers utilize resources via virtual machines on a pay-as-you-go basis. For this, cloud providers offer two types of resource provisioning plans, the on-demand plan and the reservation plan; in general, the reservation plan costs less than the on-demand plan. Cloud environments provide different resource provisioning approaches for minimizing total cost, and a good approach should avoid fragmentation of resources, lack of resources, contention for resources, over-provisioning and under-provisioning. This paper focuses on giving an overview of cloud computing and resource provisioning, and a descriptive analysis of various resource provisioning algorithms and techniques.

Ananthi Sheshasaayee, R. Megala
Analyzing the Performance of a Software and IT Growth with Special Reference to India

Effective management is essential for software industries to produce quality software, and the use of software itself has enabled them to manage effectively with reduced human effort. This paper investigates the scenario of software utility in Indian industry through a case study, from which a basic flowchart model for on-demand software is prepared. The paper also studies the number of errors made by a programmer across different projects with different numbers of program lines, and identifies various parameters to rate and analyze the performance of software.

Molla Ramizur Rahman
Stock Price Forecasting Using ANN Method

The ability to predict stock price direction accurately is essential for investors to maximize their wealth. Neural networks, as a highly effective data mining method, have been used in many complex pattern recognition problems, including stock market prediction. However, the existing way of applying neural networks to the dynamic and volatile behavior of stock markets has not produced sufficiently efficient and accurate values. In this paper, we propose methods to predict more accurately, using hidden-layer data processing and decision tree methods, for the case of volatile markets. We also compare our proposed method against a three-layer feed-forward neural network on the accuracy of market direction, and the analysis shows that our way of applying neural networks improves prediction accuracy.

Thangjam Ravichandra, Chintureena Thingom
Recognition of Handwritten English Text Using Energy Minimisation

In handwritten character recognition, one of the most challenging tasks is segmentation, mainly due to challenges such as skewed text lines, overlapping characters and connected components. This paper proposes a character recognition method for handwritten English documents. The text lines are segmented based on an information energy calculated for every pixel in the scanned document, and the characters are recognized using an Artificial Neural Network (ANN). The recognition accuracy is almost 92 %. The proposed method can be further extended to other languages and improved in accuracy.

Kanchan Keisham, Sunanda Dixit
A Novel Codification Technique for Tacit Knowledge in Software Industry Using Datamining Techniques

Tacit knowledge is an important resource that comes from experience and insight and is not available in any pre-recorded form, yet it contributes strongly to the success of decision-making procedures. This paper summarizes various codification methodologies that convert tacit knowledge into explicit knowledge in an IT company, where the latter plays a key role in decision making. It also brings out the lacunae, or technical gaps, of these methodologies and proposes a novel method to capture tacit technical knowledge, so that knowledge transfer in case of staff relocation does not affect the growth of a small-scale IT industry. The challenges of the software development life cycle are clearly captured by product-management tools such as JIRA. We apply text mining techniques to these reports to gather the tacit knowledge of the technical person assigned to the project and, in addition, extract consolidated generic knowledge from our scrum reports. Together, these capture most of the tacit knowledge needed in the coding and maintenance phases of a project. The mining results are promising, and future work is to include sentiment analysis.
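One common text-mining step for this kind of task is TF-IDF term scoring, which surfaces project-specific technical terms (candidate tacit knowledge) above common words. The sketch below is a hedged illustration of that step, not the authors' pipeline; the sample "issue comments" are invented.

```python
import math
from collections import Counter

# Hypothetical issue-tracker comments standing in for JIRA report text.
docs = [
    "restart the cache service before deploying the payment module",
    "payment module fails when cache is stale restart fixes it",
    "update the user guide before release",
]
tokenized = [d.split() for d in docs]

# Document frequency: in how many comments each term appears.
df = Counter(term for doc in tokenized for term in set(doc))
N = len(docs)

def tfidf(doc_tokens):
    """Score each term by term frequency times inverse document frequency."""
    tf = Counter(doc_tokens)
    return {t: (count / len(doc_tokens)) * math.log(N / df[t])
            for t, count in tf.items()}

# Top-scoring terms of the first comment: words unique to it rank highest.
top = sorted(tfidf(tokenized[0]).items(), key=lambda kv: -kv[1])[:3]
```

Terms appearing in every comment get an IDF of zero and drop out, so the surviving high scorers are the project-specific vocabulary a codification process would want to capture.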

Jeffrey Bakthakumar, Murali Manohar, R. J. Anandhi
An Analysis on the Effect of Malicious Nodes on the Performance of LAR Protocol in MANETs

Mobile Ad Hoc Networks (MANETs) are particularly vulnerable to security problems because of inherent characteristics such as dynamic network topology, unavailability of fixed infrastructure, lack of centralized control and high mobility of the nodes. One of the major security problems in MANETs is node misbehavior. Misbehaving nodes can advertise themselves as having a short route to the destination in order to attract data for transmission, and misbehaving nodes participating in a route may stop forwarding data at some point, resulting in loss of packets. The main objective of our work is to analyze the effect of misbehaving nodes on the performance of Location Aided Routing (LAR) in MANETs. The work analyzes the performance of the LAR protocol with parameters such as throughput, packet delivery ratio, average delay and routing overhead using the NS2 simulator. The simulation results clearly show that the routing performance of LAR decreases in the presence of misbehaving nodes; thus there is a need for an authentication mechanism for secure data transmission using the LAR protocol.
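Two of the evaluation metrics named above can be sketched directly. In a real study the sent/received counts would be parsed from NS2 trace files; the figures below are hypothetical, chosen only to show how misbehaving nodes depress the metrics.

```python
def delivery_metrics(sent, received, sim_time_s, bytes_received):
    """Packet delivery ratio and throughput (kbit/s) for one simulation run."""
    pdr = received / sent if sent else 0.0
    throughput_kbps = bytes_received * 8 / sim_time_s / 1000
    return pdr, throughput_kbps

# Hypothetical runs: a clean network vs. one with data-dropping nodes,
# assuming 512-byte packets and a 100 s simulation.
clean = delivery_metrics(sent=1000, received=960,
                         sim_time_s=100, bytes_received=960 * 512)
attacked = delivery_metrics(sent=1000, received=610,
                            sim_time_s=100, bytes_received=610 * 512)
```

Comparing the two runs reproduces the paper's qualitative finding: delivery ratio and throughput both fall when misbehaving nodes sit on the route.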

R. Suma, B. G. Premasudha, V. Ravi Ram
Qualitative Performance Analysis of Punjabi and Hindi Websites of Academic Domain: A Case Study

The dependency on websites has increased manifold, and more and more websites are being developed in local languages. However, most users feel that websites developed in their local languages are not reliable or up to date. The quality of local-language websites of academic institutes has therefore been evaluated. There are 49 academic institutes in India whose websites are in local languages. Using a stratified sampling technique, 2 Punjabi websites (66.6 %) and 20 Hindi websites (40.8 %) were selected for the case study. The selected websites were tested by implementing a web quality model. According to the testing, 12 websites (54.5 %) score less than 50 %, 7 (31.8 %) score between 50 and 60 %, while only 3 (13.6 %) score more than 60 %.
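A quality model of this kind typically aggregates per-attribute test results into a single percentage, which is then bucketed as above. The sketch below shows one plausible weighted aggregation; the attribute names and weights are assumptions for illustration, not the authors' model.

```python
def quality_score(results, weights):
    """Weighted percentage score from attribute results in [0, 1]."""
    total_weight = sum(weights.values())
    return 100 * sum(weights[a] * results[a] for a in results) / total_weight

# Hypothetical test results for one website.
site = {"navigability": 0.8, "content_currency": 0.4, "accessibility": 0.6}
weights = {"navigability": 2, "content_currency": 3, "accessibility": 1}
score = quality_score(site, weights)  # lands in the 50-60 % band
```

Bucketing such scores across the sample yields the kind of distribution the case study reports.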

Rupinder Pal Kaur, Vishal Goyal
A Robust, Privacy Preserving Secret Data Concealing Embedding Technique into Encrypted Video Stream

With the tremendously increasing demand for multimedia communication, it is essential to safeguard video data from various attacks during its transmission over the internet. Video encryption ensures the privacy and security of the video content, and secret information is concealed in these encrypted videos for authentication or to identify tampering, if any. This paper puts forward an efficient and robust methodology for embedding secret information directly into encrypted videos, which guarantees the confidentiality of the video content. The input video is first compressed using the popular H.264/AVC compression technique and, by analyzing the properties of H.264 encoding, the three portions containing the sensitive data are encrypted. A standard RC4 stream cipher is employed for encrypting the codewords of motion vector differences, intra prediction modes and residual coefficients. The secret data can then be hidden by the data hider using a unique codeword substitution technique without being aware of the video contents, and the hidden data can be directly extracted by the intended authenticated receiver even without decrypting the video sequence. To validate the feasibility and performance of the proposed work, the result metrics PSNR, SSIM and VQM have been estimated.
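The RC4 cipher named above is standard and can be sketched directly: a key-scheduling pass permutes a 256-byte state, then the keystream it generates is XORed with the data. The key and plaintext here are textbook examples; in the paper's scheme the input would be the selected H.264 codeword bits.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: KSA then PRGA, keystream XORed with the data."""
    # Key-scheduling algorithm (KSA): key-dependent permutation of 0..255.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): one keystream byte per input byte.
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"Key", b"Plaintext")
```

Because encryption and decryption are the same XOR operation, applying `rc4` twice with the same key recovers the plaintext, which is what lets the receiver undo the encryption of the protected codewords.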

B. Santhosh, N. M. Meghana
Trusted Execution Environment for Data Protection in Cloud

Cloud computing has become a major part of organizations throughout the world because of the services offered, such as IaaS, SaaS and PaaS, and the wide availability of these services. In spite of the benefits of cloud services, organizations must consider how security and privacy are ensured, as users often store sensitive information with cloud storage providers that may be untrusted. In this paper, we discuss a novel approach in which data protection is included as a new service, which reduces the per-application development cost of providing a secure environment and also provides secure access to data stored in public clouds.

Podili V. S. Srinivas, Ch. Pravallika, K. Srujan Raju
Erratum to: Information Systems Design and Intelligent Applications
Suresh Chandra Satapathy, Jyotsna Kumar Mandal, Siba K. Udgata, Vikrant Bhateja
Backmatter
Metadata
Title
Information Systems Design and Intelligent Applications
Edited by
Suresh Chandra Satapathy
Jyotsna Kumar Mandal
Siba K. Udgata
Vikrant Bhateja
Copyright year
2016
Publisher
Springer India
Electronic ISBN
978-81-322-2757-1
Print ISBN
978-81-322-2756-4
DOI
https://doi.org/10.1007/978-81-322-2757-1