
2013 | Book

Advances in Computing and Information Technology

Proceedings of the Second International Conference on Advances in Computing and Information Technology (ACITY) July 13-15, 2012, Chennai, India - Volume 2

Edited by: Natarajan Meghanathan, Dhinaharan Nagamalai, Nabendu Chaki

Publisher: Springer Berlin Heidelberg

Book series: Advances in Intelligent Systems and Computing


About this Book

The International Conference on Advances in Computing and Information Technology (ACITY 2012) provides an excellent international forum for both academics and professionals to share knowledge and results in the theory, methodology and applications of Computer Science and Information Technology. The Second International Conference on Advances in Computing and Information Technology (ACITY 2012), held in Chennai, India, during July 13-15, 2012, covered a number of topics in all major fields of Computer Science and Information Technology, including: networking and communications, network security and applications, web and internet computing, ubiquitous computing, algorithms, bioinformatics, digital image processing and pattern recognition, artificial intelligence, and soft computing and applications. Following a rigorous review process, a number of high-quality papers, presenting not only innovative ideas but also a well-founded evaluation and strong argumentation of the same, were selected and collected in the present proceedings, which is composed of three different volumes.

Table of Contents

Frontmatter
Analysis, Control and Synchronization of Hyperchaotic Zhou System via Adaptive Control

This paper investigates the analysis, control and synchronization of the hyperchaotic Zhou system (2009) via adaptive control. First, an adaptive control scheme is derived to stabilize the hyperchaotic Zhou system with unknown parameters to its unstable equilibrium at the origin. Then an adaptive synchronization scheme is derived to achieve global chaos synchronization of the identical hyperchaotic Zhou systems with unknown parameters. The results derived for adaptive stabilization and synchronization for the hyperchaotic system are established using the Lyapunov stability theory. Numerical simulations are shown to demonstrate the effectiveness of the adaptive control and synchronization schemes derived in this paper.

Sundarapandian Vaidyanathan
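
To illustrate the Lyapunov-based construction on which such adaptive schemes rest (a generic single-parameter sketch, not the paper's actual four-dimensional Zhou dynamics), consider error dynamics with one unknown parameter θ and estimate θ̂:

\dot{e} = -k\,e + (\theta - \hat{\theta})\,\varphi(x), \qquad
V(e,\hat{\theta}) = \tfrac{1}{2}e^{2} + \tfrac{1}{2}(\theta - \hat{\theta})^{2}

\dot{V} = -k\,e^{2} + (\theta - \hat{\theta})\bigl(e\,\varphi(x) - \dot{\hat{\theta}}\bigr)
\quad\Longrightarrow\quad
\dot{\hat{\theta}} = e\,\varphi(x) \;\;\text{yields}\;\; \dot{V} = -k\,e^{2} \le 0

so the error converges while the parameter estimate stays bounded; the paper applies the same argument with a vector of unknown parameters, for both stabilization and global synchronization.
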
Secured Ontology Matching Using Graph Matching

Today’s market evolution and the high volatility of business requirements put an increasing emphasis on the ability of systems to accommodate the changes required by new organizational needs while maintaining the satisfiability of security objectives. This is all the more true in the case of collaboration and interoperability between different organizations and thus between their information systems. Ontology mapping has been used for interoperability, and several mapping systems have evolved to support it. Usual solutions do not take security into account; that is, almost all systems map ontologies that are unsecured. We have developed a system for mapping secured ontologies using the graph similarity concept. Here we give no importance to the strings that describe ontology concepts, properties, etc., because these strings may be encrypted in the secured ontology. Instead we use the pure graphical structure to determine the mapping between the concepts of two given secured ontologies. The paper also reports the accuracy of the experiments in tabular form in terms of precision, recall and F-measure.

K. Manjula Shenoy, K. C. Shet, U. Dinesh Acharya
Image Restoration Using Knowledge from the Image

There are various real-world situations where a portion of an image is lost or damaged and needs restoration. Prior knowledge of the image may not be available for restoring it, which demands deriving knowledge from the image itself. Restoring the lost portions of an image based on knowledge obtained from the area surrounding the lost region is called Digital Image Inpainting. The information content in the lost area could be structural, like edges, or textural, like repeating patterns. This knowledge is derived from the boundary area surrounding the lost region, and based on it the lost area is restored by looking for similar information in the same image. Experiments have been performed on various images, and it is observed that the algorithm restores the image in a visually plausible way.

S. Padmavathi, K. P. Soman, R. Aarthi
Harmony-Based Feature Weighting to Improve the Nearest Neighbor Classification

This paper introduces the use of Harmony Search with a novel fitness function in order to assign higher weights to informative features while noisy, irrelevant features are given low weights. The fitness function is based on the Area Under the receiver operating characteristics Curve (AUC). The aim of this feature weighting is to improve the performance of the k-NN algorithm. Experimental results show that the proposed method can improve the classification performance of the k-NN algorithm in comparison with other important feature-weighting methods such as Mutual Information, Genetic Algorithm, Tabu Search and chi-squared (χ²). Furthermore, on synthetic data sets, this method is able to allocate very low weights to the noisy, irrelevant features, which may thus be considered as eliminated from the data set.

Ali Adeli, Mehrnoosh Sinaee, M. Javad Zomorodian, Ali Hamzeh
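
As a hedged illustration of the fitness evaluation described above (the Harmony Search loop itself is omitted; scikit-learn, binary labels and all identifiers are assumptions for the sketch), a candidate weight vector can be scored by the AUC of a weighted k-NN classifier:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def fitness(weights, X, y, k=5):
    # Scale each column by its candidate weight, then score weighted k-NN by AUC.
    Xw = X * weights                       # one weight per feature (binary y assumed)
    knn = KNeighborsClassifier(n_neighbors=k)
    proba = cross_val_predict(knn, Xw, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, proba)         # Harmony Search maximizes this value
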
Effectiveness of Different Partition Based Clustering Algorithms for Estimation of Missing Values in Microarray Gene Expression Data

Microarray experiments normally produce data sets with multiple missing expression values, due to various experimental problems. Unfortunately, many algorithms for gene expression analysis require a complete matrix of gene expression values as input. Therefore, effective missing value estimation methods are needed to minimize the effect of incomplete data during analysis of gene expression data using these algorithms. In this paper, missing values in different microarray data sets are estimated using different partition-based clustering algorithms to emphasize the fact that clustering-based methods are also useful tools for the prediction of missing values, although clustering approaches have not yet been highlighted for predicting missing values in gene expression data. The estimation accuracy of the different clustering methods is compared with the widely used KNNimpute and SKNNimpute methods on various microarray data sets with different rates of missing entries. The experimental results show the effectiveness of clustering-based methods compared to other existing methods in terms of Root Mean Square Error.

Shilpi Bose, Chandra Das, Abirlal Chakraborty, Samiran Chattopadhyay
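
A minimal sketch of the KNNimpute baseline mentioned above (assuming a genes-by-samples numpy array with NaN for missing entries and enough neighbours observed at those positions; names are illustrative):

import numpy as np

def knn_impute(data, k=10):
    filled = data.copy()
    for i, row in enumerate(data):
        miss = np.isnan(row)
        if not miss.any():
            continue
        # Rank other genes by distance over the columns both rows observe.
        dists = []
        for j, other in enumerate(data):
            common = ~miss & ~np.isnan(other)
            if j != i and common.any() and not np.isnan(other[miss]).any():
                d = np.sqrt(np.mean((row[common] - other[common]) ** 2))
                dists.append((d, other))
        dists.sort(key=lambda t: t[0])
        neighbours = np.array([o for _, o in dists[:k]])
        filled[i, miss] = neighbours[:, miss].mean(axis=0)  # average the k nearest
    return filled
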
MACREE – A Modern Approach for Classification and Recognition of Earthquakes and Explosions

Though many systems are available for discrimination between earthquakes and explosions, ours introduces new advances and some rudimentary results of our ongoing research project. To discriminate between earthquakes and explosions, temporal and spectral features are extracted from seismic waves; in addition, some seismological parameters (such as epicenter depth, location and magnitude) are crucial for rapid and correct recognition of event sources (earthquakes or explosions). Seismological parameters are used as a first step to screen out obvious earthquake events. Fourier transforms (FFT), chirp-Z transforms and wavelet transforms have been applied, and some prominent features have been acquired from the present experimental dataset. In some experiments, wavelet features combined with support vector classification (SVC) have reached a very high correct recognition rate (>90%). The proposed approach can be used in evolving scenarios.

K. Vijay Krishnan, S. Viginesh, G. Vijayraghavan
Building Concept System from the Perspective of Development of the Concept

A concept system with rich content is the key to improving the performance of knowledge-based artificial intelligence systems. A developed concept system with a sufficient number of concepts, rich in semantic associations, that can meet multi-tasking needs is one of the major challenges of knowledge engineering; it is also the fundamental goal of the conceptualization of knowledge. In this paper, for the study of natural language processing, a framework for building a concept system is proposed from the perspective of the development of the concept.

Cheng Xian-yi, Shen Xue-hua, Shi Quan
E-Government Adoption: A Conceptual Demarcation

Information and Communication Technologies (ICT) are being increasingly used by various governments to deliver their services at locations convenient to their citizens. E-government is a kind of governmental administration based on ICT services. The essence of e-government is using information technology to break the boundaries of administrative organizations and build up a virtual electronic government. E-government initiatives are common in most countries as they promise a transparent, citizen-centric government and reduced operational cost. Emerging with e-government, theories and practices of public administration have stepped into a new knowledge era. E-government presents a tremendous impetus to move forward with higher-quality, cost-effective government services and a better relationship between citizens and government. This paper discusses the different issues, challenges and adoption factors for e-government implementation and presents a conceptual demarcation of these factors.

Rahmath Safeena, Abdullah Kammani
Expert Webest Tool: A Web Based Application, Estimate the Cost and Risk of Software Project Using Function Points

There are several areas of software engineering in which function point analysis (FPA) can be used, such as project planning, project construction and software implementation. In software development, the accuracy and efficiency of the cost estimation methodology for a web-based application is very important. The proposed web-based application (the Expert Webest tool) produces accurate cost and risk estimates throughout the software development cycle to determine the feasibility of a software project. The cost of a software project depends on the project size, project type, cost adjustment factor, cost-driven factors, and the nature and characteristics of the project. Software estimation needs to predict software costs and software risk early in the software life cycle.

In this paper we propose the Expert Webest tool in Java; this tool is used for two different purposes: first, to estimate the cost of the software, and secondly, to estimate the risk in the software. Most software projects fail due to budget overruns, delays in the delivery of the software, and so on. Function point analysis is a well-established method to estimate the size of software projects. It is a measure of software size that uses logical functional terms that business owners and users more readily understand.

The management of risks is a central issue in the planning and management of any venture. In the field of software, risk management is a critical discipline. The process of risk management embodies the identification, analysis, planning, tracking, controlling and communication of risk. It gives us a structured mechanism to provide visibility into threats to project success. Risk management is a discipline for living with the possibility that future events may cause adverse effects; in part, it means reducing uncertainty. The proposed tool indicates and estimates risk using risk exposure, helps the management team estimate cost and risk within a planned budget, and provides a fundamental motivation towards the development of web-based application projects. It finds heuristic risk assessments using cost factors, indicates product and project risk using risk factors, and checks risk management strategies when development time is underestimated.

Ajay Jaiswal, Meena Sharma
A Web-Based Adaptive and Intelligent Tutor by Expert Systems

Today, intelligent and web-based e-learning is one of the most regarded topics, so researchers are trying to optimize and expand its application in the field of education. The aim of this paper is to develop e-learning software that is customizable, dynamic, intelligent and adaptive, with a pedagogical view for learners in intelligent schools. This system is an integration of adaptive web-based e-learning with expert systems as well. The learning process in this system is as follows. First, the intelligent tutor determines the learning style and characteristics of the learner by a questionnaire and then builds his model. After that, the expert system simulator plans a pre-test and calculates his score. If the learner gets the required score, the concept will be trained. Finally, the learner is evaluated by a post-test. The proposed system can improve education efficiency considerably, as well as decrease the costs and problems of needing an expert tutor. As a result, every-time-and-everywhere (ETEW) learning would be provided via the web in this system. Moreover, learners can enjoy inexpensive remote learning, even at home, in a virtual simulated physical class, so they can learn thousands of courses very simply and quickly.

Hossein Movafegh Ghadirli, Maryam Rastgarpour
Traffic Light Synchronization

Traffic synchronization mechanisms aim at minimizing traffic bottlenecks and controlling traffic flow by means of dynamic adjustments of traffic signal timings. This paper presents a two-level synchronization approach, Local Synchronization and Global Synchronization, for synchronizing traffic flow. Local synchronization, which is standalone, determines the traffic light timings based on the current lane densities. Global synchronization determines the green time of the traffic signals based on the densities of the peer junctions at various levels, along with a set of associated parameters, easing the traffic flow from heavier-density areas to lower-density zones. Hence, by implementing two levels of intelligence, the system serves as a reliable standalone traffic control system even if one of the intelligence platforms goes down.

K. Sankara Nayaki, N. U. Nithin Krishnan, Vivek Joby, R. Sreelakshmi
Partition Sort versus Quick Sort: A Comparative Average Case Analysis with Special Emphasis on Parameterized Complexity

In our previous work we introduced Partition sort and found it to be more robust than Quick sort in the average case. This paper presents a more comprehensive comparative study of the relative performance of these two algorithms, with a focus on parameterized complexity analysis. The empirical results reveal that Partition sort is the better choice for discrete distribution inputs, whereas Quick sort has a clear edge for continuous data sets.

Niraj Kumar Singh, Soubhik Chakraborty
Removal of Inconsistent Training Data in Electronic Nose Using Rough Set

Inconsistency in an electronic nose data set may appear due to noise that originates from various sources such as electrical equipment, measuring instruments and sometimes the process itself. The presence of high noise produces data with conflicting decisions and thus leads to misleading or biased results. The performance of the electronic nose also depends upon the number of relevant, non-redundant features present in the data set; in an electronic nose the features correspond to the sensor array. While deploying an electronic nose for a specific application, it is observed that some of the features (sensor responses) may not be required; rather, only a subset of the sensor array contributes to the decision, which implies that optimization of the sensor array is also important. To obtain a consistent, precise data set, both the conflicting data and the irrelevant features must be removed. Rough set theory, proposed by Z. Pawlak, is capable of dealing with such imprecise, inconsistent data sets, and in this paper a rough-set based algorithm is applied to remove the conflicting training patterns and optimize the sensor array in an electronic nose instrument used for sensing the aroma of black tea samples.

Anil Kumar Bag, Bipan Tudu, Nabarun Bhattacharyya, Rajib Bandyopadhyay
A Decentralised Routing Algorithm for Time Critical Applications Using Wireless Sensor Networks

Wireless Sensor Networks (WSN) are presently creating scenarios of decentralised architectures where application intelligence is distributed among devices. Decentralised architectures are composed of networks that contain sensors and actuators. Actuators base their action on the data gathered by sensors. In this paper, a decentralised routing algorithm called DRATC for time critical applications like fire monitoring and extinguishing is proposed that makes use of the Decentralised Threshold Sensitive routing algorithm. The sensing environment consists of many Monitoring Nodes that sense fire and report the data to the Cluster Head. The Cluster Head directs the Extinguishing Node to extinguish the fire before sending the data to the Base Station.

R. A. Roseline, P. Sumathi
Application Security in Mobile Devices Using Unified Communications

Unified communications is evolving with each passing day, making its presence felt in several new user scenarios. It enables consumers to obtain information through various channels, thus ensuring faster message delivery. Therefore, the time taken to make a decision based on the information, or to act on it, is reduced, resulting in an enterprise becoming more agile and meeting the ever-increasing demands of its customers (internal or external).

Ramakrishna Josyula
Iterative Image Fusion Using Fuzzy Logic with Applications

Image fusion is the process of reducing uncertainty and minimizing redundancy while extracting all the useful information from the source images. The image fusion process is required for different applications like medical imaging, remote sensing, machine vision, biometrics and military applications. In this paper, an iterative fuzzy logic approach is utilized to fuse images from different sensors in order to enhance visualization. The proposed work further explores a comparison between fuzzy-based image fusion and the iterative fuzzy fusion technique, along with quality evaluation indices for image fusion such as image quality index, mutual information measure, root mean square error, peak signal-to-noise ratio, entropy and correlation coefficient. Experimental results obtained from the fusion process prove that the proposed iterative fuzzy fusion can efficiently preserve spectral information while improving the spatial resolution of remote sensing images and medical imaging.

Srinivasa Rao Dammavalam, Seetha Maddala, M. H. M. Krishna Prasad
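
Two of the quality indices listed above, RMSE and PSNR, follow standard definitions and can be computed as in this sketch (8-bit grayscale images as numpy arrays are assumed):

import numpy as np

def rmse(reference, fused):
    return np.sqrt(np.mean((reference.astype(float) - fused.astype(float)) ** 2))

def psnr(reference, fused, peak=255.0):
    e = rmse(reference, fused)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)  # in dB
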
A Review on Clustering of Web Search Result

The overabundance of information on the web makes information retrieval a difficult process. Today’s search engines give too many results, of which only a few are relevant, and a user has to browse through the result pages to get the desired result. Web search result clustering is the clustering of results returned by search engines into meaningful groups. This paper surveys and categorizes the various clustering techniques that have been applied to web search results.

Mansaf Alam, Kishwar Sadaf
Application of Information Theory for Understanding of HLA Gene Regulation in Leukemia

The classical concept of information entropy can be useful in analyzing data pertaining to bioinformatics. In the present work, it has been utilized in understanding the regulation of HLA gene expression by the inducible promoter region binding transcription factors (TFs). Human HLA surface expression data acquired through flow cytometry and corresponding TF expression data acquired through semi-quantitative PCR have been used in this work. The gene regulation phenomenon is considered as an information propagation channel with an amount of distortion. Information entropies computed for the source and receiver, together with the channel equivocation and mutual information, are used to characterize the phenomenon of HLA gene regulation. The results obtained in the current exercise reveal that the state of leukemia alters the role of each TF, which tallies with the current hypotheses about HLA gene regulation in different leukemias. Hence, this work shows the applicability of information theory to understanding HLA gene regulation derived from human data.

Durjoy Majumder
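
The channel quantities named above follow the standard discrete definitions; a minimal sketch (a joint probability table p(x, y) as a numpy array is assumed; the TF/HLA interpretation in the comments is illustrative):

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def channel_measures(pxy):
    # pxy: joint table p(x, y), e.g. TF state (source) vs HLA level (receiver).
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h_x, h_y, h_xy = entropy(px), entropy(py), entropy(pxy.ravel())
    mutual_info = h_x + h_y - h_xy        # I(X;Y), information that gets through
    equivocation = h_xy - h_y             # H(X|Y), the channel's distortion/loss
    return mutual_info, equivocation
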
Improving Security in Digital Images through Watermarking Using Enhanced Histogram Modification

Image transmission plays an important role in recent engineering and scientific research, and images have to be secured while being transmitted. Many approaches are available for the secure transmission of images. This method focuses on the use of invisible watermarking for encryption purposes: an attacker is not able to detect the difference when watermarking is used. Two algorithms are used for creating the watermarked object. The performance evaluation is done by introducing various attacks on the watermarked object, using Matlab 7.11.

Vidya Hari, A. Neela Madheswari
Survey on Co-operative P2P Information Exchange in Large P2P Networks

Peer-to-Peer Information Exchange (PIE) is a technique to improve data availability using the data present within the peers in the network. Network Coding (NC) provides a set of combined data dissemination procedures to improve the transmission efficiency of the network. These two techniques can be merged in such a way that PIE is performed in a much improved way. The immense problem with this technique, however, is that performance degrades gradually as the number of peers in the network increases. Clustering has proved to be an efficient approach for solving scalability issues in large networks. Thus, in this paper, we present a detailed study of PIE, NC and clustering from the viewpoint of their requirements, merits and demerits, and the performance improvement from their combined usage. We also provide a distant view of Co-operative P2P Information Exchange (cPIE), which incorporates clustering into the existing PIE-with-NC technique to eradicate its scalability and performance bottleneck issues.

S. Nithya, K. Palanivel
SNetRS: Social Networking in Recommendation System

With the proliferation of electronic commerce and the knowledge economy environment, both organizations and individuals generate and consume a large amount of online information. With the huge availability of product information on websites, it often becomes difficult for a consumer to locate the item he wants to buy. Recommendation Systems (RS) provide a solution to this, and many websites such as YouTube, eBay and Amazon have come up with their own versions. However, these recommendation systems face issues like lack of data, changing data, changing user preferences and unpredictable items. In this paper we propose a model of recommendation systems in the e-commerce domain which addresses the cold start problem and the change in user preference problem. Our work proposes a novel recommendation system which incorporates user profile parameters obtained from a social networking website. Our proposed model, SNetRS, is a collaborative filtering based algorithm which focuses on user preferences obtained from Facebook. We have taken the domain of books to illustrate our model.

Jyoti Pareek, Maitri Jhaveri, Abbas Kapasi, Malhar Trivedi
Linguistic Conversion of Syntactic to Semantic Web Page

Information is knowledge. In earlier days one had to find a resource person or a resource library to acquire knowledge, but today, just by typing a keyword into a search engine, all kinds of resources are available to us. Due to this advancement, an enormous amount of information is available on the net, so in this era we need a search engine that searches by understanding the semantics of the query given by the user. Such a design is possible only if we provide semantics to ordinary HTML web pages. In this paper we explain the concept of converting an HTML page to an RDFS/OWL page. This technique is incorporated along with natural language technology, as we have to provide the hyponyms and meronyms of the given HTML pages.

G. Nagarajan, K. K. Thyagarajan
Two-Stage Rejection Algorithm to Reduce Search Space for Character Recognition in OCR

Optical Character Recognition converts text in images into a form that the computer can manipulate. The need for faster OCRs stems from the abundance of such text. This paper presents a Two-Stage Rejection Algorithm for reducing the search space of an OCR; the reduction in search space expedites the OCR. Preprocessing operations are applied to the input and features are extracted. These feature vectors are clustered, and the Two-Stage Rejection Algorithm is applied for character recognition. With about the same character recognition rate as other OCRs, an OCR reinforced with the Two-Stage Rejection Algorithm is considerably faster.

Srivardhini Mandipati, Gottumukkala Asisha, S. Preethi Raj, S. Chitrakala
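
The abstract does not spell out the two stages, so the following sketch only illustrates the general idea of pruning a search space by rejecting whole clusters before comparing individual templates (all thresholds and names are hypothetical; clusters[c] is a list of (template, label) pairs):

import numpy as np

def recognise(x, centroids, clusters, stage1_t, stage2_t):
    # Stage 1: reject whole clusters whose centroid is far from the input vector.
    keep = [c for c, mu in enumerate(centroids)
            if np.linalg.norm(x - mu) < stage1_t]
    # Stage 2: within surviving clusters, reject distant templates, keep the best.
    candidates = [(np.linalg.norm(x - t), label)
                  for c in keep for t, label in clusters[c]
                  if np.linalg.norm(x - t) < stage2_t]
    return min(candidates)[1] if candidates else None  # None = rejected outright
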
Transforming Election Polling from Electronic Voting to Cloud as a Software Service in India

Today, election polls in India have been re-engineered using electronic voting systems. The system does come with several drawbacks and huge expenses, including the costs of securing and transporting the machines. In fact, customizing the electronic firmware to booth requirements is still a manual procedure and takes more effort and cost than modifying the same in a software application. In this paper we propose a model to transform electronic polling into a cloud polling system, bringing lower cost, scalability, time savings, easier modification, centralized control, speedy results, voter convenience and a smoothly run poll. We also discuss the issues, and possible solutions, in moving towards the cloud polling system.

P. Vidhya
Area Efficient Architecture for Frequency Domain Multi Channel Digital Down Conversion for Randomly Spaced Signals

A complete frequency-domain digital down conversion architecture is presented in this paper. The conventional complex NCO multiplication is achieved with direct spectrum rotation, and various possibilities for frequency-domain filtering are discussed. An FFT-IFFT based architecture is implemented on a Xilinx Virtex-6 family XC6VLX240T FPGA platform and verified in synthesis. The overlap-and-add method at the output of the IFFT is employed to avoid time-domain overlapping. The results demonstrate a highly area-optimized implementation with respect to conventional DDC architectures. The synthesis results show that the developed core can work up to clock rates of 250 MHz while occupying only 10% of the FPGA slices.

Latha Sahukar, M. Madhavi Latha
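
The spectrum-rotation step that replaces the complex NCO multiplication can be pictured as a circular shift of FFT bins. A minimal numpy sketch of one block (no overlap-and-add, brick-wall filtering only; all parameters are illustrative, not the paper's design):

import numpy as np

def fd_ddc_block(x, carrier_bin, band_bins, decim):
    X = np.fft.fft(x)
    X = np.roll(X, -carrier_bin)               # spectrum rotation: carrier moves to DC
    H = np.zeros_like(X)
    H[:band_bins], H[-band_bins:] = 1.0, 1.0   # crude brick-wall low-pass in frequency
    return np.fft.ifft(X * H)[::decim]         # back to time domain, then decimate
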
New Distances for Improving Progressive Alignment Algorithm

Distance computation between sequences is an important method for comparing biological sequences. In fact, we attribute a value to the sequences in order to estimate a percentage of similarity that can help to extract structural or functional information. Distance computation is also important in the progressive multiple alignment algorithm; indeed, it can influence the branching order of the sequence alignment and hence the final multiple alignment. In this paper, we present new methods for distance computation in order to improve the progressive multiple alignment approach. The main difference between our distances and the existing methods consists in the use of all the sequences of the set in the pairwise comparison. We tested our distances on BALIBASE benchmarks and compared them with other typical distances, obtaining very good results.

Ahmed Mokaddem, Mourad Elloumi
Service-Oriented Architecture (SOA) and Semantic Web Services for Web Portal Integration

Service Oriented Architecture (SOA) is gradually replacing monolithic architecture as the premier design principle for new business applications, with its inherently systematic nature and capability. It requires investment in advanced architectural thinking to define services before any development of services or service consumers can begin. Earlier notable styles of SOA, such as CORBA and XATMI, failed to be adopted in mainstream projects because of the demanding design process with its sense-making activities, even though they have lived on within modern SOA and Web services middleware. Thus this paper aims at incorporating such sense-making design activities into the proposed semantic web service based architecture, tackling the above problem by proposing a service-oriented architecture for web data and service integration. Firstly, it proposes a service-oriented, platform-independent architecture, and secondly, it presents a specific deployment of that architecture for data and service integration on the web using semantic web services implemented with WSMO (the Web Service Modeling Ontology).

T. G. K. Vasista, Mohammed A. T. AlSudairi
Evolutionary Multi-Objective Optimization for Data-Flow Testing of Object-Oriented Programs

This paper presents a Class-Based Elitist Genetic Algorithm (CBEGA) to generate a suite of tests for testing object-oriented programs using evolutionary multi-objective optimization techniques. Evolutionary Algorithms (EAs) are inspired by mechanisms in biological evolution such as reproduction, mutation, recombination and selection; an EA applies these mechanisms repeatedly to a set of individuals called a population to obtain a solution. Multi-objective optimization involves optimizing a number of objectives simultaneously. The objectives considered in this paper for optimization are maximum coverage, minimum execution time and test-suite minimization. The experiment shows that CBEGA gives 92% path coverage, while a simple GA gives 88% path coverage, for a set of Java classes.

P. Maragathavalli, S. Kanmani
The Simulation of the Assembling Production Process

This contribution presents the use of Lanner Group's Witness PWE simulator and its modeling resource, the PF network, for creating a model of an automotive bumper assembly system. The article also describes a solution to the problem of differing attribute types in data loading. The subsequent simulation experiments with the model aim at setting the minimal frequency of the input arrival times for the present orders plan and for two estimated orders plans.

Róbert Pauliček, Tomáš Haluška, Pavel Važan
Parallel Performance of Numerical Algorithms on Multi-core System Using OpenMP

Current microprocessors are concentrating on multiprocessor or multi-core system architectures, and parallel algorithms are being designed for multi-core systems to take full advantage of the multiple processors available. The design of parallel algorithms and the measurement of their performance are major issues in today's multi-core environment. Numerical problems arise in almost every branch of science and require fast solutions; systems of linear equations, for instance, have applications in fusion energy, structural engineering, ocean modeling and method-of-moments formulations. In this paper, parallel algorithms for computing the solution of a system of linear equations and an approximate value of π are presented. The parallel performance of these numerical algorithms on a multi-core system has been analyzed, and the experimental results reveal that the parallel algorithms perform better than their sequential counterparts. We implemented the parallel algorithms using the multithreading features of OpenMP.

Sanjay Kumar Sharma, Kusum Gupta
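
The paper uses OpenMP; as a language-neutral illustration of the same idea (parallel midpoint-rule integration of 4/(1+x²) over [0, 1], which equals π), here is a Python multiprocessing sketch with an assumed worker count of 4:

from multiprocessing import Pool

N = 10_000_000  # number of midpoint-rule rectangles over [0, 1]

def partial_sum(bounds):
    start, stop = bounds
    h = 1.0 / N
    return h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(start, stop))

if __name__ == "__main__":
    chunks = [(w * N // 4, (w + 1) * N // 4) for w in range(4)]  # one chunk per worker
    with Pool(4) as pool:
        print(sum(pool.map(partial_sum, chunks)))  # ~3.141592653589...
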
Efficiency Improvement of a-Si:H/μc-Si:H Tandem Solar Cells by Adding a-SiOx:H Layer between Ag Nano Particles and the Active Layer

In this article, we investigate the positive effect of an a-SiOx:H layer between Ag surface plasmon polaritons and the active layer of a thin-film a-Si:H/μc-Si:H tandem solar cell; the layer isolates the metal nano particles that are responsible for creating surface recombination centres on the top cell. We fabricated four identical a-Si:H/μc-Si:H tandem cells with different a-SiOx:H layer thicknesses (0, 10, 20 and 30 nm) deposited just before the Ag nano particle development, and measured the J-V characteristics of each to find an optimum a-SiOx:H insulating layer thickness. We show that the overall efficiency of a tandem cell with Ag nano particles can be improved by up to 8.65% compared with one having no a-SiOx:H layer. The most promising layer thickness for a small-area tandem cell was around 20 nm, with an overall efficiency of 16.19%. An improvement of 14.6% in short-circuit current density (Jsc) and 2.64% in open-circuit voltage (Voc) was achieved.

Ozcan Abdulkadir, Dhinaharan Nagamalai
Unsupervised Hidden Topic Framework for Extracting Keywords (Synonym, Homonym, Hyponymy and Polysemy) and Topics in Meeting Transcripts

Keywords are the important items in a document that provide efficient access to its content; they can be used to search for information or to decide whether to read a document. This paper mainly focuses on extracting hidden topics from meeting transcripts. Existing systems handle web documents, but the proposed framework focuses on solving the synonym, homonym, hyponymy and polysemy problems in meeting transcripts. The synonym problem means that different words having similar meanings are grouped and a single keyword is extracted. The hyponymy problem means that words denoting subclasses are considered and the superclass keyword is extracted. Homonymy means that a word can have two or more different meanings; for example, "left" might appear in two different contexts: "car left" (past tense of leave) and "left side" (opposite of right). Polysemy means a word with different but related senses; for example, "count" has related meanings: to say numbers in the right order, or to calculate. The hidden topics in meeting transcripts are found using the LDA model. Finally, a MaxEnt classifier is used to extract the keywords and topics, which are then used for information retrieval.

J. I. Sheeba, K. Vivekanandan, G. Sabitha, P. Padmavathi
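
A minimal sketch of the LDA step (using scikit-learn rather than whatever toolkit the authors used; the tiny transcript list, topic count and parameters are purely illustrative):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

transcripts = ["we should finalise the budget next week",
               "the demo raised several budget questions",
               "please review the action items from the demo"]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(transcripts)            # bag-of-words per utterance
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)             # per-document topic mixture
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):         # top words per hidden topic
    print(k, [terms[i] for i in comp.argsort()[-3:]])
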
Comprehensive Study of Estimation of Path Duration in Vehicular Ad Hoc Network

In this paper, we study the significance of path duration and link duration in Vehicular Ad hoc Networks (VANETs). In VANETs, the high mobility of nodes (vehicles) is the main issue of concern: because of this mobility, the connectivity graph changes very frequently, which affects the performance of VANETs. Path duration can therefore be used to predict the behavior of the mobile nodes in the network, and its estimation can be a key factor in improving the performance of the routing protocol. Estimating path duration is a challenging task, as it depends on many parameters including node density, transmission range, number of hops, and the velocity of nodes. This paper provides a comprehensive study of estimating the path duration in VANETs.

R. S. Raw, Vikas Toor, N. Singh
Securing DICOM Format Image Archives Using Improved Chaotic Cat Map Method

In the healthcare industry, patients' medical data play a vital role because the diagnosis of any ailment is done using those data. The high volume of medical data leads to scalability and maintenance issues when using a healthcare provider's onsite picture archiving and communication system (PACS) and network-oriented storage, so a standard is needed for maintaining the medical data and enabling better diagnosis. Since medical data are as sensitive as an individual's personal information, secrecy should be maintained. Secrecy can be achieved by encrypting the data, but as medical data involve images and videos, traditional text-based encryption/decryption schemes are not adequate for providing confidentiality. In this paper, we propose a method for securing DICOM-format medical archives that provides better confidentiality. Our contribution is twofold: (1) the development of a chaotic Arnold cat map for the encryption/decryption of DICOM files, and (2) the application of diffusion to those encrypted files. The secrecy of the medical data maintained by this method is tested on various DICOM-format image archives by studying the following parameters: (i) PSNR, for image quality, and (ii) the key, for security.

M. Arun Fera, Suresh Jaganathan
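
The Arnold cat map scrambles an N×N image with an area-preserving linear map over pixel coordinates; here is a minimal sketch of one confusion round plus a simple XOR diffusion pass (the key, round count and chaining scheme are illustrative, not the paper's exact design):

import numpy as np

def cat_map(img, rounds=1):
    n = img.shape[0]                     # assumes a square n x n image
    out = img.copy()
    for _ in range(rounds):
        scr = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # classic cat map: (x, y) -> (x + y, x + 2y) mod n
                scr[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scr
    return out

def diffuse(img, key=0x3C):
    flat = img.ravel().copy()
    for i in range(1, flat.size):        # chain each pixel to its predecessor
        flat[i] ^= flat[i - 1] ^ key
    return flat.reshape(img.shape)
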
Brain Tumor Segmentation Using Genetic Algorithm and Artificial Neural Network Fuzzy Inference System (ANFIS)

Medical image segmentation plays an important role in treatment planning, identifying tumors, estimating tumor volume, patient follow-up and computer-guided surgery, and various techniques exist for it. This paper presents an image segmentation technique for locating brain tumors (astrocytoma, a type of brain tumor). The proposed work is divided into two phases. In the first phase, an MRI image database (astrocytoma grades I to IV) is collected and preprocessing is done to improve image quality. The second phase includes three steps: feature extraction, feature selection and image segmentation. For feature extraction the proposed work uses the GLCM (Grey Level Co-occurrence Matrix). To improve accuracy, only a subset of features is selected using a genetic algorithm, and based on these features fuzzy rules and membership functions are defined for segmenting brain tumors from the MRI images. ANFIS is an adaptive network which combines the benefits of both fuzzy systems and neural networks. Finally, a comparative analysis is performed between ANFIS, neural network, fuzzy, FCM, K-NN, DWT+SOM, DWT+PCA+KN, texture-combined+ANN and texture-combined+SVM approaches in terms of sensitivity, specificity and accuracy.

Minakshi Sharma, Sourabh Mukharjee
Model Oriented Security Requirements Engineering (MOSRE) Framework for Web Applications

In recent years, tasks such as security requirements elicitation, the specification of security requirements and security requirements validation have become essential for assuring the quality of the resulting software. An increasing part of the communication and sharing of information in our society utilizes Web Applications, and the last two years have seen a significant surge in the number of Web Application specific vulnerabilities disclosed to the public, because the importance of Security Requirements Engineering for Web-based systems is still underestimated. A thorough security requirements analysis is therefore all the more relevant. In this paper, we propose a Model Oriented framework for Security Requirements Engineering (MOSRE) for Web Applications and apply the framework to an E-Voting system. By applying modeling technologies to the requirements phases, the security requirements and domain knowledge can be captured in a well-defined model, improving on the traditional process.

P. Salini, S. Kanmani
Axial T2 Weighted MR Brain Image Retrieval Using Moment Features

Magnetic resonance images play a vital role in identifying various brain-related problems. Some diseases of the brain show abnormalities predominantly at a particular anatomical location, which on MR images appears in a slice at a defined level. This paper proposes a novel technique to locate the desired slice using Rotation, Scaling and Translation (RST) invariant features derived from a ternary-encoded local binary pattern (LBP) image. The LBP image is obtained by labeling each pixel with the code of the texture primitive based on its local neighborhood. The ternary encoding of the LBP identifies the boundary of the uniform region and thus reduces the time needed to calculate moments of different orders. A distance function based on the RST features extracted from the LBP of the query and database images is used to retrieve images similar to the query image.

Abraham Varghese, Reji Rajan Varghese, Kannan Balakrishnan, J. S. Paul
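
A minimal sketch of the basic 8-neighbour LBP labelling that the abstract builds on (boundary pixels are skipped; the ternary encoding and moment computation described in the paper are omitted):

import numpy as np

def lbp(img):
    img = img.astype(int)
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # 8 neighbours, clockwise
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            bits = [img[i + di, j + dj] >= c for di, dj in offs]
            codes[i, j] = sum(b << k for k, b in enumerate(bits))  # 8-bit code
    return codes
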
Securing Data from Black Hole Attack Using AODV Routing for Mobile Ad Hoc Networks

Mobile ad hoc networks have become an important and exciting technology in recent years, and their security has become an important issue. The black hole attack is one of the most severe security attacks in mobile ad hoc networks, blocking the communication of secret data. One form of black hole attack directly attacks the data traffic on the path through a node and intentionally drops, alters or delays it. In another form, the malicious node falsely replies to route requests from the source, claiming to have routes to the destination even when it has none. This paper deals with the prevention of both types of black hole attack and with secure data communication using secret sharing and random multipath routing techniques.

V. Kamatchi, Rajeswari Mukesh, Rajakumar
ISim: A Novel Power Aware Discrete Event Simulation Framework for Dynamic Workload Consolidation and Scheduling in Infrastructure Clouds

Today’s cloud environments are hosted in mega datacenters, and many companies host their private clouds in enterprise datacenters. One of the key challenges for cloud computing datacenters is to maximize the utilization of the Processing Elements (PEs) and minimize the power consumption of the applications hosted on them. In this paper we propose a framework called ISim, wherein a datacenter manager playing the role of a meta-scheduler minimizes power consumption by exploiting the different power-saving states of the processing elements. The power management techniques considered by the ISim framework are dynamic workload consolidation and the use of low-power states on the processing elements. The meta-scheduler aims at maximizing the utilization of the cores by performing dynamic workload consolidation using context switching between the cores inside the chip. The datacenter manager makes use of a prediction algorithm to predict the number of cores that need to be kept in the active state to fulfil the incoming service requests at a given moment, thus maximizing CPU utilization. The simulation results show how power can be conserved from the host level down to the core level in a datacenter through the optimal use of different power-saving states, without compromising performance.

R. Jeyarani, N. Nagaveni, S. Srinivasan, C. Ishwarya
Processing RDF Using Hadoop

The basic inspiration of the Semantic Web is to broaden the existing human-readable web by encoding some of the semantics of resources in a machine-understandable form. Various formats and technologies help make this possible, comprising the Resource Description Framework (RDF), an assortment of data interchange formats like RDF/XML, N3 and N-Triples, and representations such as RDF Schema (RDFS) and the Web Ontology Language (OWL), all of which help provide a proper description of the concepts, terms and associations in a particular knowledge domain. Presently, there are existing frameworks for semantic web technologies, but they have limitations for large RDF graphs; thus storing and efficiently querying a large number of RDF triples is a challenging and important problem. We propose a framework constructed using Hadoop to store and retrieve massive numbers of RDF triples by taking advantage of the cloud computing paradigm. Hadoop permits the development of reliable, scalable, proficient, cost-effective distributed computing using very simple Java interfaces, and comprises a distributed file system, HDFS, to store RDF data. The Hadoop MapReduce framework is used to answer the queries: a MapReduce job divides the input data set into independent units which are processed in parallel by the map tasks, whose outputs then serve as inputs to the reduce tasks, while the framework takes care of scheduling tasks, supervising them and re-executing failed tasks. The uniqueness of our approach is its efficient, automatic distribution of data and work across machines, which in turn exploits the underlying parallelism of the CPU cores. Results confirm that our proposed framework offers multi-fold efficiencies and benefits, including on-demand processing, operational scalability, competence, cost efficiency and local access to enormous data, in contrast to various traditional approaches.

Mehreen Ali, K. Sriram Bharat, C. Ranichandra
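
The map and reduce stages for answering a single triple pattern can be pictured as follows (a pure-Python sketch of the MapReduce contract, not actual Hadoop code; the tiny triple set and pattern are illustrative):

from collections import defaultdict

def map_phase(triples, pattern):
    # Emit (subject, (predicate, object)) for every triple matching the pattern;
    # None in the pattern acts as a wildcard.
    s, p, o = pattern
    for subj, pred, obj in triples:
        if s in (None, subj) and p in (None, pred) and o in (None, obj):
            yield subj, (pred, obj)

def reduce_phase(pairs):
    groups = defaultdict(list)
    for key, value in pairs:        # Hadoop groups by key before the reducer runs
        groups[key].append(value)
    return dict(groups)

triples = [("ex:alice", "rdf:type", "ex:Person"),
           ("ex:bob", "ex:knows", "ex:alice")]
print(reduce_phase(map_phase(triples, (None, "rdf:type", "ex:Person"))))
# {'ex:alice': [('rdf:type', 'ex:Person')]}
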
An Ant Colony Optimization Based Load Sharing Technique for Meta Task Scheduling in Grid Computing

Grid computing is a fast-growing field which shares the resources of an organization in an effective manner. Resource sharing requires an optimized algorithmic structure; otherwise, waiting time and response time increase and resource utilization is reduced. In order to avoid such reductions in the performance of the grid system, an optimal resource sharing algorithm is required. The traditional min-min algorithm is a simple algorithm that produces a schedule minimizing the makespan compared with the other traditional algorithms in the literature, but it fails to produce a load-balanced schedule. In recent years, ACO has played a vital role in discrete optimization, solving many engineering problems and providing optimal results for problems including the Travelling Salesman Problem, network routing and scheduling. This paper proposes Load Shared Ant Colony Optimization (LSACO), which shares the load among the available resources. The proposed method considers memory requirement as a QoS parameter. Through load sharing, LSACO reduces the overall response time and waiting time of the tasks.

T. Kokilavani, D. I. George Amalarethinam
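
For reference, the traditional min-min baseline that LSACO improves upon can be sketched as follows (an expected-time-to-compute matrix etc[task][machine] is the assumed input; names are illustrative):

def min_min(etc):
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines                 # earliest free time per machine
    unscheduled, schedule = set(range(n_tasks)), {}
    while unscheduled:
        # For each task, its minimum completion time over all machines ...
        ct, t, m = min((ready[m] + etc[t][m], t, m)
                       for t in unscheduled for m in range(n_machines))
        schedule[t] = m                        # ... then take the overall minimum.
        ready[m] = ct
        unscheduled.remove(t)
    return schedule, max(ready)                # assignment and resulting makespan
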
Optimum Sub-station Positioning Using Hierarchical Clustering

The selection of the optimum location of a sub-station, and the distribution of load points to each available sub-station, has been a major concern among researchers, but previous work has relied either on man-machine interfaces or on approximations. In this paper, a soft computing approach, hierarchical clustering, is used for grouping the various load points according to the number of distribution sub-stations available. The method further gives an optimum location for each distribution sub-station, taking into account the distances of the various load points that it feeds. The discussed technique leads to a configuration of distribution sub-stations depending on the number of load points and sub-stations required, and lowers long-range distribution expenses, since it leads to optimum feeder paths. The application of the proposed methodology to a case study is presented.

Shabbiruddin, Sandeep Chakravorty, Amitava Ray
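
A sketch of the grouping step using SciPy's agglomerative clustering (load points as (x, y) coordinates; the random data, linkage method and substation count are illustrative, not the paper's case study):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

loads = np.random.rand(40, 2) * 10                    # illustrative load-point coordinates
tree = linkage(loads, method="ward")                  # hierarchical clustering
labels = fcluster(tree, t=3, criterion="maxclust")    # 3 available substations
for c in np.unique(labels):
    centre = loads[labels == c].mean(axis=0)          # candidate substation location
    print(c, centre)
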
Comparison of PCA, LDA and Gabor Features for Face Recognition Using Fuzzy Neural Network

A face recognition system identifies or verifies face images from a stored database of faces when a still image or a video is given as input. The recognition accuracy depends on the features used to represent the face images. In this paper, a comparison of three popular features used in the literature to represent face images (PCA, LDA and Gabor features) is given. The classifier used is a fuzzy neural network. The comparison was performed using the AT&T, Yale and Indian databases. From the experimental results, the LDA features provide better recognition rates for face images with fewer pose variations; where more pose variations are involved, the Gabor features perform better than the LDA features. For recognition tasks where both the recognition of trained individuals and the rejection of untrained individuals are considered, the LDA features provide better results in terms of very low False Acceptance Rates and False Rejection Rates.

Dhanya S. Pankaj, M. Wilscy
Investigations on Object Constraints in Unified Software Development Process

Object constraints can be described as expressions that add important information to object-oriented models. The Object Management Group established a worldwide standard for object-oriented analysis and design artifacts, specifically the diagrams: the specification, known as the Unified Modeling Language, comprises the model diagrams and their associated semantics. While the Unified Modeling Language is meant for modeling, the Object Constraint Language is the standard for specifying expressions, which add essential, crucial and critical information to object-oriented models and other object modeling work products. In the Unified Software Development Process, the Analysis & Design discipline produces the complete architecture and design of the system and is followed by the Implementation discipline, whose activities are mainly carried out in the construction phase. In the unified software development life cycle, the expressions in the design model are forward-engineered to generate the source code, which depends greatly on the platform and technology selected for the development. We have investigated how such expressions are developed and incorporated into the models of an electronic business solution: the Object Constraint Language is used to form expressions attached to the object-oriented models and artifacts, and the models are then forward-engineered to generate code. In a nutshell, we produce code from abstract models or diagrams by making use of the Object Constraint Language, that is, we achieve round-trip engineering between model and code. In this paper we have generated expressions from the object constraint perspective for a reward point system as applied to a customer in e-business, and have also developed the context definitions, initial values and derivation rules, query operations, attributes and operations in this regard.

Meena Sharma, Rajeev G. Vishwakarma
Vehicle Safety Device (Airbag) Specific Classification of Road Traffic Accident Patterns through Data Mining Techniques

Rich and developing countries alike suffer from the consequences of increases in both human and vehicle populations. Road accident fatality rates depend upon many factors which vary between countries, and investigating the dependencies between the attributes is a very challenging task, made complex by the many environmental and road-related factors. In this research work we applied the data mining classification technique RndTree, and RndTree with the ensemble methods Bagging, AdaBoost and Multi Cost Sensitive Bagging (MCSB), to carry out vehicle-safety-device based classification, of which RndTree with AdaBoost gives the most accurate results. The training dataset used for the research work is obtained from the Fatality Analysis Reporting System (FARS), provided by the University of Alabama's Critical Analysis Reporting Environment (CARE) system. The results reveal that AdaBoost improved the RndTree classifier's accuracy.

S. Shanthi, R. Geetha Ramani
Design of Video Conferencing Solution in Grid

In a grid environment, resources and users from different administrative domains are integrated and coordinated with each other. The grid uses open-standard, general-purpose protocols and interfaces to provide nontrivial qualities of service. Videoconferencing is communication between two or more people at geographically different locations by simultaneous two-way video and audio transmission. This is a type of 'visual collaboration' with audio-video communication; it differs from telephone calls in its use of a video display. Audio and video communication is achieved in the form of conferencing over the Internet. This paper proposes the design of a Video Conferencing Solution (VCS), which enables users to explore the power of collaborative conferencing in a grid environment. This solution is capable of setting up live meetings between a host and a number of participants.

Soumilla Sen, Ajanta De Sarkar
Survey of Computer-Aided Diagnosis of Thyroid Nodules in Medical Ultrasound Images

In medical science, diagnostic imaging is an invaluable tool because of the restricted observation of the specialist and the uncertainties in medical knowledge. A thyroid ultrasound is a non-invasive imaging study used to understand the anatomy of the thyroid gland in ways not possible with other techniques. Various classifiers are used to characterize thyroid nodules as benign or malignant based on extracted features, in order to make a correct diagnosis. Current classification approaches are reviewed, together with their classification accuracy, for thyroid ultrasound image applications. The aim of this paper is to review existing approaches for the diagnosis of nodules in thyroid ultrasound images.

Deepika Koundal, Savita Gupta, Sukhwinder Singh
A Brief Review of Data Mining Application Involving Protein Sequence Classification

Data mining techniques have been used by researchers for analyzing protein sequences. In protein analysis, and especially in protein sequence classification, feature selection is most important. Popular protein sequence classification techniques involve the extraction of specific features from the sequences, to which researchers apply well-known classification techniques like neural networks, genetic algorithms, Fuzzy ARTMAP, rough set classifiers, etc. for accurate classification. This paper presents a review of three different classification models: the neural network model, the fuzzy ARTMAP model and the rough set classifier model. A new technique for classifying protein sequences is proposed at the end; it tries to reduce the computational overheads encountered by earlier approaches and increase the accuracy of classification.

Suprativ Saha, Rituparna Chaki
Hybrid Algorithm for Job Scheduling: Combining the Benefits of ACO and Cuckoo Search

The job scheduling problem is a combinatorial optimization problem in computer science in which ideal jobs are assigned to resources at particular times. Our approach is based on heuristic principles and has the advantages of both ACO and Cuckoo Search. In this paper, we present a hybrid algorithm based on ant colony optimization (ACO) and Cuckoo Search which efficiently solves the job scheduling problem and reduces the total execution time. In ACO, pheromones are chemical substances deposited by real ants while they walk; when solving optimization problems, the artificial pheromone acts as if it lures the artificial ants. To perform the local search, we use Cuckoo Search, which has essentially only a single parameter apart from the population size and is also very easy to implement.

R. G. Babukarthik, R. Raju, P. Dhavachelvan
Hybrid Ant Colony Optimization and Cuckoo Search Algorithm for Job Scheduling

Job scheduling is a type of combinatorial optimization problem. In this paper, we propose a hybrid algorithm which combines the merits of ACO and Cuckoo Search. In ACO, the ant walks along the path where the chemical substance called pheromone is deposited, which acts as if it lures the artificial ants. Cuckoo Search can perform the local search more efficiently, and it has only a single parameter apart from the population size. The hybrid minimizes the makespan, and the resulting schedules can be used in scientific and high-performance computing.

R. Raju, R. G. Babukarthik, P. Dhavachelvan
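
The two ingredients named in these abstracts can be sketched independently: an ACO-style pheromone-biased choice and a cuckoo-style Lévy-flight perturbation for local search (a toy sketch; all parameters are illustrative, not the papers' tuning):

import math, random

def pick_machine(pheromone, heuristic, alpha=1.0, beta=2.0):
    # ACO-style roulette selection biased by pheromone and heuristic desirability.
    weights = [(p ** alpha) * (h ** beta) for p, h in zip(pheromone, heuristic)]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for m, w in enumerate(weights):
        acc += w
        if r <= acc:
            return m
    return len(weights) - 1  # guard against floating-point round-off

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step (cuckoo local search).
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)
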
Particle Based Fluid Animation Using CUDA

Particle-based animation employing a physical model is a highly compute-intensive technique for the realistic animation of fluids. Since its inception it has been used for the offline production of high-quality fluid special effects in movies; however, due to its intense computational cost, it could not be adapted for real-time animation. This paper focuses primarily on the formulation of parallel algorithms for particle-based fluid animation using the Smoothed Particle Hydrodynamics (SPH) approach, employing a CUDA-enabled GPU to make it near real-time. The SPH technique is highly suitable for the SIMT architecture of CUDA-enabled GPUs, promising better speedup than CPU-based approaches. The most important hurdle in parallelization using CUDA is that existing parallel algorithms do not map efficiently to it. In this paper we employ a parallel-sorting based particle grid construction approach to reduce the computational cost of SPH density and force computation from O(N²) to O(N).

Uday A. Nuli, P. J. Kulkarni
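
The sorting-based grid construction that replaces the naive O(N²) neighbour search can be sketched in numpy for the 2-D case (cell size equal to the SPH smoothing radius is assumed; a GPU version would do the same with a parallel radix sort):

import numpy as np

def build_grid(pos, h, grid_dim):
    cell = (pos // h).astype(int)                       # cell coords per particle
    cell_id = cell[:, 0] + grid_dim * cell[:, 1]        # flatten 2-D cell index
    order = np.argsort(cell_id)                         # sort particles by cell
    sorted_ids = cell_id[order]
    # start[c] .. end[c] delimit the particles of cell c in sorted order,
    # so each particle only inspects its own and adjacent cells.
    start = np.searchsorted(sorted_ids, np.arange(grid_dim * grid_dim), "left")
    end = np.searchsorted(sorted_ids, np.arange(grid_dim * grid_dim), "right")
    return order, start, end
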
A Novel Model for Encryption of Telugu Text Using Visual Cryptography Scheme

This article presents a novel methodology for a visual cryptographic system based on Telugu text. Visual cryptography concerns the methodology of hiding a secret message in n shares such that each individual receives one share and only the authorized person can identify the secret message after superimposing one share upon the other. Many techniques have been proposed in the literature; in Rotation Visual Cryptography, the hidden message is revealed by rotating the shares at different angles or in different directions. The proposed sliding scheme reveals the information by horizontal sliding of a share in the downward direction by an angle of 180°.
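
For readers unfamiliar with share construction, the toy sketch below shows a simplified XOR-based 2-out-of-2 scheme on a binary image: each share alone is random, and stacking both reveals the secret. This is a generic illustration, not the Telugu sliding scheme of the paper.

```python
import random

secret = [[1, 0], [0, 1]]        # toy 2x2 binary secret, 1 = black

share1, share2 = [], []
for row in secret:
    r1, r2 = [], []
    for px in row:
        a = random.randint(0, 1)            # random bit for share 1
        r1.append(a)
        r2.append(a if px == 0 else 1 - a)  # equal bits -> white pixel
    share1.append(r1)
    share2.append(r2)

# XOR-stacking: black exactly where the two shares disagree.
recon = [[b1 ^ b2 for b1, b2 in zip(rw1, rw2)]
         for rw1, rw2 in zip(share1, share2)]
print(recon == secret)            # True
```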

G. Lakshmeeswari, D. Rajya Lakshmi, Y. Srinivas, G. Hima Bindu
Frequent Queries Selection for View Materialization

A data warehouse stores historical data for answering analytical queries. These analytical queries are long, complex and exploratory in nature and, when processed against a large data warehouse, consume a lot of processing time, so the query response time is high. This time can be reduced by materializing views over the data warehouse. These views aim to improve the query response time; for this, they are required to contain relevant information for answering future queries. In this paper, an approach is presented that identifies such relevant information from previously posed queries on the data warehouse. The approach first identifies subject specific queries and then, from amongst these, selects the frequent queries. These selected frequent queries contain information that has been accessed frequently in the past and therefore has a high likelihood of being accessed by future queries. This results in an improvement in query response time and thereby in more efficient decision making.
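
The selection step can be pictured with a toy log: filter the log to the subject of interest, then keep the queries whose frequency clears a threshold as materialization candidates. The log contents and threshold below are invented, and the sketch omits the paper's actual identification machinery.

```python
from collections import Counter

query_log = [
    ("sales", "SELECT region, SUM(amount) FROM sales GROUP BY region"),
    ("sales", "SELECT region, SUM(amount) FROM sales GROUP BY region"),
    ("hr",    "SELECT dept, COUNT(*) FROM staff GROUP BY dept"),
    ("sales", "SELECT year, SUM(amount) FROM sales GROUP BY year"),
]

subject = "sales"                 # hypothetical subject of interest
freq = Counter(q for subj, q in query_log if subj == subject)
candidates = [q for q, n in freq.most_common() if n >= 2]
print(candidates)                 # frequent subject-specific queries
```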

T. V. Vijay Kumar, Gaurav Dubey, Archana Singh
HURI – A Novel Algorithm for Mining High Utility Rare Itemsets

In the data mining field, the primary task is to mine frequent itemsets from a transaction database using Association Rule Mining (ARM). Utility mining aims to identify itemsets with high utilities by considering profit, quantity, cost or other user preferences. In market basket analysis, high consideration should be given to the utility of an item in a transaction, since items having low selling frequencies may have high profits. As a result, high utility itemset mining has emerged as a revolutionary field in data mining. Rare itemsets provide useful information in different decision-making domains. The High Utility Rare Itemset mining algorithm HURI, proposed in [12], generates high utility rare itemsets of users' interest. HURI is a two-phase algorithm: phase 1 generates rare itemsets and phase 2 generates high utility rare itemsets, according to users' interest. In this paper, a performance evaluation and complexity analysis of the HURI algorithm based on different parameters are discussed, which indicates the efficiency of HURI.

Jyothi Pillai, O. P. Vyas, Maybin Muyeba
SMOTE Based Protein Fold Prediction Classification

Protein contact maps are two dimensional representations of protein structures. It is well known that specific patterns occurring within contact maps correspond to configurations of protein secondary structures. This paper addresses the problem of protein fold prediction, which is a multi-class problem having unbalanced classes. A simple and computationally inexpensive algorithm called the Eight-Neighbour algorithm is proposed to extract novel features from the contact map. It is found that the Support Vector Machine (SVM), which can be effectively extended from a binary to a multi-class classifier, does not perform well on this problem. Hence, in order to boost the performance, the over-sampling algorithm SMOTE is applied to rebalance the data set and then a decision tree classifier is used to classify "folds" from the features of the contact map. The classification is performed across the four major protein structural classes as well as among the different folds within the classes. The results obtained are promising, validating the simple methodology of rebalancing to obtain improved performance on the fold classification problem using features derived from the contact map alone.
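
A hedged sketch of the rebalance-then-classify pipeline the abstract describes, using the imbalanced-learn and scikit-learn libraries; the random features stand in for the Eight-Neighbour contact-map features, and the class sizes are invented.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((60, 8))                      # stand-in fold features
y = np.array([0] * 40 + [1] * 14 + [2] * 6)  # unbalanced fold labels

# SMOTE synthesizes minority-class samples along nearest-neighbor lines.
X_res, y_res = SMOTE(k_neighbors=3, random_state=0).fit_resample(X, y)
clf = DecisionTreeClassifier(random_state=0).fit(X_res, y_res)
print(np.bincount(y_res))                    # classes are now balanced
```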

K. Suvarna Vani, S. Durga Bhavani
Conceptual Application of List Theory to Data Structures

Following the approach of defining a set through its characteristic function and a multiset (bag) through its count function, Tripathy, Ghosh and Jena ([3]) introduced the concept of a position function to define lists. The new definition has much more rigor than the earlier one used in computer science in general and functional programming ([2]) in particular. Several of the concepts in the form of operations, operators and properties have been established in a sequence of papers by Tripathy and his coauthors ([3, 6, 7, 8]). Also, the concepts of fuzzy lists ([4]) and intuitionistic fuzzy lists ([5]) have been defined and studied by them. Recently an application to develop list theoretic relational databases and operations on them has been put forth by Tripathy and Gantayat ([9]). In the present article we provide another application of this approach in defining data structures like the stack, queue and array. One of the major advantages of this approach is the ease of extending all the concepts for basic lists to the context of fuzzy lists and intuitionistic fuzzy lists. We also illustrate this approach in the present paper.

B. K. Tripathy, S. S. Gantayat
A Comprehensive Study on Multifactor Authentication Schemes

In recent years, developments in information technology have unleashed new challenges and opportunities for new authentication systems and protocols. Authentication ensures that a user is who they claim to be, and the trust of authenticity increases exponentially when more factors are involved in the verification process. When a security infrastructure makes use of two or more distinct and different categories of authentication mechanisms to increase the protection for valid authentication, it is referred to as strong authentication or multifactor authentication. Multifactor authentication uses combinations of "something you know," "something you have," "something you are" and "somewhere you are"/"someone you know" to provide stronger remote authentication than traditional, unreliable single-factor username and password authentication. In this paper we survey the different aspects of multifactor authentication: its need, its techniques and its impact.

Kumar Abhishek, Sahana Roshan, Prabhat Kumar, Rajeev Ranjan
Quality Assessment Based Fingerprint Segmentation

Lack of robust segmentation for degraded quality images is one of the open issues in fingerprint segmentation. Good fingerprint segmentation effectively reduces the processing time in automatic fingerprint recognition systems, while poor segmentation results in spurious and missing features, degrading the performance of the overall system. Segmentation is more effective if done in accordance with the quality of the image: high quality fingerprint images offer a wider range of features usable for segmentation than low quality images, where the fingerprint features are not clearly visible. This paper focuses on a two-fold segmentation process comprising quality evaluation and segmentation based on it. Various global and local features are used for assessing image quality and thereby for segmenting the ridge area from the plain background. The segmented images are compared using the percentage of foreground area to total area and the number of genuine minutiae points extracted from the segmented area. The time taken for image segmentation is also used as a performance parameter. The proposed approach has been tested with images of different qualities from the NIST and FVC data sets and the results prove to be better than conventional segmentation approaches.

Kumud Arora, Poonam Garg
Improved Algorithms for Anonymization of Set-Valued Data

Data anonymization techniques enable publication of detailed information while protecting the privacy of sensitive information in the data against a variety of attacks. Anonymized data describes a set of possible worlds that include the original data. Generalization and suppression have been the most commonly used techniques for achieving anonymization. Some algorithms to protect privacy in the publication of set-valued data were developed by Terrovitis et al. [16]. The concept of k-anonymity was introduced by Samarati and Sweeney [15], so that every tuple has at least (k-1) tuples identical to it. This concept was modified in [16] to introduce k^m-anonymity, in order to limit the effects of data dimensionality; this approach depends upon generalization instead of suppression. To handle this problem, two heuristic algorithms, namely the DA-algorithm and the AA-algorithm, were developed by them. These algorithms provide near-optimal solutions in many cases. In this paper, we improve DA such that undesirable duplicates are not generated, and using FP-growth we display the anonymized data. We illustrate the efficiency of our proposed algorithm through suitable examples.
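
The k-anonymity property cited above is easy to state in code. The sketch below merely checks the definition over fabricated, already-generalized records with pandas; it does not implement the DA or AA algorithms.

```python
import pandas as pd

df = pd.DataFrame({
    "age": ["2*", "2*", "2*", "3*"],              # generalized quasi-identifiers
    "zip": ["560**", "560**", "560**", "561**"],
    "item": ["a", "b", "a", "c"],                  # sensitive values
})

def is_k_anonymous(frame, quasi_ids, k):
    """True iff every quasi-identifier combination occurs at least k times."""
    return frame.groupby(quasi_ids).size().min() >= k

print(is_k_anonymous(df, ["age", "zip"], k=2))  # False: one group of size 1
```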

B. K. Tripathy, A. Jayaram Reddy, G. V. Manusha, G. S. Mohisin
An Efficient Flash Crowd Attack Detection to Internet Threat Monitors (ITM) Using Honeypots

Nowadays there is a rapid increase of traffic to a given web server within a short time as the number of Internet users increases; such a phenomenon is called a flash crowd. Once a flash crowd occurs, the response rate decreases or the web server may crash as the load increases. In this paper we consider Internet Threat Monitoring (ITM), a globally scoped Internet monitoring system whose goal is to measure, detect, characterize, and track threats such as distributed denial of service (DDoS) attacks and worms. To blind such monitoring, attackers target the ITM system itself. We address a flash crowd attack against the ITM system in which the attacker attempts to exhaust the network's and the ITM's resources, such as network bandwidth, computing power, or operating system data structures, by sending malicious traffic. We propose an information-theoretic framework that models flash crowd attacks using a botnet on ITM. Based on this model we generalize flash crowd attacks and propose an effective attack detection using honeypots.
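
One common information-theoretic signal, sketched here with synthetic traffic: the Shannon entropy of the source-address distribution drops when a botnet floods the monitor from relatively few hosts. The addresses and the alert threshold are invented; the paper's actual model is richer.

```python
import math
from collections import Counter

def entropy(addresses):
    """Shannon entropy (bits) of the empirical address distribution."""
    counts = Counter(addresses)
    total = len(addresses)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

normal = [f"10.0.{i % 50}.{i % 200}" for i in range(1000)]  # many sources
attack = [f"10.0.0.{i % 5}" for i in range(1000)]           # few bot hosts

THRESHOLD = 3.0  # hypothetical cutoff tuned on normal windows
for name, window in [("normal", normal), ("attack", attack)]:
    h = entropy(window)
    print(name, round(h, 2), "ALERT" if h < THRESHOLD else "ok")
```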

K. Munivara Prasad, M. Ganesh Karthik, E. S. Phalguna Krishna
Hierarchical Directed Acyclic Graph (HDAG) Based Preprocessing Technique for Session Construction

Web access log analysis examines the patterns of web site usage and the features of user behavior. Preprocessing of the log data is essential for efficient web usage mining, as raw log data is very noisy, and session construction is a vital step in the preprocessing phase. Recently, various real world problems have been modeled as traversals on graphs, and mining these traversals provides effective results; however, existing works only consider traversals on unweighted graphs. This paper generalizes this to the case where the vertices of the graph are given weights to reflect their significance. Patterns are closed frequent directed acyclic graphs with page browsing time. The proposed method constructs sessions using an efficient directed acyclic graph approach which contains pages with calculated weights. The Hierarchical Directed Acyclic Graph (HDAG) kernel approach is used for session construction: the HDAG directly accepts several levels of both chunks and their relations, and then efficiently computes the weighted sum of the number of common attribute sequences of the HDAGs. This will help site administrators to find the pages interesting to users and to redesign their web pages. After weighting each page according to browsing time, a DAG structure is constructed for each user session.

S. Chitra, B. Kalpana
Extending Application of Non-verbal Communication to Effective Requirement Elicitation

Requirements elicitation is the first stage in the process of developing a software product; its purpose is to build an understanding of the problem. It is fundamentally a communication process between a requirements elicitor and various stakeholders. Interviewing stakeholders during the elicitation process is a communication intensive activity involving various techniques and tactics, requiring communication skills apart from experience and knowledge. An interview involves verbal and non-verbal communication. Generally, during the interview process requirements are elicited via verbal communication. A lot of work has been done to improve the interview process and to observe and document verbal communication, but the area of non-verbal communication is still unexplored in the field of requirements elicitation. The interviewer should give emphasis to non-verbal communication along with verbal communication so that requirements can be elicited more efficiently and effectively. In this paper we emphasize the behavioral aspects of non-verbal communication, i.e. the use of facial expressions, eye contact, gestures, tone of voice, body posture, orientation and touch, and various cues and signals such as distance, amusement, sleepiness, pitch, sound, pacing, shaking and sweating, which are important in non-verbal communication during interviews for eliciting requirements. We discuss our findings on these behavioral aspects, cues and signals, which should be attended to in order to minimize problems encountered during requirements elicitation, so that requirements are elicited and recorded properly.

Md. Rizwan Beg, Md. Muqeem, Md. Faizan Farooqui
A Critical Review of Migrating Parallel Web Crawler

The size of the internet is very large and it has grown enormously; search engines are the tools for World Wide Web navigation. In order to provide powerful search facilities, search engines maintain comprehensive indices of documents and their contents on the Web by continuously downloading Web pages for processing, a process known as web crawling. In this paper we review various web crawlers and their performance attributes. We study mobile and parallel web crawling approaches that make web crawling systems more effective and efficient. The major advantage of the mobile approach is that the analysis portion of the crawling process is done locally, where the data resides, rather than remotely inside the Web search engine; this can significantly reduce network load and, in turn, improve the performance of the crawling process. The major advantage of parallel crawling is that, as the size of the Web grows, it becomes imperative to parallelize the crawling process in order to finish downloading pages in a reasonable amount of time. We identify fundamental issues related to migrating parallel crawling and also propose metrics to evaluate a migrating parallel crawler. Lastly, we summarize the web crawlers and the performance attributes that affect the process of web crawling.

Md. Faizan Farooqui, Md. Rizwan Beg, Md. Qasim Rafiq
Parallel Character Reconstruction Expending Compute Unified Device Architecture

Neural networks, or artificial neural networks to be more precise, represent a technology that is rooted in many disciplines: neuroscience, mathematics, statistics, physics, computer science and engineering. Neural networks find applications in such fields as modeling, time series analysis, pattern recognition, signal processing and control by virtue of an important property: the ability to learn from input data with or without a teacher. In a biological system, learning involves adjustments to the synaptic connections between neurons; the same holds for artificial neural networks (ANNs), which has made them applicable to practical applications. A neural network architecture has the ability to learn and then later classify its inputs. Our neural network for character recognition is based on a multilayered architecture with the back-propagation algorithm. First the network is trained on alphanumeric handwritten characters, and then the network is tested with trained or untrained handwritten characters. We achieved a considerable computational enhancement by using a modified back-propagation algorithm with an added momentum term, which lowers the training time and speeds up the system. The time is reduced further with a parallel implementation using CUDA.
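
The momentum term the abstract credits with faster training is a one-line change to gradient descent: the update keeps a decaying memory of past steps. A minimal sketch with an artificial gradient (a real network would backpropagate):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 3)) * 0.1   # one weight matrix of the network
velocity = np.zeros_like(W)
lr, mu = 0.1, 0.9                        # learning rate, momentum term

def grad_of_loss(W):
    return W   # gradient of the stand-in loss 0.5 * ||W||^2

for step in range(100):
    velocity = mu * velocity - lr * grad_of_loss(W)  # momentum update
    W += velocity
print(np.abs(W).max())   # weights have converged toward the minimum
```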

Anita Pal, Kamal Kumar Srivastava, Atul Kumar
A Novel Algorithm for Obstacle Aware RMST Construction during Routing in 3D ICs

Three dimensional integrated circuits offer an attractive alternative to 2D planar ICs by providing increased system integration, either by increasing functionality or by combining different technologies. The routing phase during layout design of 3D ICs plays a critical role, and the problem becomes worse in the presence of obstacles across the routing layers. Obstacle aware routing tree construction has recently become a challenging problem among researchers. In this work, an efficient algorithm is proposed for the construction of a rectilinear minimum Steiner tree (RMST) in the presence of obstacles across the routing layers using a shortest pair approach. Due to ever increasing design complexity, careful measures have been taken to reduce the time complexity of the proposed algorithm. The novelties of this work are as follows: (i) the proposed algorithm constructs an RMST in the presence of obstacles, (ii) its time complexity is very competitive with available tools, and (iii) it efficiently reduces the number of Steiner points during RMST construction in the presence of obstacles, in comparison to the standard solution available in the absence of obstacles. Experimental results are quite encouraging.

Prasun Ghosal, Satrajit Das, Arindam Das
Study of the EEG Signals of Human Brain for the Analysis of Emotions

In this research, the emotions and the patterns of EEG signals of the human brain are studied. The aim is to analyze the changes in brain signals across different emotions. The observations can be analyzed for their utility in the diagnosis of psychosomatic disorders like anxiety and depression in an economical way and with higher precision.

Ashish R. Panat, Anita S. Patil
A Dynamic Approach for Mining Generalised Sequential Patterns in Time Series Clinical Data Sets

Similarity search over streaming time series is gaining ever-increasing attention due to its importance in many applications such as financial data processing, network monitoring, Web click-stream analysis, sensor data mining, and anomaly detection. These applications require managing data streams, i.e., data composed of continuous, real-time sequences of items. We propose a technique for pattern matching between static patterns and streaming time series clinical data sets. The main objective is to ascertain hidden patterns between incoming time series clinical data sets and a set of predetermined clinical patterns. Considering the incoming image data at a particular timestamp, we construct a MultiScale Median model at multiple levels to adapt to the stream time series, which is characterized by frequent updates. Further, we employ a pruning algorithm, Segment Median Pruning, on clinical image data for pruning all candidate patterns. Experiments have been carried out on a retinal disease data set for Age Related Macular Degeneration (ARMD), and simulation results show that the system is efficient in processing image data sets for making efficient and accurate decisions.

M. Rasheeda Shameem, M. Razia Naseem, N. K. Subanivedhi, R. Sethukkarasi
Motion Correction in Physical Action of Human Body Applied to Yogasana

Here we propose a framework for human motion analysis and an efficient representation of motion, as well as of the human body, in the form of curves for motion correction. In our proposal the human body is represented by curves, for which curve control points are identified; these may or may not lie on the body curve. Body motion is represented by motion trajectories for each control point, and control point motion is also represented by curves. As the system targets error correction in Yogasana, the expert's body curves and control point motion trajectories are compared with the practitioner's motion, and corrections are suggested to the practitioner. The framework can be applied to any scenario where standard requirements must be met, such as sports, acting, physical exercise and traditional dances.

Geetanjali Kale, Varsha Patil
Improvement the Bag of Words Image Representation Using Spatial Information

The bag of visual words (BOW) model is an effective way to represent images in order to classify and detect their contents. However, this type of representation suffers from the fact that it does not contain any spatial information. In this paper we propose a novel image representation which adds two types of spatial information. The first type, the spatial locations of the words in the image, is added using the spatial pyramid matching approach. The second type is the spatial relation between words; to capture this information, a binary tree structure which models the is-a relationships in the vocabulary is constructed from the visual words. This approach is a simple and computationally effective way of modeling the spatial relations of the visual words and improves visual classification performance. We evaluated our method on visual classification of two well-known data sets, namely 15 natural scenes and Caltech-101.

Mohammad Mehdi Farhangi, Mohsen Soryani, Mahmood Fathy
Left Object Detection with Reduced False Positives in Real-Time

Identifying unattended objects in public places efficiently is one of the major thrust areas of security. This paper proposes a real-time method to identify unattended or left objects in a region of interest under surveillance. It is a simple pixel based method for object detection which can be used in both indoor and outdoor environments. The method is robust to changes in illumination in the case of high contrast foreground images, which is achieved through normalization. False positives are eliminated by a method following the ideas of the codebook approach. The proposed method is tested on a live set-up consisting of an IP camera for video capture, an analytics server whose built-in intelligence checks for the presence of any left object in the video, and a user interface through which the concerned authority is notified for timely action. The entire communication in this system follows the ONVIF (Open Network Video Interface Forum) standard. Apart from identifying unattended objects, it can also be used to keep designated areas clear of obstructions.

Aditya Piratla, Monotosh Das, Jayalakshmi Surendran
Adaptive Fusion Based Hybrid Denoising Method for Texture Images

This paper presents an efficient image denoising method that adaptively combines the features of wavelet and wave atom transforms. These transforms are applied separately to the smooth areas of the image and to the texture part of the image. The separation of the homogeneous and non-homogeneous regions of the noisy image is done by decomposing it into a noisy cartoon (smooth) image and a noisy texture image. Wavelets are good at denoising the smooth regions in an image and are used to denoise the noisy cartoon image; wave atoms better preserve texture and hence are used to denoise the noisy texture image. The two images are then fused adaptively. For adaptive fusion, different weights are chosen for different areas of the image: areas containing a higher degree of texture are allotted more weight, while smoother regions are weighted lightly. The information for weight selection is obtained from the variance map of the denoised texture image. Experimental results on standard test images provide better denoising results in terms of PSNR, SSIM, FOM and UQI. Texture is efficiently preserved and no unpleasant artifacts are observed.

Preety D. Swami, Alok Jain
An Efficient De-noising Technique for Fingerprint Image Using Wavelet Transformation

Fingerprints play a vital role in user authentication, being unique and hard to duplicate; for this reason fingerprint images are used for various computer security purposes. Unfortunately, reference fingerprints may get corrupted with noise during acquisition, transmission, or retrieval from storage media. Many image-processing algorithms, such as pattern recognition, need a clean fingerprint image to work effectively, which in turn requires effective ways of de-noising such images. In this paper, we propose an adaptive method of image de-noising in the wavelet sub-band domain, assuming the images to be contaminated with noise, based on threshold estimation for each sub-band. Under this framework, the proposed technique estimates the threshold level for each sub-band of each decomposition level. This paper entails the development of a new MATLAB function based on our algorithm. The experimental evaluation of our proposition reveals that our method removes noise more effectively than the built-in function provided by MATLAB.
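
A hedged Python sketch of per-sub-band threshold denoising, with the PyWavelets library standing in for the paper's MATLAB implementation; the universal threshold below is a common textbook choice, not the paper's adaptive estimate.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                       # toy stand-in image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

coeffs = pywt.wavedec2(noisy, "db4", level=2)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # noise from finest HH

threshold = sigma * np.sqrt(2 * np.log(noisy.size))  # universal threshold
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, threshold, "soft") for d in detail)
    for detail in coeffs[1:]                     # soft-threshold each band
]

denoised = pywt.waverec2(denoised_coeffs, "db4")
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```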

Ashish Kumar Dass, Rabindra Kumar Shial
Detection of Optic Disc by Line Filter Operator Approach in Retinal Images

The location of the optic disc (OD) is of critical importance in retinal image analysis. This paper presents a new automated methodology to detect the OD in retinal images; OD detection helps ophthalmologists determine whether a patient is affected by diabetic retinopathy. The proposed technique uses a line operator, which gives a higher percentage of detection than existing methods. The purpose is to automatically detect the position of the OD in digital retinal fundus images. The method starts by converting the RGB input image into its LAB components, and this image is smoothed using a bilateral smoothing filter. Further filtering is carried out using the line operator, after which gray orientation and binary map orientation are computed, and the area of the OD is located using the resulting maximum image variation. The portions other than the OD are blurred using 2D circular convolution. By applying steps such as peak classification, concentric circle design and image difference calculation, the OD is detected. The proposed method was evaluated using a subset of the STARE project's dataset and the success percentage was found to be 96%.

R. Murugan, Reeba Korah
An Extension of FFT Based Image Registration

Image registration is considered one of the most fundamental and crucial pre-processing tasks in image processing applications; it combines visual information from multiple images for comparison, integration or analysis. In this paper we present an extension of a fast Fourier transform based image registration scheme. We have tested the proposed scheme on a number of selected images and found that the results are much better compared to the normal FFT method. The time complexity of our proposed method is of the same order as the FFT based method [2].
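
Schemes in this family build on FFT phase correlation, sketched below: the normalized cross-power spectrum of two translated images has an inverse transform that peaks at the displacement. The images and shift are synthetic; the paper's extension is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 12), axis=(0, 1))   # known translation

F1, F2 = np.fft.fft2(img), np.fft.fft2(shifted)
cross = F1 * np.conj(F2)
corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real

dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(((-dy) % 64, (-dx) % 64))   # recovers the shift (5, 12)
```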

P. Thangavel, R. Kokila
Scene Text Extraction from Videos Using Hybrid Approach

With the fast growth of multimedia documents and the mounting demand for information indexing and retrieval, much effort has been devoted to extracting text from images and videos. The prime intention of the proposed system is to detect and extract scene text from video. Extracting scene text from video is demanding due to complex backgrounds, varying font sizes, different styles, lower resolution and blurring, position, viewing angle and so on. In this paper we put forward a hybrid method in which the two most popular text extraction techniques, the region based method and the connected component (CC) based method, come together. Initially the video is split into frames and key frames are obtained. A text region indicator (TRI) is developed to compute the text-presence confidence and candidate regions by performing binarization. An artificial neural network (ANN) is used as the classifier and optical character recognition (OCR) is used for character verification. Text is grouped by constructing a minimum spanning tree using the bounding box distance.

A. Thilagavathy, K. Aarthi, A. Chilambuchelvan
MSB Based New Hybrid Image Compression Technique for Wireless Transmission

Digital images require a large amount of memory to store and, when retrieved from the internet, can take a considerable amount of time to download. Our method enables us to compress an image in such a way that memory utilization is reduced. The proposed MSB based hybrid method has a good compression rate (more than sixty percent compression) and has linear time complexity, i.e. O(MN), where M and N denote the number of pixels in the horizontal and vertical directions. We have proved that the compressed image obtained after applying our algorithm has a PSNR value greater than or equal to 32 dB, which is suitable for wireless transmission.
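
For reference, the 32 dB figure uses the standard PSNR definition over the M × N pixels, PSNR = 10 log10(MAX² / MSE). A small self-contained computation on toy arrays:

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """10*log10(peak^2 / MSE); infinite when the images are identical."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
orig = rng.integers(0, 256, size=(8, 8))
approx = np.clip(orig + rng.integers(-4, 5, size=orig.shape), 0, 255)
print(round(psnr(orig, approx), 1), "dB")
```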

S. N. Muralikrishna, Meghana Ajith, K. B. Ajitha Shenoy
Multi-temporal Satellite Image Analysis Using Unsupervised Techniques

This paper presents flood assessment using non-parametric techniques for multi-temporal time series MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images. Unsupervised methods, namely the mean shift algorithm and median cut, are used for automatic extraction of water pixels from the image. The extracted results provide a comparative study of the unsupervised image segmentation methods. Performance evaluation indices, namely root mean square error and receiver operating characteristics, are used to study algorithm performance. The results reported in this paper provide useful information for multi-temporal time series image analysis that can be used in current and future research.

C. S. Arvind, Ashoka Vanjare, S. N. Omkar, J. Senthilnath, V. Mani, P. G. Diwakar
Separable Discrete Hartley Transform Based Invisible Watermarking for Color Image Authentication (SDHTIWCIA)

In this paper a novel two-dimensional Separable Discrete Hartley Transform based invisible watermarking scheme is proposed for color image authentication (SDHTIWCIA). The two dimensional SDHT is applied on each 2 × 2 sub-image block of the carrier image in row major order. Two bits are embedded in the second, third and fourth frequency components of each 2 × 2 mask in the transform domain based on a secret key, with the second and third bit positions of each frequency coefficient chosen as embedding positions. A delicate re-adjustment is incorporated in the first frequency component of each mask to keep the quantum value positive in the spatial domain without hampering the embedded bits. The inverse SDHT (ISDHT) is applied on each 2 × 2 mask as a post-embedding operation to produce the watermarked image. At the receiving end the reverse operation is performed to extract the stream, which is compared to the original stream for authentication. Experimental results confirm that the proposed algorithm performs better than the Discrete Cosine Transform (DCT), Quaternion Fourier Transform (QFT) and Spatio Chromatic DFT (SCDFT) based techniques.

J. K. Mandal, S. K. Ghosal
Normalised Euclidean Distance Based Image Retrieval Using Coefficient Analysis

This article presents a novel method based on normalized Euclidean distance using the discrete wavelet transform and bin intensity measurement, coupled to a parameterized framework for content-based image retrieval. The discrete wavelet transform captures both frequency and location information and makes image retrieval efficient. It further facilitates incorporating recent research on feature based coefficient distributions. We demonstrate the applicability of the proposed method in the context of color texture retrieval on different image databases and compare retrieval performance to a collection of state-of-the-art approaches in the area. Our experimental results on a large database further include a thorough analysis of the computations of the main building blocks and runtime measurements.

Nilofar Khan, Wasim Khan
SoC Modeling for Video Coding with Superscalar Projection

This paper presents a concept for better quality of service in scalable video streaming services with improved display scales. The available resources are administered over multiple connections with feedback to support video streaming applications and to improve the visualization of image samples. The scaling factor is achieved by increasing the level of visualization of the display unit. In this paper, a modular approach to SoC design and an implementation of a scalable video coding algorithm for a digital video source are proposed as a low overhead SoC design for the scaling operation in a resource-constrained environment.

S. K. Fairooz, B. K. Madhavi
Invisible Image Watermarking Using Z Transforms (IIWZT)

This paper presents an invisible image watermarking technique in the frequency domain through the Z transform, with a hiding capacity of 1.5 bpB (1.5 bits per byte). Z(re^{jω}) is a complex variable that comes from the Laplace transform and has two parameters: r denotes the radius of the region of convergence and ω denotes the angle. In this technique a (2 × 2) sub-matrix is taken from the source image and converted into a one-dimensional (1 × 4) array, which undergoes the Z-transform with a set of angular frequencies (ω). Two bits of the secret message are embedded, including in the complex conjugate pair, where multiple embedding is done. The first coefficient is used for tuning purposes and not for embedding information. This process is repeated until the secret image is exhausted. The inverse Z transform is applied at the end to convert the image from the frequency domain back to the spatial domain. Experimental results show good PSNR and image fidelity, which analytically suggests that the proposed scheme obtains better secrecy with improved fidelity.

J. K. Mandal, Rakesh Kanji
Region Identification of Infected Rice Images Using the Concept of Fermi Energy

Automated disease detection using the features of infected regions of a diseased plant image is a growing field of research in precision agriculture. Usually, infected regions are identified by applying different threshold based segmentation techniques. However, due to various factors like non-uniform illumination or noise, these techniques fail to provide sufficient information for classifying diseases accurately. In this paper, a novel region identification method based on Fermi energy is proposed to detect the infected portions of diseased rice images. From the infected region, neighboring gray level dependence matrix (NGLDM) based texture features are extracted to classify different diseases of rice plants. The performance of the proposed method has been evaluated by comparing its classification accuracy with other segmentation algorithms, demonstrating superior results.

Santanu Phadikar, Jaya Sil, Asit Kumar Das
Unsymmetrical Trimmed Midpoint as Detector for Salt and Pepper Noise Removal

An unsymmetrical trimmed midpoint over a fixed 3x3 window is proposed as a detector for fixed valued impulse noise at increasing noise densities. A processed pixel is termed noisy if the absolute difference between the processed pixel and the unsymmetrical trimmed midpoint is greater than a fixed threshold. Under high noise densities, when the processed pixel is noisy, the median of the ordered array is found and checked using the same procedure: if the computed median is also found to be noisy, the corrupted pixel is replaced by the unsymmetrical trimmed midpoint of the current processing window; if the median is not noisy, the pixel is replaced by the median of the current processing window; otherwise, if the pixel is deemed uncorrupted, it is left unaltered. The proposed algorithm (PA) is tested on images of varying detail and compared with standard algorithms, and it is found to give good results both qualitatively and quantitatively for increasing noise densities. The proposed algorithm eliminates salt and pepper noise up to 80% noise density and preserves edges up to 70%.
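
The detection step alone is compact enough to sketch; the window contents, trim depth and threshold below are assumed values, and the full replacement cascade described above is left out.

```python
import numpy as np

def trimmed_midpoint(window, trim=1):
    """Midpoint of the sorted 3x3 window after trimming both extremes."""
    s = np.sort(window.ravel())[trim:-trim]
    return (float(s[0]) + float(s[-1])) / 2.0

window = np.array([[12, 14, 13],
                   [11, 255, 12],   # 255: a salt impulse at the center
                   [13, 12, 14]], dtype=np.uint8)

mid = trimmed_midpoint(window)
THRESHOLD = 30                       # assumed fixed threshold
is_noisy = abs(float(window[1, 1]) - mid) > THRESHOLD
print(mid, is_noisy)                 # center pixel is flagged as noisy
```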

K. Vasanth, V. Jawahar Senthil Kumar, V. Elanangai
Identification of Handwritten Text in Machine Printed Document Images

In our daily lives we come across many documents where printed and handwritten text co-exist and sometimes intermingle. As the OCR techniques for processing the two are quite different, it is necessary to classify and distinguish them first. In this paper, a scheme is proposed by which handwritten, printed and "mixed" text regions in the same document image can be identified and demarcated from each other for Bangla, the second most popular Indian script. The proposed scheme is based on the structural and statistical idiosyncrasies of printed and handwritten Bangla text.

Sandipan Banerjee
Optimization of Integration Weights for a Multibiometric System with Score Level Fusion

The effectiveness of a multibiometric system can be improved by weighting the scores obtained from degraded modalities in an appropriate manner. In this paper, we propose an integration weight optimization scheme to determine the optimal weight factors for the complementary modalities under different noise conditions. Instead of treating the weight estimation process from an algebraic point of view, an attempt is made to approach it through linear programming techniques. The performance of the proposed technique is analysed in the context of fingerprint and voice biometrics using the sum rule of fusion. The weight factor is optimized against the recognition accuracy, and the optimizing parameter is estimated in the training/validation phase using the Leave-One-Out Cross Validation (LOOCV) technique. The proposed biometric solution can be easily integrated into any multibiometric system with score level fusion. Moreover, it is extremely useful in applications where few training samples are available.

S. M. Anzar, P. S. Sathidevi
A Two Stage Combinational Approach for Image Encryption

Information security is becoming more important in the field of information processing and management. Due to the fast development of communication, a wide variety of information is transmitted over multiple media like telephone, mobile, television, satellite and optical communication. Since images also carry very important information, there is a need for security when transmitting images in applications such as satellite image transmission, military applications, medical applications and teleconferencing. In this paper, we describe a method for image encryption which has two stages. In the first stage, each pixel of an image is converted to its equivalent eight bit binary number, and in that eight bit number, a number of bits equal to the length of the password are rotated and then reversed. In the second stage, a carrier image generated from the same password is added to the resultant image of the first stage, which results in the final encrypted image.
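
A per-pixel sketch of the two stages as described, under the assumption that "rotated and then reversed" means a left bit-rotation by the password length followed by a bit-string reversal; the pixel, carrier value and password are toy inputs.

```python
def stage1(pixel, password):
    """Rotate the 8-bit pixel left by len(password) bits, then reverse."""
    k = len(password) % 8
    bits = f"{pixel:08b}"
    rotated = bits[k:] + bits[:k]
    return int(rotated[::-1], 2)

def encrypt_pixel(pixel, carrier_pixel, password):
    # Stage 2: add the password-derived carrier pixel, modulo 256.
    return (stage1(pixel, password) + carrier_pixel) % 256

print(encrypt_pixel(200, 57, "key"))   # toy pixel, carrier value, password
```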

H. S. Sharath Kumar, H. T. Panduranga, S. K. Naveen Kumar
Wavelet SIFT Feature Descriptors for Robust Face Recognition

This paper presents a new robust face recognition technique based on the extraction and matching of wavelet-SIFT features from individual face images. Here, the biorthogonal wavelet 4.4 is employed as the basis for the discrete wavelet transform of the images. Then, the SIFT face recognition method is applied on the LL and HH sub-band combination of the images for recognition. The results obtained with the proposed method are compared with basic SIFT face recognition and a classic appearance based face recognition technique (PCA) over three face databases: the Nottingham, Aberdeen and Iranian databases.

Nerella Arun Mani Kumar, P. S. Sathidevi
Image Denoising Based on Neutrosophic Wiener Filtering

This paper proposes an image denoising technique based on a neutrosophic set approach to Wiener filtering. A neutrosophic set (NS), a part of neutrosophy theory, studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra. We apply the neutrosophic set to the image domain and define some concepts and operators for image denoising. The image is transformed into the NS domain, which is described using three membership sets: True (T), Indeterminacy (I) and False (F). The entropy of the neutrosophic set is defined and employed to evaluate the indeterminacy. The ω-Wiener filtering operation is used on T and F to decrease the set indeterminacy and remove noise. We have conducted experiments on a variety of noisy images using different types of noise at different levels. The experimental results demonstrate that the proposed approach can remove noise automatically and effectively. In particular, it can process not only noisy images with different levels of noise, but also images with different kinds of noise, without knowing the type of the noise.

J. Mohan, A. P. Thilaga Shri Chandra, V. Krishnaveni, Yanhui Guo
Modified Difference Expansion for Reversible Watermarking Using Fuzzy Logic Based Distortion Control

Digital watermarking using Difference Expansion (DE) is quite popular for reversibly embedding data and subsequently recovering the original image. In this algorithm, the least significant bit (LSB) of the inter-pixel difference (between a pair of neighboring pixels) is used to embed data. It is seen that none of the DE works focuses on retaining structural information in the watermarked image at high embedding capacity; moreover, the security of the hidden data is not investigated under a distortion constraint scenario. To this aim, a modification of DE is proposed that not only increases the embedding space (and hence the watermark payload) but also makes little change in structure and contrast comparison (imperceptibility) under similar luminance backgrounds. A simple fuzzy function is used to classify the image content into smooth, texture and edge regions, followed by adaptive distortion control. The modification also makes little change in the relative entropy between the host and the watermarked data, which leads to better security of the hidden data.
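
For context, classic difference expansion (which the paper modifies) embeds one bit in the LSB of the doubled pixel-pair difference and remains exactly invertible. A minimal sketch, with overflow/underflow checks omitted for brevity:

```python
def de_embed(x, y, bit):
    l = (x + y) // 2            # integer average, preserved by embedding
    h = x - y                   # pixel difference
    h2 = 2 * h + bit            # expand and hide the bit in the LSB
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 // 2    # recover the bit and original difference
    return bit, l + (h + 1) // 2, l - h // 2

x2, y2 = de_embed(100, 98, 1)
print((x2, y2), de_extract(x2, y2))   # (102, 97) (1, 100, 98)
```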

Hirak Kumar Maity, Santi P. Maity, Debashis Maity
A Novel Genetic Algorithm Based Data Embedding Technique in Frequency Domain Using Z Transform (ANGAFDZT)

In this paper a transform domain based gray scale image authentication/data hiding technique using the Z transform (ZT), termed ANGAFDZT, is proposed. The Z-transform is applied on 2 x 2 masks of the source image to transform them into the corresponding frequency domain. Four bits of the hidden image are embedded in each mask of the source image, and the resulting image masks are taken as the initial population. New generation, crossover and mutation are applied on the initial population to obtain the stego image; the genetic algorithm is used to enhance the security level. During the embedding process, the dimension of the hidden image followed by the content of the message/hidden image is embedded, and the reverse process is followed during decoding. The high PSNR obtained for various images, compared to the existing Chin-Chen Chang et al. [1] scheme, confirms the quality of the invisible watermark of ANGAFDZT.

J. K. Mandal, A. Khamrui, S. Chakraborty, P. Sur, S. K. Datta, I. RoyChoudhury
Speckle Noise Reduction Using Fourth Order Complex Diffusion Based Homomorphic Filter

Filtering out speckle noise is essential in many imaging applications. Speckle noise creates a grainy appearance that masks diagnostically significant image features and consequently reduces the accuracy of segmentation and pattern recognition algorithms. For low contrast images, speckle noise is multiplicative in nature. The approach suggested in this paper makes use of a fourth order complex diffusion technique to perform homomorphic filtering for speckle noise reduction. Both quantitative and qualitative evaluations are carried out for different noise variances, and the proposed approach is found to outperform existing methods in terms of root mean square error (RMSE) and peak signal to noise ratio (PSNR).

Jyothisha J. Nair, V. K. Govindan
A New Hybrid Approach for Denoising Medical Images

The most significant task in diagnosing with medical images is the removal of impulse noise, which is commonly found in medical images, so as to obtain better image quality. In recent years, technological development has significantly improved the analysis of medical imaging. This paper proposes different hybrid filtering techniques for the removal of noise by a topological approach. The hybrid filters used here are hybrid median filters [the hybrid min filter (H1F) and the hybrid max filter (H2F)]. These filters are treated in terms of a finite set of certain estimation and neighborhood building operations; a set of such operations is suggested on the basis of the analysis of a wide variety of nonlinear filters described in the literature. The simulation results suggest that the proposed scheme yields better image quality after denoising. This approach incorporates both spatial domain and frequency domain analysis. Results obtained by the hybrid filtering technique are measured by the statistical quantities Root Mean Square Error (RMSE) and Peak Signal-to-Noise Ratio (PSNR). Overall, the results indicate that the proposed method performs enhancement better than other filtering techniques.

Deepa Bharathi, Sumithra Manimegalai Govindan
Detection of Cerebral Aneurysm by Performing Thresholding-Spatial Filtering-Thresholding Operations on Digital Subtraction Angiogram

Cerebral aneurysm (CA) has been emerging as a life threatening condition that has raised deep concern amongst neurologists in recent years. It is devastating because of the formation of an abnormal bulging of an artery in the human brain followed by its rupture; detection of this abnormality prior to rupture is therefore essential for saving lives. This paper addresses the detection of cerebral aneurysms of various sizes by combining the operations of spatial filtering and thresholding in an elegant way. A number of Digital Subtraction Angiogram (DSA) images affected by cerebral aneurysms of various magnitudes have been taken into consideration. Finally, the affected area is marked in red to make it more prominent than the other parts of the image.

Jubin Mitra, Abhijit Chandra
Backmatter
Metadata
Title
Advances in Computing and Information Technology
Edited by
Natarajan Meghanathan
Dhinaharan Nagamalai
Nabendu Chaki
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-31552-7
Print ISBN
978-3-642-31551-0
DOI
https://doi.org/10.1007/978-3-642-31552-7