
2018 | Book

Computational Vision and Bio Inspired Computing


About this Book

This is the proceedings of the International Conference on Computational Vision and Bio Inspired Computing (ICCVBIC 2017), held at RVS Technical Campus, September 21-22, 2017. It includes papers on state-of-the-art innovations in bio-inspired computing applications, where new algorithms and results are produced and described. Additionally, this volume addresses evolutionary computation paradigms, artificial neural networks and biocomputing. It focuses mainly on research into visual inference based on biological images. Computation over data sources also plays a major role in routine day-to-day life, for purposes such as video transmission, wireless applications, fingerprint recognition and processing, big data intelligence, automation and human-centric recognition systems. With the advantage of bio-inspired computation, a variety of computational paradigms can be processed. Finally, this book also treats the formation of neural networks by enabling local connectivity within them with the aid of vision sensing elements.

The work also provides potential directions for future research.

Table of Contents

Frontmatter
Genetic Algorithm Based Hybrid Attribute Selection Using Customized Fitness Function

Attribute selection is an important step in the analysis of gene expression for cancer or illnesses in general. The huge dimensionality of gene expression data, which includes many insignificant and redundant genes, reduces classification accuracy. In this study, we propose a hybrid attribute selection method to identify the small set of the most significant genes associated with the cause of cancer. The proposed method integrates the advantages of a filter and a wrapper to perform attribute selection by devising a customized fitness function for the genetic algorithm. Three data sets are used: leukemia, CNS and colon cancer. Results of our technique are compared with the other standard techniques available in the literature. The proposed hybrid approach produces comparably better accuracy than the standard implementation of the genetic algorithm.

C. Arunkumar, S. Ramakrishnan, Siva Sai Dheeraj
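
As a rough illustration of the hybrid filter-wrapper idea, the sketch below blends a mutual-information filter score with cross-validated wrapper accuracy inside a tiny genetic algorithm; the 0.7/0.2/0.1 weights, the synthetic data and the GA settings are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: GA fitness blending a filter score (mutual information)
# with a wrapper score (cross-validated accuracy), penalized by subset size.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=60, n_features=40, n_informative=5, random_state=0)
mi = mutual_info_classif(X, y, random_state=0)           # filter component

def fitness(mask):
    """Higher is better; the 0.7/0.2/0.1 weights are illustrative."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()
    return 0.7 * acc + 0.2 * mi[mask].mean() - 0.1 * mask.mean()

rng = np.random.default_rng(0)
pop = rng.random((20, X.shape[1])) < 0.2                 # initial population of gene masks
for gen in range(10):                                    # tiny GA loop
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]              # truncation selection
    cut = rng.integers(1, X.shape[1], 10)
    kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 10][c:]]
                     for i, c in enumerate(cut)])        # one-point crossover
    kids ^= rng.random(kids.shape) < 0.02                # bit-flip mutation
    pop = np.vstack([parents, kids])
print("best fitness:", max(fitness(ind) for ind in pop))
```
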
Application of Evolutionary Particle Swarm Optimization Algorithm in Test Suite Prioritization

Regression testing is a software verification activity carried out when the software is modified during the maintenance phase. To ensure the correctness of the updated software, it is suggested to execute the entire test suite again, but this would demand a large amount of resources. Hence, there is a need to prioritize and execute the test cases in such a way that the changed software is tested with maximum code coverage in minimum time. In this work, the Particle Swarm Optimization (PSO) algorithm is used to prioritize test cases based on three benchmark functions: Sphere, Rastrigin and Griewank. The results suggest that the test suites are prioritized in the least time when Griewank is used as the benchmark function to calculate the fitness. This approach saves approximately 80% of the testing effort in terms of time and manpower, since only 1/5 of the prioritized test cases from the entire test suite need to be executed.

Chug Anuradha, Narula Neha
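
A bare-bones PSO minimizing the Griewank benchmark, the fitness the abstract reports as fastest, might look as follows; the mapping from a particle to a test-case ordering (here a simple argsort) is a hypothetical stand-in for the paper's encoding.

```python
# Illustrative PSO minimizing the Griewank benchmark function.
import numpy as np

def griewank(x):
    i = np.arange(1, x.size + 1)
    return 1 + np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

rng = np.random.default_rng(1)
n, d = 30, 10                                # particles, dimensions (= test cases)
pos = rng.uniform(-600, 600, (n, d))
vel = np.zeros((n, d))
pbest, pval = pos.copy(), np.array([griewank(p) for p in pos])
g = pbest[pval.argmin()].copy()              # global best

for _ in range(200):
    r1, r2 = rng.random((2, n, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos += vel
    val = np.array([griewank(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    g = pbest[pval.argmin()].copy()

priority = np.argsort(g)                     # one way to read off a test order
print("best Griewank value:", pval.min(), "order:", priority)
```
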
An Autonomous Trust Model for Cloud Integrated Framework

Cloud integrated frameworks (CIF) lay the groundwork for controlling and managing data gathered from various sources. Data quality trust addresses detection of manipulation, concealment and rollbacks on data exchanged in a CIF. The work presented here proposes a trust enhanced CIF that comprehensively takes into account processing, transmission, privacy, data quality and recommendation, in addition to availability and bandwidth of the service. A weighted average computation, using weights derived from the Service Level Agreement given by the CSU, is proposed over a moving window on the performances maintained by the Trusted Centre Entity (TCE). The TCE monitors the system for malicious behaviours of CSPs that may change the behaviour of the system. It assures data quality using a Merkle hash tree (MHT) and manages the trust and reputation scores for the CSPs in the trust enhanced CIF. It performs public auditability to detect data rollback, data concealment and incorrect data. The work proposes a scheme that provides for autonomous trust management in CIFs.

C. K. Shyamala, Ashwathi Chandran
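
The Merkle-hash-tree check that underpins the data-quality auditing can be sketched in a few lines; details such as the hash function and odd-node duplication are common conventions assumed here, not taken from the paper.

```python
# Minimal Merkle hash tree: any change to a data block changes the root,
# so a verifier can detect manipulation or rollback by comparing roots.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(blocks):
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"rec-1", b"rec-2", b"rec-3"]
root = merkle_root(blocks)
assert merkle_root([b"rec-1", b"rec-X", b"rec-3"]) != root   # tamper detected
print(root.hex())
```
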
Combined Classifier Approach for Offline Handwritten Devanagari Character Recognition Using Multiple Features

Offline handwritten character recognition is the process of recognizing given characters from a large set of characters. An OCR system mainly focuses on the recognition of printed or handwritten characters in a scanned image. The proposed system extracts features that are based only on the gradient of the image, which helps in exact recognition of characters. A technique to recognize handwritten Devanagari characters using a combination of quadratic and SVM classifiers is presented in this paper. The features used are directional features: strength of gradient (SOG), angle of gradient (AOG) and histogram of gradients (HOG). Using a Gaussian filter, the strength and angle features are down-sampled to obtain a feature vector of 392 dimensions. These features are finally concatenated with the HOG feature. Applying these to the combination of quadratic and SVM classifiers yields a maximum accuracy of 95.81% using 3-fold cross validation.

Milind Bhalerao, Sanjiv Bonde, Abhijeet Nandedkar, Sushma Pilawan
Comprehensive Study on Usage of Multi Objectives in Recommender Systems

Recommender systems have changed their purview from prediction accuracy oriented to finding more relevant and useful recommendations for the user. The “usefulness” of items differs across applications. This paper summarizes the work that has been done in this direction. Personalization, context awareness, multiple objectives of recommendations and evaluation metrics are reviewed in this paper.

M. Sruthi, Sini Raj Pulari, Ramesh Gowtham
Visual Analysis of Genetic Algorithms While Solving 0-1 Knapsack Problem

This paper presents a heat map based visual analysis of a Genetic Algorithm (GA) solving the 0-1 Knapsack Problem (KP). The current work is a preliminary investigation to understand the search strategy of a GA solving the KP through visual means. A simple GA has been employed to solve 0-1 KPs with 50, 100 and 500 items. Heat map based visualization of the best chromosomes clearly shows the explorative and exploitative search strategies of the GA in conjunction with its convergence characteristics. This paper demonstrates the potential of visualization to analyze and understand Evolutionary Algorithms (EA) in general.

B. P. Sathyajit, C. Shunmuga Velayutham
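
One way to reproduce the visualization idea, under assumed GA settings and synthetic item weights and values, is to stack each generation's best chromosome into a matrix and render it as a heat map:

```python
# Sketch: record the best 0-1 knapsack chromosome per GA generation and
# show the stack as a heat map (penalty: infeasible solutions score zero).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 50                                            # items
w, v = rng.integers(1, 30, n), rng.integers(1, 30, n)
cap = w.sum() // 3

def fit(ind):
    return (ind * v).sum() if (ind * w).sum() <= cap else 0

pop = rng.random((40, n)) < 0.5
history = []
for _ in range(60):
    f = np.array([fit(i) for i in pop])
    history.append(pop[f.argmax()].copy())        # this generation's best
    parents = pop[np.argsort(f)[-20:]]
    kids = parents[rng.integers(0, 20, 20)] ^ (rng.random((20, n)) < 0.03)
    pop = np.vstack([parents, kids])

plt.imshow(np.array(history), cmap="hot", aspect="auto")
plt.xlabel("gene (item)"); plt.ylabel("generation")
plt.title("Best chromosome per generation")
plt.show()
```
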
Securing Image Posts in Social Networking Sites

Social networks are among the most powerful communication technologies the Internet brings to people. Starting from the exchange of text messages, usage extends to posting images and videos on social networking sites, which are viewed by many people. When these social networks are available to all users for free, this leads to various types of security issues. Image security has been a topic of research for decades; enhancements to individual techniques and the combinations proposed to date have offered different levels of security assurance. This paper aims to present a technique for secure sharing of image posts in a social network. The significant feature of the scheme lies in the selection of the security technique based on image content, and in the evaluation of peers with whom the image can be shared based on text classification, transliteration and tone analysis. The proposed scheme is a cost-effective solution as it does not require any additional hardware. The utility of the model is demonstrated by mapping the scheme onto Facebook and analyzing its performance through simulation.

M. R. Neethu, N. Harini
Types of Multi-granular Nanotopology and Its Applications

In this paper, the difference, the relationship and the means of calculating the significant measure, the nano degree of dependence of decision attributes, in practical applications of Multi-Granular nanotopology and Multi*-Granular nanotopology, which are types of nanotopological models, are studied. Moreover, an example is taken to highlight the Multi*-Granular nanotopology and to examine the practical implementation of this hypothesis.

K. Indirani, G. Vasanthakannan
Early Detection of Lung Cancer Using Wavelet Feature Descriptor and Feed Forward Back Propagation Neural Networks Classifier

A Computed Tomography (CT) scan is the most used technique for distinguishing harmful lung cancer nodules. A Computer Aided Diagnosis (CAD) framework for the recognition of lung nodules in thoracic CT pictures is implemented. A lung nodule, which can be either benign or malignant, can thus be easily classified to better support treatment. The proposed work is based on a wavelet feature descriptor combined with an artificial neural network for classification. Statistical attributes such as autocorrelation, entropy, contrast and energy are computed after applying the wavelet transform and used as input parameters for the neural network classifier. The NN classifier is designed by considering training functions (Traingd, Traingda, Traingdm, and Traingdx) using a feed forward neural network and a feed forward back propagation network. The feed forward back propagation neural network gives better classification results than the plain feed forward network. The proposed classifier produced an accuracy of 92.6%, a specificity of 100%, a sensitivity of 91.2% and a mean square error of 0.978.

R. Arulmurugan, H. Anandakumar
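
A hedged sketch of such a feature pipeline: a 2-D wavelet transform, per-sub-band energy and entropy statistics, and scikit-learn's MLP as a stand-in for the Traingd-family feed-forward back-propagation network, on synthetic patches rather than CT data.

```python
# Wavelet sub-band statistics as features, then a small neural classifier.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(img):
    cA, (cH, cV, cD) = pywt.dwt2(img, "db2")
    feats = []
    for band in (cA, cH, cV, cD):
        p = np.abs(band).ravel()
        p = p / (p.sum() + 1e-12)
        feats += [np.sum(band**2),                    # energy
                  -np.sum(p * np.log2(p + 1e-12))]    # entropy
    return feats

rng = np.random.default_rng(3)
imgs = rng.random((80, 32, 32))
imgs[:40] += 0.5                                      # fake "nodule" class
X = np.array([wavelet_features(i) for i in imgs])
y = np.r_[np.ones(40), np.zeros(40)]
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```
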
A Study on Fuzzy Weakly Ultra Separation Axioms via Fuzzy β-Kernel Set

In this paper, we introduce a new class of fuzzy closed sets called fuzzy $\widehat{\mu}\beta$-closed sets, and we also introduce the concept of the fuzzy $\widehat{\mu}\beta$-kernel set in a fuzzy topological space. We also investigate some of the properties of weak fuzzy separation axioms like the fuzzy $\widehat{\mu}\beta$-$R_i$ space, $i = 0, 1, 2, 3$, and the fuzzy $\widehat{\mu}\beta$-$T_i$ space, $i = 0, 1, 2, 3, 4$.

J. Subashini, K. Indirani
Selection of Algorithms for Pedestrian Detection During Day and Night

This paper presents an image processing based pedestrian detection system for day and night, contributing to Advanced Driver Assistance Systems (ADAS). Histogram of Oriented Gradients (HOG) with a Support Vector Machine (SVM) as a linear classifier is compared and analyzed against a Convolutional Neural Network (CNN) to select the best-performing algorithm for pedestrian detection during both day and night. Performance analysis was done on standard datasets such as INRIA and ETH, and on locally created datasets, on an Intel processor with the Ubuntu operating system. The HOG-SVM algorithm was implemented using DLib, Python (2.7) and OpenCV (3.1.0), and an accuracy of 96.25% for day and 96.55% for night was obtained. The Convolutional Neural Network was implemented using Anaconda3, TFLearn and Python (3.6), and this scheme achieved an accuracy of 99.35% for day and 99.9% for night.

Rahul Pathak, P. Sivraj
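
For the HOG-SVM side, OpenCV ships a pre-trained linear-SVM people detector that shows the pipeline's shape in a few lines; the paper's own training on INRIA/ETH is not reproduced, and the input file name is hypothetical.

```python
# OpenCV's built-in HOG person detector (default pre-trained linear SVM).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("street.jpg")                 # hypothetical input frame
assert img is not None, "provide an input image"
rects, weights = hog.detectMultiScale(img, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in rects:                     # draw one box per detection
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```
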
Hybrid User Recommendation in Online Social Network

The social web and recommendation systems have become an indispensable part of today’s e-world. In such an environment, most communications and transactions happen with unfamiliar persons. Any communication with unfamiliar persons is problematic as it lacks trust; thus trust is an important factor in integrating a recommendation system with the social web. In the current scenario, what is essential is the recommendation of trustable individuals with similar behavior, or of trusted products. The work reported in this paper aims at recommending trustable, similar individuals. The current work proposes user-user recommendation methods. Being a hybrid approach, it uses a content-based technique and a collaborative technique for the recommendation. Similar users are identified using the content-based technique with Formal Concept Analysis (FCA) and the Jaccard index, and trustable users are identified using the collaborative technique. The trusted similar users are recommended using a decision tree algorithm. The recommendation provided by this hybrid system is evaluated for its accuracy.

A. Christiyana Arulselvi, S. SendhilKumar, G. S. Mahalakshmi
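
The content-based similarity step can be illustrated with the Jaccard index over per-user item sets; the profiles below are made up.

```python
# Jaccard index between users' item/tag sets: |A ∩ B| / |A ∪ B|.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

profiles = {
    "alice": {"python", "ml", "nlp"},
    "bob":   {"python", "ml", "vision"},
    "carol": {"cooking", "travel"},
}
target = "alice"
sims = {u: jaccard(profiles[target], s)
        for u, s in profiles.items() if u != target}
print(sorted(sims.items(), key=lambda kv: -kv[1]))   # most similar users first
```
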
Geometrical Method for Shadow Detection of Static Images

Current applications consist of a number of services: there are basic services, to which additional services, called add-on services, are added as per the requirements of the basic services. The application domain of tracking has object identification as one of its basic services. The QoS conditions then specified identify the objects of interest along with a number of false identifications; object identification with false identifications may thus further lead to false tracking. This problem basically arises due to other entities having a shape similar to that of the object of interest. One such entity is the shadow of the object of interest. In this paper, removal of these shadows forms the objective, which is presented in detail utilizing a geometrical method.

Manoj K. Sabnis, Kavita, Manoj Kumar Shukla
Medical Diagnosis Through Semantic Web

The Semantic Web works towards mining the semantics of data from the collection of current web resources and processing it further, rather than pattern matching during the information extraction process, thereby leading towards the automation of the knowledge extraction procedure. Healthcare is one of the major domains where huge data production happens on a daily basis, yet there is no specific technique or model to successfully utilize the available information during the course of diagnosis. The key to upgrading is to raise awareness among the people. This paper aims at developing a model using Semantic Web and ontology concepts and the Apache Jena reasoner to improve and refine the basic clinical skills required to provide effective and efficient primary care. The proposed work, Healthub, is evaluated with respect to correctness and accuracy of diagnosis. Results obtained using the Apache Jena reasoner show promising responses, close to expert conclusions.

P. Monika, M. R. Vinutha, B. N. Srihari, S. Harikrishna, S. M. Sumangala, G. T. Raju
An Algorithm for Text Prediction Using Neural Networks

Neural networks have become increasingly popular for the task of language modeling. Whereas feed-forward networks only exploit a fixed context length to predict the next word of a sequence, standard recurrent neural networks can, conceptually, take into account all of the predecessor words. In this paper, a machine learning algorithm is proposed which, given a dataset of conversations, trains itself using neural networks and can then be used to obtain reply suggestions for any particular input sentence. Currently, smart suggestions have been implemented in chat applications; Google uses similar techniques to provide smart replies in email, through which the user can reply to a particular email with just a single tap.

Bindu K. R., Aakash C., Bernett Orlando, Latha Parameswaran
A Deeper Insight on Developments and Real-Time Applications of Smart Dust Particle Sensor Technology

Sensors are major technological advancements which have contributed abundantly to fields such as IoT and Big Data. Sensors are used to sense, detect and respond to various electrical and optical signals. In earlier days, sensors were very large, with high power consumption and poor computational capabilities. Hence smart sensors, ranging in size down to tiny dust particles, with higher computational and processing capabilities, were introduced. This paper takes you on a tour of the various developments in the field of smart dust technology and the various real-time applications that use the concept.

Shreya Sathyan, Sini Raj Pulari
Comparative Analysis of Biomedical Image Compression Using Oscillation Concept and Existing Methods

Medical image compression has an important role in hospitals as they move towards filmless imaging and go completely digital. Medical imaging poses a great challenge for compression algorithms: the data must be compressed so as to avoid diagnostic errors, yet at high compression rates for reduced storage and transmission time. In this paper, a comparative analysis of biomedical image compression using the oscillation concept and existing methods is done, and the maximum compression ratio is achieved by the oscillation concept method. Other parameters such as PSNR, MSE, MSSIM and standard deviation are also good compared to the existing methods.

Satyawati S. Magar, Bhavani Sridharan
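
For reference, the MSE and PSNR metrics the comparison relies on, in minimal form for 8-bit images (MSSIM would come from a library such as scikit-image):

```python
# MSE and PSNR between an original image and its reconstruction.
import numpy as np

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(peak**2 / m)

orig = np.random.default_rng(4).integers(0, 256, (64, 64))
recon = np.clip(orig + np.random.default_rng(5).normal(0, 2, orig.shape), 0, 255)
print(f"MSE={mse(orig, recon):.2f}  PSNR={psnr(orig, recon):.2f} dB")
```
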
Clustering of Trajectory Data Using Hierarchical Approaches

Large volumes of spatiotemporal data in the form of trajectories are generated from GPS-enabled devices such as smartphones, cars, sensors, and social media. In this paper, we present a methodology for clustering trajectories to identify patterns in vehicle movement. The trajectories are clustered using a hierarchical method, and the similarity between trajectories is computed using the Dynamic Time Warping (DTW) measure. We study the effects on clustering of varying the linkage methods used for clustering the trajectories. The clustering method generates clusters that are spatially similar, and optimal results are obtained during the clustering process. The results are validated using the Cophenetic correlation coefficient, Dunn, and Davies-Bouldin indices while varying the number of clusters, and are tested for efficiency using real-world data sets. Experimental results demonstrate that hierarchical clustering using the DTW measure can cluster trajectories efficiently.

B. A. Sabarish, R. Karthi, T. Gireeshkumar
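
A compact version of the pipeline: pairwise DTW distances over toy trajectories, followed by SciPy agglomerative clustering, where the linkage method is the parameter the paper varies.

```python
# DTW distance matrix over toy 2-D trajectories, then hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 25)
trajs = [np.c_[t, t], np.c_[t, t + 0.05],            # two similar tracks
         np.c_[t, 1 - t], np.c_[t, 0.95 - t]]        # two opposite tracks
k = len(trajs)
D = np.zeros((k, k))
for i in range(k):
    for j in range(i + 1, k):
        D[i, j] = D[j, i] = dtw(trajs[i], trajs[j])

Z = linkage(squareform(D), method="average")          # vary: single/complete/...
print(fcluster(Z, t=2, criterion="maxclust"))         # -> two clusters
```
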
Sentiment Analysis of Twitter Data on Demonetization Using Machine Learning Techniques

Social media platforms like Twitter and Facebook are seen as spaces where public opinions are formed in today’s world. The data from these tweets and posts can provide valuable insights for policy makers and other agencies to propose and implement policies better. An attempt is made in this paper to understand public opinion on the recently implemented demonetization policy in India. A sentiment analysis is carried out on a Twitter data set using machine learning approaches. Twitter data from November 9th to December 3rd is considered for analysis. The data set is pre-processed to clean the data and make it suitable for analysis. A final set of 5000 tweets is analysed using machine learning techniques like SVM, the Naïve Bayes classifier and a decision tree, and the results are compared.

N. M. Dhanya, U. C. Harish
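
The classifier comparison can be sketched with a TF-IDF representation and scikit-learn's SVM, Naive Bayes and decision tree; the tweets below are invented stand-ins for the collected data set.

```python
# Compare three classifiers on TF-IDF features via cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

tweets = ["demonetization is a bold good move", "standing in queue all day, terrible",
          "cashless economy great step", "no cash, small business suffering",
          "support the policy fully", "worst decision ever made"] * 5
labels = [1, 0, 1, 0, 1, 0] * 5                 # 1 = positive, 0 = negative

X = TfidfVectorizer().fit_transform(tweets)
for clf in (LinearSVC(), MultinomialNB(), DecisionTreeClassifier(random_state=0)):
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(type(clf).__name__, f"{acc:.2f}")
```
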
Curvelet Based ECG Steganography for Protection of Data

ECG steganography provides safe communication over the Internet, such as the transfer of patient information embedded in ECG signals. A data hiding technique is used here, in which steganography hides the patient information in an image, video, audio or signal. The most important challenge in ECG steganography is to extract the data and the signal without loss. In this technique, we use the curvelet transform for evaluating coefficients, convert the information into binary format using quantization, and apply an adaptive LSB algorithm for embedding. The ECG signals for steganography are taken from the MIT-BIH database. From the outcomes we have observed that signal loss after embedding is low when coefficients near zero are used. Overlapping of the embedded watermark is avoided by using a sequence. The embedding performance is observed using parameters such as peak signal to noise ratio, percentage residual difference, correlation, and time elapsed at embedding. The extraction performance is measured using correlation, BER and elapsed-time parameters; at extraction, the bit error rate is used to calculate the extraction capacity. As the size of the patient information rises, the BER remains zero but the original signal deteriorates. The curvelet transform removes the limitations of the wavelet transform based watermarking technique; hence the curvelet transform is a very efficient technique.

Vishakha Patil, Mangal Patil
Kernel Based Approaches for Context Based Image Annotation

The exploration of contextual information is very important for any automatic image annotation system. In this work, a method based on kernels and a keyword propagation technique is proposed. Automatic annotation of each image with a set of keywords is carried out by learning the image semantics. The similarity between the images is calculated by the Hellinger kernel and the Radial Basis Function (RBF) kernel. The images are labelled with multiple keywords using contextual keyword propagation. The results of using the two kernels on the set of extracted features are analysed. The annotation results obtained were validated using a confusion matrix and were found to have good accuracy. The main advantage of this method is that it can propagate multiple keywords, and no definite structure for the annotation keywords has to be assumed.

L. Swati Nair, R. Manjusha, Latha Parameswaran
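
The two kernels themselves are small; on histogram-like, L1-normalized feature vectors (the feature extraction is not reproduced here) they might be written as:

```python
# Hellinger kernel (Bhattacharyya coefficient) and RBF kernel on feature vectors.
import numpy as np

def hellinger_kernel(x, y):
    # assumes x, y are non-negative and L1-normalized (histograms)
    return np.sum(np.sqrt(x * y))

def rbf_kernel(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

x = np.array([0.2, 0.5, 0.3])
y = np.array([0.1, 0.6, 0.3])
print(hellinger_kernel(x, y), rbf_kernel(x, y, gamma=2.0))
```
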
Analysis of Trabecular Structure in Radiographic Bone Images Using Empirical Mode Decomposition and Extreme Learning Machine

Biomechanical function is a primary concern in the musculo-skeletal supporting system of our body. The functional adaptation of the femur bone to the mechanical environment is an important component of bone mechanics research. The strength and architecture of the structures in femur bones are routinely analyzed by varied image based methods for diagnosis and monitoring of metabolic disorders like osteoporosis. In this work, an attempt has been made at the investigation of trabecular femur bone architecture using spatial frequency decomposition and neural networks. Conventional radiographic femur bone images are recorded using standard protocols in this analysis. The compressive and tensile regions in the femur bone images are delineated using preprocessing procedures. The delineated images are analyzed using Fast and Adaptive Bi-dimensional Empirical Mode Decomposition (FABEMD) to quantify pattern heterogeneity and anisotropy of the trabecular bone structure. The characteristic feature vectors are extracted and further subjected to classification using an Extreme Learning Machine (ELM). Results show that FABEMD analysis combined with ELM can differentiate normal and abnormal images.

G. Udhayakumar
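
An Extreme Learning Machine is compact enough to sketch in full: random, fixed hidden weights and a closed-form pseudo-inverse solve for the output weights, shown here on synthetic two-class data rather than the bone features.

```python
# Minimal ELM: random hidden layer, output weights solved in one shot.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)            # toy labels

L = 50                                               # hidden neurons
W = rng.normal(size=(8, L))                          # fixed random input weights
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                               # hidden layer output
beta = np.linalg.pinv(H) @ y                         # output weights, closed form

pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```
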
Design of an Optimization Routing Model for Real Time Emergency Medical Service System in Chennai Using Fuzzy Techniques

This paper deals with the design of an optimization routing model for the Emergency Medical Service System (EMSS) in Chennai. The effectiveness of the EMSS plays a vital role in protecting society. The major idea of this article is to examine real time flexible dispatching strategies so that crucial response time can be saved for the EMSS. An optimization routing model is designed for developing flexible dispatching strategies with the help of duration information. The mathematical model is expressed as an IPP technique in which a diminished path is assigned. Based on the numerical calculations and graphical representation, different parameters such as duration prediction, incident/vehicle tracking and consignment optimization are analyzed, and the model is validated for road networks.

C. Vijayalakshmi, N. Anitha
Simulation of Cortical Epileptic Discharge Using Freeman’s KIII Model

Advancements in Neuroscience have put forth many networks that are able to mimic cortical activity. One such biologically motivated network is the Freeman KIII Model. The Freeman KIII model is based on the mammalian olfactory system dynamics. It consists of a collection of second-order non-linear differential equations. This paper attempts to solve the equations using MATLAB in order to simulate cortical electroencephalographic (EEG) signals. We also attempt to simulate cortical epileptic discharge using this model. The degenerate state of epileptic seizure is analyzed by obtaining its frequency using the power spectrum and by plotting the phase plots.

Pooja Vijaykumar, R. Sunitha, N. Pradhan, A. Sreedevi
Perceptual Features Based Rapid and Robust Language Identification System for Various Indian Classical Languages

In this paper, we have investigated the development of a robust recognition system to identify the language of a spoken utterance in several classical Indian languages. The ability of machines to identify the language in which communication takes place is paramount in the current global scenario. In this work on VQ clustering based language identification, feature vectors extracted from training speeches are converted into a set of language specific clustering models. During testing, features extracted from test speeches are applied to the language specific clustering models, and the hypothesized language is identified based on the minimum of the averages corresponding to the models using a minimum distance classifier. The clustering technique is evaluated with variations in cluster size and length of the test utterances. Better performance is observed for a cluster size of 64 and 900 test utterances of 2–3 s duration. It is noteworthy that, though clustering is an old technique, it provides 97% average recognition accuracy, a 3.13% average false acceptance rate and a 3.13% false rejection rate for testing done on the language specific clustering model with 64 as the cluster size. To improve the accuracy, formants are also used as a feature, and it is observed that formant frequencies do provide complementary evidence in ensuring better accuracy for some of the languages. Performance of the system is assessed by calculating the identification rate as the one corresponding to correct identification with respect to either MFPLPC or formants, and the overall average accuracy obtained is 99.4% for the clustering model with 32 as the cluster size. Experiments are conducted on a database containing speech utterances in seven classical and phonetically rich, speaker specific Indian languages: Bengali, Hindi, Kannada, Malayalam, Marathi, Tamil and Telugu.

A. Revathi, C. Jeyalakshmi, T. Muruganantham
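
The VQ-clustering scheme can be sketched as one KMeans codebook per language and a minimum-average-distortion decision; the Gaussian blobs below are synthetic stand-ins for MFPLPC frame vectors.

```python
# One codebook per language; pick the language with least average distortion.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
train = {"tamil": rng.normal(0.0, 1, (500, 13)),
         "hindi": rng.normal(1.5, 1, (500, 13))}
codebooks = {lang: KMeans(n_clusters=8, n_init=4, random_state=0).fit(f)
             for lang, f in train.items()}

def identify(frames):
    scores = {}
    for lang, km in codebooks.items():
        d = np.min(np.linalg.norm(frames[:, None] - km.cluster_centers_, axis=2), axis=1)
        scores[lang] = d.mean()                      # average quantization distance
    return min(scores, key=scores.get)

test = rng.normal(1.5, 1, (120, 13))                 # should match "hindi"
print(identify(test))
```
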
Premature Cardiac Verdict Plus Classification of Arrhythmias and Myocardial Ischemia with k-NN Classifier

This work develops a distinctive outline for a feature extraction procedure based on the Discrete Wavelet Transform of the ECG signal for arrhythmia detection, in addition to stenosis detection for Myocardial Ischemia with real Cardiac Computed Tomography Angiogram (RCCTA) images of both genders across different age groups. Subsequently, both cardiac diseases have been classified with the k-NN classifier separately, based on their levels of severity: as normal and abnormal for arrhythmia patients, and as standard, early, minor and serious for Myocardial Ischemia patients. The objective of this work is to classify accurately and to develop a novel arrhythmia detection taxonomy and 100% stenosis detection that can point towards early diagnosis of Myocardial Ischemia (lack of oxygenated blood supply to the heart). In the results, the DWT features work commendably for the classifier with up to 94% accuracy, and with the RCCTA images the blockage area is clearly detected; the classification is also close to the ground-truth values.

M. Inbalatha, S. Kalaivani
A Novel Algorithm for Triangulating 2D Point Clouds Using Multi-Dimensional Data Structures and a Nearest Neighbour Approach

Generating a mesh from a point cloud has always been a complex and tedious task. It requires many guidelines to be followed and geometry-specific boundary conditions to be taken care of; mostly we end up writing a new algorithm for each scenario. Existing algorithms like Delaunay triangulation end up taking a lot of time without handling internal holes properly, and convergence is not guaranteed. In this paper, we present a recursion-free algorithm for triangulating a structured surface with n vertices, with or without holes, using ball trees. The algorithm takes the input in the form of an image, creates a mask for the object to be triangulated, and then creates a mesh for it with respect to the computed vertices.

Sundeep Joshi, Shriram K. Vasudevan
Cane Free Obstacle Detection Using Sensors and Navigational Guidance Using Augmented Reality for Visually Challenged People

This project concentrates on obstacle detection for visually impaired people using IR and ultrasonic sensors and a TSOP receiver, which is low-cost, fast and efficient. A shortest distance algorithm is used to find the obstacles in each region reported by the TSOP receivers mounted at different angles. This works in both indoor and outdoor environments. Experience comes from learning, and the important requirement for learning is the sense of vision, through which a normal human being sees and observes the real world. In the case of visually impaired (VI) people, however, the sense of vision cannot help them in learning, so other senses like hearing, touch and smell help them in various situations. The whole setup is wearable, lightweight and mounted on a belt, enabling VI people to walk through situations like pits and steps by providing haptic feedback as output. The project uses an Arduino Mega, which eases programming and also helps in testing various parameters with the available open source library routines. Since the targeted users are VI people, navigational guidance can be provided through the Augmented Reality domain, as the technology is growing rapidly and can be more reliable for the user.

Sriram Ramachandran, Prashant R. Nair, Shriram K. Vasudevan
Preprocessing of Lung Images with a Novel Image Denoising Technique for Enhancing the Quality and Performance

Recently, digital image processing has attracted many researchers due to its significant performance in real-time applications such as bio-medical systems, security systems and automated computerized diagnosis systems. A lung cancer detection and diagnosis system is one such real-time health care application which requires automated processing, wherein the images are captured and processed through a computer. Although various automated systems are already in place for this application, precise automatic lung cancer detection still remains a challenging task for researchers, because unwanted signals get added to the original signal during the image capturing process, which may degrade the image quality, in turn resulting in degraded performance. To avoid this, image preprocessing has become an important stage, with edge detection, image resampling, image enhancement and image denoising as key components for enhancing the quality of the input image. This paper aims to improve the quality of lung images with an image denoising technique to enhance the overall performance of the automated diagnosis system. A novel approach to image denoising is proposed, applying pixel classification using Multinomial Logistic Regression (MLR) and a Gaussian Conditional Random Field (GCRF). The proposed approach comprises two major steps: parameter generation using MLR, and the design of an inference network whose layers perform the computations involved in the GCRF formulation. Experimental analysis is done on LIDC, an open source benchmark database, and the performance is compared with state-of-the-art filtering schemes. Results show that the proposed approach performs better in terms of PSNR values by a factor of 2.7 when compared to existing approaches.

C. Rangaswamy, G. T. Raju, G. Seshikala
Computer Aided Diagnosis of Skin Tumours from Dermal Images

A skin tumour is an uncontrolled growth of skin cells which may be cancerous. The aim is to develop computer aided diagnosis for skin tumours. Dermal images of three types, benign tumour, malignant melanoma and normal moles, are obtained from the authorised PH2 database. Pre-processing is performed to remove hair cells. A contour based level set technique is used for segmentation of the lesion, from which clinical and morphological features are extracted. The significant features are obtained using the Random Subset Feature Selection technique. Classification is performed using three classifiers: back propagation, pattern recognition and support vector machine. The efficiency of the three classifiers is determined to be 94, 96 and 98% respectively using the classifier performance parameters. A one-way ANOVA test is performed to analyse the efficiency of the three classifiers. With these results, the support vector machine is confirmed as the most accurate classifier for this classification.

T. R. Thamizhvani, Suganthi Lakshmanan, R. Sivaramakrishnan
Segmentation of MRI Brain Images and Creation of Structural and Functional Brain Atlas

In this work an analysis of the different segmentation algorithms has been done, in order to select the most appropriate method for segmenting the brain into its corresponding regions. This selection is done by comparing the segmented regions with the ground truth images using measures like Dice, Precision, Recall and Score. The segmented regions are then labelled by their respective part names and used for creation of structural and functional brain atlas. This atlas would be of great use to doctors in providing assistance in disease analysis and can also be used as a teaching/learning kit.

Hema P. Menon, Reshma Hiralal, A. Anand Kumar, Davidson Devasia
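
The overlap measures used for validation are worth stating precisely; for binary masks, Dice, precision and recall reduce to:

```python
# Overlap measures between a predicted segmentation and the ground truth.
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def precision_recall(pred, truth):
    tp = np.logical_and(pred, truth).sum()
    return tp / pred.sum(), tp / truth.sum()

truth = np.zeros((10, 10), bool); truth[2:8, 2:8] = True
pred = np.zeros((10, 10), bool); pred[3:9, 3:9] = True
print("Dice:", dice(pred, truth), "P/R:", precision_recall(pred, truth))
```
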
Noninvasive Assistive Method to Diagnose Arterial Disease-Takayasu’s Arteritis

Takayasu’s arteritis (TA) is a rarely studied primary systemic vasculitis involving the aorta and other major arteries of the body. This work proposes a novel noninvasive assessment method, combining both time domain and frequency domain analysis of peripheral signals such as photoplethysmography (PPG) in normal subjects and TA patients, providing information about the severity of the TA disease. The novelty of the proposed method is twofold: one, a novel signal processing technique, the Auto-correlated Spectrum, for analyzing PPG signals; and two, the use of noninvasive techniques from multiple-site PPG for quantifying the severity. PPG signals from twenty TA patients and twenty normal subjects have been acquired from five different peripheral sites in the body and compared. The Auto-correlated Spectra of the multiple-site PPG signals are calculated. A novel parameter called the P-measure is derived using the relation between the number of peaks and the average distance of the peaks from the origin of the spectrum. The P-measure is used for classifying normal and diseased subjects using a binary classification method: when it is greater than or equal to 0.32 the subject is considered normal, and otherwise diseased. The sensitivity and specificity values of this classification method are 96 and 83% respectively. The method is also compared with other frequency domain analyses, and this technique can be a simple, cost-effective assessment tool to reduce cardiovascular morbidity and mortality in the rarely studied TA, and perhaps other arterial diseases. The small TA population, due to the rarity of the disease, is a major problem in acquiring data.

Suganthi Lakshmanan, Dipanjan Chatterjee, Manivannan Muniyandi
An Automated Vision Based Change Detection Method for Planogram Compliance in Retail Stores

Planograms are visual representations of a store’s products and services, designed to help retailers ensure that the right merchandise is consistently on display and that inventory is controlled at a level that guarantees the right number of products on each and every shelf. The main objective of this work is to propose an algorithm, based on image processing and machine learning, to find and detect changes in the arrangement of objects present in retail stores. The proposed algorithm is capable of identifying void space and counting objects of a similar type, and thus helps in tracking changes. Blob detection, followed by classification using a discriminative machine learning approach on the extracted statistical features of the objects, has been used in this algorithm. Experimental results are quite promising, and hence this algorithm can be used to detect any changes occurring in a scene.

Muthugnanambika M., Bagyammal T., Latha Parameswaran, Karthikeyan Vaiapury
A Study on Various Quantification Algorithms for Diabetic Retinopathy and Diabetic Maculopathy Grading

Diabetes, also known as diabetes mellitus (DM), is a prominent disease all over the world. It is a metabolic disorder occurring due to high blood sugar levels over a prolonged period. Prolonged diabetes causes diabetic retinopathy, which affects the retina; diabetes affecting the macular area is called diabetic maculopathy. The development of automated systems for identification, grading and quantification of the retinal pathologies associated with DM is on the rise. There are four popular modalities that are useful for clinical diagnosis and treatment of diabetic maculopathy: slit-lamp biomicroscopy, color fundus images, fundus fluorescein angiograms (FFA) and optical coherence tomography (OCT). It is observed that FFA plays a vital role in the treatment of diabetic macular edema (DME). There are two major types of diabetic retinopathy: non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). NPDR shows up as retinal exudates, cotton wool spots, microvascular abnormalities, superficial retinal hemorrhages or microaneurysms. PDR is characterized by severe small retinal vessel damage and reduced oxygenization of the retina. Here, a survey on the quantification of macular edema, retinal exudates, microaneurysms and other retinal pathologies in diabetic maculopathy and diabetic retinopathy is elaborated.

Parvathy Ram, T. R. Swapna
Mango Leaf Unhealthy Region Detection and Classification

Diseases in any plant decrease the productivity and quality of the product. Identification of plant leaf diseases by the naked human eye is very difficult. Image processing techniques can identify a diseased leaf by preprocessing and classifying unhealthy leaf regions. This paper presents an implementation of mango leaf unhealthy region detection and classification. In the proposed work, a multiclass SVM is used for disease classification, with segmentation through k-means. The experimental results show the effectiveness of the proposed method in recognizing disease-affected mango leaves.

K. Srunitha, D. Bharathi
Personalized Research Paper Recommender System

Personalization is an emerging topic in the field of research paper recommender systems and academic research. It is a technique to create efficient user profiles to achieve improved recommendations. Our work proposes a new user model to understand user behavior for personalization. This model initially extracts keywords based on the online behaviour of the user. The subsequent steps include concept extraction and user profile ontology construction to derive inferences and define relationships. The suggested model clearly depicts a hierarchical ordering of the user’s long-term and current research interests. Furthermore, the adoption of our model contributes to the improvement of recommendations.

Thota Sripadh, Gowtham Ramesh
A Vision Based DCNN for Identify Bottle Object in Indoor Environment

Vision based detection and classification is an emerging area of research in the field of automation. Due to the demand for automation in different fields, artificial intelligence architectures play a vital role in addressing the issues. Conventional architectures used for computer vision problems depend heavily on user-defined features, but the new deep learning techniques provide an alternative by automatically learning problem-related features. The classification problem can be designed based on features learned by a DCNN. The performance of the DCNN algorithm varies based on the training. In this paper, the performance of a Deep Convolutional Neural Network (DCNN) is analyzed in classifying categories of bottle objects.

Lolith Gopan, R. Aarthi
Segmentation of Brain Parts from MRI Image Slices Using Genetic Algorithm

In this work, a genetic algorithm based approach for segmenting the parts of brain MRI (Magnetic Resonance Imaging) image slices is presented. Segmentation of brain MRI images has been a challenging task and an open area for research of late, because the intensity differences between the different regions present in the image are very small; hence, complete automation of the segmentation process is difficult. In this work, the various parameters of the genetic algorithm have been analyzed and an optimized threshold value has been determined based on the slice type. The complexities of the segmentation algorithm and the challenges have also been reported.

K. Vikram, Hema P. Menon, Dhanya M. Dhanalakshmy
Air Pollution Modelling from Meteorological Parameters Using Artificial Neural Network

The aim of this study is to develop a neural network air quality prediction model for PM10 (particles whose diameter is less than 10 µm), NO2 and SO2. A multilayer neural network model with a hidden recurrent layer is used to predict pollutant concentrations at four monitoring sites in Belagavi city of Karnataka State, India. The Levenberg-Marquardt algorithm is used to train the network. Combinations of input variables were investigated, taking into account the predictability of the meteorological input variables and the study of model performance. The meteorological variables air temperature, wind speed, wind direction, rainfall and relative humidity were considered as input variables for this study. The results show very good agreement between measured and predicted pollutant concentrations. The performance of the developed model was assessed through a performance index. The models developed have good prediction performance (>85%) for all the pollutants. The proposed models predicted pollutant concentrations with relatively good accuracy, and the outputs were proven satisfactory by measures of goodness of fit and the mean absolute percentage error.

Sateesh N. Hosamane, G. P. Desai
Grayscale Image Encryption Based on Symmetric-Key Latin Square Image Cipher (LSIC)

Increased computing power and improving technologies demand stronger encryption algorithms. Here we introduce a new encryption method, the Latin Square Image Cipher (LSIC), for grayscale images. This includes Latin square whitening, an S-box, a P-box and also LSB noise embedding for probabilistic encryption. Using all the above primitives, the LSIC is constructed as a Substitution-Permutation Network (SPN) consisting of eight stages of whitening, substitution and permutation using different Latin squares of order 256 at each stage. The proposed method has good resistance against brute-force attacks, ciphertext attacks and plaintext attacks.

V. Pawan Kumar, A. R. Aswatha, Smitha Sasi
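
One plausible reading of the whitening stage, a keyed Latin square of order 256 added to pixel values mod 256, can be sketched as follows; the construction L[r][c] = (r + c) mod 256 with key-shuffled rows is an assumption, not necessarily the paper's exact design.

```python
# Keyed Latin square whitening over an 8-bit image, with exact inversion.
import numpy as np

def latin_square(key: int, order: int = 256) -> np.ndarray:
    rng = np.random.default_rng(key)
    rows = rng.permutation(order)                       # key-dependent row order
    base = (np.arange(order)[:, None] + np.arange(order)) % order
    return base[rows]                                   # still a Latin square

def whiten(img: np.ndarray, key: int) -> np.ndarray:
    L = latin_square(key)
    r = np.arange(img.shape[0])[:, None] % 256
    c = np.arange(img.shape[1])[None, :] % 256
    return (img.astype(int) + L[r, c]) % 256

img = np.random.default_rng(8).integers(0, 256, (64, 64))
enc = whiten(img, key=1234)
L = latin_square(1234)
dec = (enc - L[np.arange(64)[:, None] % 256, np.arange(64)[None, :] % 256]) % 256
assert (dec == img).all()                               # whitening is invertible
```
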
Enhanced Defogging System on Foggy Digital Color Images

Images captured using a camera can be degraded by the effect of climatic conditions such as haze and fog. Image restoration makes a notable difference to the performance of various applications in computer vision and pattern recognition. The main aim of this paper is to improve foggy and hazy images relative to the existing methods. The enhanced defogging system (EDS) consists of different image improvement techniques together with a Dark Channel Prior (DCP) algorithm to estimate the amount of fog in the images as well as the transmission. Fusion based fog removal then reduces the amount of haze remaining in those images. Experiments were done on more than 100 images and the results are discussed below.

Sarath Krishnan, B. A. Sabarish, V. Gayathri, S. Padmavathi
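
The dark channel computation at the heart of DCP is short enough to show; atmospheric-light estimation is omitted, and the transmission line below is a crude simplification of the full method.

```python
# Dark channel: per-pixel minimum over RGB, then a minimum filter over a patch.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """img: HxWx3 float in [0, 1]."""
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)

hazy = np.random.default_rng(9).random((100, 100, 3))
dc = dark_channel(hazy)
# Crude transmission estimate in the spirit of DCP (airlight set to dc.max()):
t = 1.0 - 0.95 * dc / max(dc.max(), 1e-6)
print(dc.shape, t.min(), t.max())
```
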
Identification of the Risk Factors of Type II Diabetic Data Based Support Vector Machine Classifiers upon Varied Kernel Functions

With innovation in data driven decision models, the technological impact on medical data processing has been increasing rapidly. Type II diabetes is one of the most rapidly increasing non-communicable diseases across India. Advancements in the field of medicine and its therapeutic procedures have to be considered in determining the risk factors related to disease prediction and its major causes. This research work focuses on the applicability of Support Vector Machine (SVM) classifiers for predicting the risk related to type II diabetes. The model involves the deployment of various kernel functions in building the SVM models. The experimental results show that data classification using SVM with varied kernel functions achieves an accuracy of 86.65%, a precision of 76.21% and a recall of 81.11%. Among the kernels experimented with, the polynomial kernel performed better than the other kernels, with increased correlation results. The kernel variations with model enhancements can be deployed for risk factor prediction in type II diabetes.

A. Sheik Abdullah, N. Gayathri, S. Selvakumar, S. Rakesh Kumar
Role of Data Analytics in Blended Learning Models of Educational Technology

The new buzzword “blended” is making its impact on society; although blended models were not introduced in recent times, their existence dates from age-old days. With several models in use, their roles for learners and teachers vary according to accessibility. This paper discusses the models and the need for a study of blended models with role based differences, their benefits (from teachers’ and students’ perspectives) and challenges. It also emphasises the need to introduce data analytics into these models, and their impact on learners.

Ananthi Sheshasaayee, S. Malathi
Query by Example—Retrieval of Images Using Object Segmentation and Distance Measure

Image segmentation is the process of extracting objects and regions of interest, which has applications in computer vision, object detection, classification and retrieval. Many researchers have developed algorithms for segmenting objects from digital images. This paper is an attempt to retrieve images based on existing segmentation algorithms and distance measures. A diverse dataset consisting of images of various categories has been used for experimentation, and this work suggests the best segmentation algorithm for retrieving the best match for given query images using distance measures.

S. Sathya, Latha Parameswaran, R. Karthika
Adaptive Weighted Median Filter for Motion Estimation

Smoothing techniques can be applied constructively to motion vectors to detect defective vectors and put forward alternatives. The substitute motion vectors can be used in place of those recommended by the block matching algorithm. If frames are going to be interpolated by the receiver, then motion vector correction is expected to be valuable. An adaptive weighted median filter whose weights alter according to local characteristics can be designed and used for this smoothing purpose.

Prashant Bhalge, Salim Amdani
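
A weighted median, where each neighbour's value counts according to a locally adapted weight, can be computed via a cumulative-weight search; the 3x3 neighbourhood and centre-heavy weights below are illustrative.

```python
# Weighted median: the value at which the cumulative weight passes half the total.
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, cdf[-1] / 2.0)]

neigh = [3, 3, 4, 4, 4, 5, 90, 4, 3]         # one outlier vector component (90)
w_center = [1, 1, 1, 1, 3, 1, 1, 1, 1]        # centre weighted more heavily
print(weighted_median(neigh, w_center))       # -> 4, outlier suppressed
```
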
Detection of Menisci Tears in Sports Injured and Pathological Knee Joint Using Image Processing Techniques

In many physical and sports activities, the knee joint encounters extreme levels of stress leading to traumatic injury. The menisci are located between the tibial plateau and the femoral condyles. These menisci act to distribute body weight evenly across the knee joint; any damage to the menisci may lead to uneven weight distribution and cause the development of abnormally excessive forces, leading to early damage of the knee joint. The menisci act as ‘shock absorbers’ between the femur and tibia, and prevent lateral movement of the joint when the knee is fully extended. They also provide a lubricating effect on the knee joint and some degree of stability, and are essential for the normal biomechanics of the knee joint. The menisci are often injured, particularly during athletics. Magnetic resonance imaging (MRI) is the widely used imaging modality for detection of menisci injuries, yet there is a chance of missing the visibility of menisci tears in MRI during diagnosis. In this work, an image processing method based on seeded region growing is developed to detect and visualize menisci tears from MRI. The segmentation accuracy is evaluated using the dice similarity coefficient (DSC). The developed method segments the menisci from knee joint MRI with good accuracy and visualizes the tears in different compartments of the menisci. The developed method is helpful in treatment and surgery planning for patients with injured knee joints.

Mallikarjunaswamy M. S., Rajesh Raman, Mallikarjun S. Holi, Sujana Theja J. S.
An Optimized Adaptive Random Partition Software Testing by Using Bacterial Foraging Algorithm

Software testing is a procedure of investigating a software product to find errors. Optimized Adaptive Random Partition Testing (OARPT) is a combined approach which comprises Adaptive Testing (AT) and Random Partition Testing (RPT), applied in an alternating manner depending on the test case. ARPT consists of two strategies: ARPT 1 and ARPT 2. The parameters of ARPT 1 and ARPT 2 need to be assessed for different software with different numbers of test cases and programs. The process of assessing the parameters of ARPT incurs high computational overhead, and it is therefore necessary to optimize the parameters of ARPT. In this paper, the parameters of ARPT 1 and ARPT 2 are optimized using the Bacterial Foraging Algorithm (BFA), which improves the performance of the ARPT software testing strategies. Experiments are conducted on different software, and the proposed method improves defect detection efficiency, achieves high code coverage, and reduces time consumption and memory utilization.

K. Devika Rani Dhivya, V. S. Meenakshi
Global Skew Detection and Correction Using Morphological and Statistical Methods

In this paper we propose a technique for skew detection and correction for printed documents, and use an existing Optical Character Recognition (OCR) engine to recognize the characters. The proposed algorithm has the following steps: (a) applying morphological dilations by defining various structuring elements (SE); (b) extracting the longest connected components (CC); (c) finding the global skew angle by statistical analysis of the connected components; (d) reference text line estimation and regression line fitting to rotate each individual line by the estimated angle of rotation. We have conducted experiments using printed images in different languages, i.e. English, Devanagari and Arabic (custom dataset), and have achieved significant performance.

Sharfuddin Waseem Mohammed, Narasimha Reddy Soora
Image Pyramid for Automatic Segmentation of Fabric Defects

Automatic fabric defect detection is required by the textile industries to improve their quality. For the extraction of defective fabric areas, segmentation is needed to distinguish the defective region from the background. This paper investigates a method to construct an image pyramid by the Gaussian method, wherein the images are decomposed into multiple levels. Noise is removed and features are extracted for fifteen different defects. The various levels were analyzed and the best level required for proper segmentation was identified for each defect. Region based watershed segmentation and edge based Sobel edge segmentation were tried on multiple levels. The base level and the best level of all decomposed images were compared for all the fabric defects investigated.

Ankita Sarkar, S. Padmavathi
Multi Focus Region-Based Image Fusion Using Differential Evolution Algorithm Variants

This work focuses on an optimal process of image fusion for multiple focus images using an optimization algorithm, viz. the Differential Evolution (DE) algorithm. The input images are divided into regions and the sharper regions are selected from the two images. The selected clear blocks are used for constructing the final resultant image. The main purpose of using the differential evolution algorithm is to find the optimum block size, which is more useful when dividing the image than a fixed block size. This work also compares different variants of differential evolution based image fusion to find out which one is suitable for getting a more focused image. The major focus of the research is finding out which type of differential evolution algorithm is best suited for almost all types of images. Block based and pixel based methods are used together to achieve a better resultant image. The performance of the fused image is calculated using image quality measures to find the better fusion method, which can be used in almost all situations.

K. C. Haritha, S. Thangavelu
A Hybrid Classification Model Using Artificial Bee Colony with Particle Swarm Optimization and Minimum Relative Entropy as Post Classifier for Epilepsy Classification

One of the most striking features of the human brain is its amazing spatio-temporal dynamics. Due to excessive and irregular electrical activity in the human brain, seizure activity happens. Epileptic seizures give rise to unusual and strange sensations, thereby affecting the behaviour and quality of life of the person. The electrical activities of the human brain can be easily measured with the help of Electroencephalogram (EEG) signals. In this paper, a hybrid model of the Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) algorithms is implemented to get a first level optimization of the epilepsy risk level from the EEG signals. Further, Minimum Relative Entropy (MRE) is used as a second level post classifier to optimize it further for the perfect classification of epilepsy risk levels from EEG signals. The results show an average post classification accuracy of 97.90% with the MRE classifier, along with an average time delay of 2.06 s.

Harikumar Rajaguru, Sunil Kumar Prabhakar
Fuzzy Optimization with Modified Adaboost Classifier for Epilepsy Classification from EEG Signals

Epilepsy is recognized as a chronic neurological condition with the occurrence of recurrent seizures that alter the normal electrical activities in the neurons of the brain. Epilepsy can occur to any person irrespective of time and age, as the exact cause of epilepsy is very difficult to know. For the diagnosis of epilepsy, Electroencephalograph (EEG) signals are widely used. EEG signals are non-stationary and non-linear sequences of data which can be easily traced and detected when electrodes are placed on the scalp of the patient. In this work, fuzzy optimization is used as a first level classifier to classify the epilepsy risk levels, and then for second level classification the Adaboost Classifier and the Modified Adaboost Classifier are used for classification of epilepsy risk levels from EEG signals. The Modified Adaboost Classifier is obtained by applying Linear Discriminant Analysis (LDA) to the Adaboost Classifier, thereby enhancing the performance of the classifier. Results show that an average accuracy of 96.68% and an average quality value of 21.77 are obtained when the Modified Adaboost Classifier is used, and an average accuracy of 97.20% and an average quality value of 22.51 are obtained when the Adaboost Classifier is used for the classification of epilepsy risk levels from EEG signals.

Harikumar Rajaguru, Sunil Kumar Prabhakar
Application of Morphological Filtering with Modifications in Linear Discriminant Analysis Classifier for Epilepsy Classification from EEG Signals

One of the most prominent neurological disorders posing a huge peril to the human community is epilepsy. Because of certain electrical disturbances in the functioning of the brain, epilepsy occurs, and it is characterized by recurrent seizures. Because of these epileptic seizures, both the physical and mental condition of the patient deteriorates, and thereby the patient is prone to more physical attacks and injury. Only if the seizures are detected and classified properly can good health care be provided to the patients. For detection of seizure activity, Electroencephalograph (EEG) signals are used. In this paper, the morphological filtering concept is applied to the code converters obtained from processing EEG signals and is employed as a pre-classifier; later it is post-classified with Linear Discriminant Analysis (LDA), Log LDA (L-LDA) and Kernel LDA (K-LDA) classifiers. Results show that when LDA is used as a post classifier, an average classification accuracy of 97.39% along with an average quality value of 21.3 is obtained. Similarly, if L-LDA is used as a post classifier, an average classification accuracy of 96.87% along with an average quality value of 21.4 is obtained, and when K-LDA is used as a post classifier, an average classification accuracy of 96.45% along with an average quality value of 20.7 is obtained.

Harikumar Rajaguru, Sunil Kumar Prabhakar
Analysis of Dimensionality Reduction Techniques with ABC-PSO Classifier for Classification of Epilepsy from EEG Signals

Epilepsy is a commonly occurring neurological disorder characterized by recurrent seizures, and it is of different types. The seizures can occur at various times, irrespective of any symptoms. The central nervous system is severely disturbed by seizure activity. The abnormal electrical behaviour of a collection of cells in the brain leads to seizures, which can be detected through clinical symptoms. For the detailed study and diagnosis of epilepsy, Electroencephalography (EEG) signals are most commonly used. EEG recordings show the electrical activity generated by the neurons of the cerebral cortex. For this reason, they form an integral component in the evaluation of brain activity, the diagnosis of epilepsy and the perception of epileptic attacks. As EEG recordings are quite long, processing them is quite difficult. Therefore, in this paper, the dimensions of the original EEG recordings are reduced with the help of dimensionality reduction techniques such as Singular Value Decomposition (SVD), Principal Component Analysis (PCA), Independent Component Analysis (ICA), Fast ICA and Linear Discriminant Analysis (LDA). The dimensionally reduced values are then fed into the hybrid classifier called the Artificial Bee Colony-Particle Swarm Optimization (ABC-PSO) Classifier, and the epilepsy risk level classification from EEG signals is analyzed. The results show that the highest classification accuracy of 97.42%, along with the highest quality value of 22.76, is obtained when Fast ICA is used as the dimensionality reduction technique and classified with the ABC-PSO Classifier.
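
A minimal sketch of the Fast ICA reduction step, using scikit-learn; the synthetic 16-channel EEG matrix and the choice of 8 components are placeholders, not the authors' configuration.

import numpy as np
from sklearn.decomposition import FastICA

# eeg: (n_samples, n_channels) matrix of EEG values (hypothetical shape).
eeg = np.random.default_rng(0).standard_normal((1000, 16))

ica = FastICA(n_components=8, random_state=0)
reduced = ica.fit_transform(eeg)   # (1000, 8) dimensionally reduced features
print(reduced.shape)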

Harikumar Rajaguru, Sunil Kumar Prabhakar
E-health Design with Spectral Analysis, Linear Layer Neural Networks and Adaboost Classifier for Epilepsy Classification from EEG Signals

About 1–2% of the world's population suffers from a serious neurological disorder called epilepsy, which is characterized by spontaneous seizures. Many temporary disruptions occur in the ongoing electrical activities of the brain when a seizure attack is present. Antiepileptic drugs may work well for some patients, while other patients may not respond to them. To explore the electrical behaviour of the human brain, its electrical activity is measured and recorded. By analyzing Electroencephalography (EEG) signals and extracting their features, both univariate and multivariate, various algorithms for seizure prediction, detection and classification have been developed. In this paper, an e-health design for epilepsy classification with the help of spectral analysis, Linear Layer Neural Networks (LLNN) and the Adaboost Classifier is proposed. The LLNN is used as the preliminary-level classifier and, as the results obtained through it are not satisfactory, further optimization and classification are done with the help of the Adaboost Classifier. Results show that when classified with the Adaboost Classifier, an average classification accuracy of about 99.43%, an average quality value of 24.38, a low average time delay of 1.99 s and an average performance index of 99.13% are obtained.

Harikumar Rajaguru, Sunil Kumar Prabhakar
Biomedical Voice Based Parkinson Disorder Identification for Homosapiens

Parkinson's disorder is a long-term neurodegenerative condition, of both genetic and sporadic origin, that affects the central nervous system of Homo sapiens. Most existing approaches detect Parkinson's disorder using a large number of voice features such as MDVP:Fo(Hz), jitter, shimmer, and so on, which causes complexity O(m×n) because of the inclusion of unneeded extra features in the dimension n. In order to reduce the processing time, a feature subset is identified using the Correlation based Feature Selector, Principal Component Analysis and a Genetic Algorithm. In this paper, a supervised approach named Neural Network based Genetic Algorithm is used to construct a probabilistic model for the existing data set. This model is compared with an existing approach. The experiment achieved a performance of 95.38%.

B. Anusha, P. Geetha
Cancelable Biometrics Using Geometric Transformations and Bio Hashing

Biometrics is a security approach which uses the physical traits of a person, such as facial features, fingerprints and iris patterns, for the verification and authentication of a system. These features are stored in a database or server as a template, known as a biometric template. Unlike a password, a biometric template cannot be changed once it is compromised; several papers have therefore been introduced to ensure the cancelability of biometric templates. Cancelability is a method of generating a new biometric template from the original template using various approaches and algorithms. In this paper, cancelability in biometrics is achieved by applying geometric transformation and bio hashing algorithms. First, at the time of registering a user, traits such as a fingerprint and a face are scanned using a scanner, and the biometric template thus obtained is encrypted using geometric transformation algorithms such as the Cartesian transform, polar transform and functional transforms. Bio hashing with a user-specific tokenized random number is then applied to the transformed template to obtain the encrypted template. The encrypted template is stored in the database and can be used later for verification. In the case of security attacks which may compromise the biometric template, a new template is obtained from the same user by using a different set of tokenized random numbers.
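
A minimal sketch of the bio hashing step, assuming the standard recipe of a token-seeded random projection followed by thresholding; the feature length, bit count and threshold are illustrative, not the paper's parameters.

import numpy as np

def biohash(features, token, n_bits=64, tau=0.0):
    """BioHashing sketch: token-seeded random projection + thresholding.
    `features` is the (transformed) biometric feature vector; `token`
    seeds the user-specific random basis."""
    rng = np.random.default_rng(token)
    basis = rng.standard_normal((n_bits, features.size))
    basis, _ = np.linalg.qr(basis.T)          # orthonormalize the columns
    return (features @ basis > tau).astype(np.uint8)

template = np.random.default_rng(1).standard_normal(128)
code_a = biohash(template, token=987654321)
code_b = biohash(template, token=123456789)  # re-issued after a compromise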

R. Parkavi, K. R. Chandeesh Babu, T. Neelambika, P. Shilpa
Automated Tool for Assessment of Satellite Image Quality and Characteristics

This paper presents the functionality of an automated tool developed for computing quality metrics to assess satellite image quality and characteristics. The tool facilitates quantitative analysis of different sets of satellite images and can also be used to compare them. We present quality metrics based on both single and multiple images, thus aiding relative characterization. The tool estimates the Modulation Transfer Function (MTF) and converts the gray image to a reflectance image. A GUI has been developed for operational convenience, and a log file is maintained for recording the generated parameters.

P. Muni Deepak, V. Kamalaveni, E. Venkateswarlu, Thara Nair, G. P. Swamy, B. Gopala Krishna
Reconstruction of Recaptured Images Using Dual Dictionaries of Edge Profiles

Recapture detection for high-quality LCD screens is really challenging, as an image recaptured from an LCD screen looks like the original and is very difficult for the human eye to distinguish. An image recapture detection algorithm is used to classify single-captured and recaptured images. This paper proposes a novel approach for reconstructing recaptured images to improve their quality using dual dictionaries, of which one is for single-captured and the other for recaptured images. Here, the K-SVD algorithm is used to train both dictionaries, and the orthogonal matching pursuit algorithm is used to generate a sparse approximation from the dictionary of recaptured images and the Line Spread Profile matrix. With the help of the sparse approximation of the recaptured set and the dictionary of the captured set, the captured image can be reconstructed from the recaptured image. Experimental results indicate that the proposed algorithm yields better quality reconstruction of the captured image from the recaptured set in terms of PSNR.
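
A minimal sketch of the sparse-approximation step via orthogonal matching pursuit, using scikit-learn; the random dictionary and the 3-atom signal are toy stand-ins for the trained K-SVD dictionary and an image patch.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# D: dictionary of atoms (columns), y: patch from a recaptured image.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
y = D[:, [3, 40, 100]] @ np.array([1.0, -0.5, 2.0])

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3)
omp.fit(D, y)
sparse_code = omp.coef_                   # sparse approximation over D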

J. Hridhya, A. Shyna
Brain Tumor Segmentation Based on Clustering Using Pixel Intensity Variance of Pattern Recognition

In medical image processing, tumor segmentation of Magnetic Resonance Imaging (MRI) scans is a very effective aid to radiology and clinical diagnosis. This paper proposes a novel image clustering technique to segment brain tumors in MRI and to delineate the tumor region through a filtering process. Image filtering and image histogram analysis are performed using Circulation based Non-Linear Median Filtering (CNMF). Pixel intensities of neighbouring structures are then labelled, and image quality and structure are recognized with a Neighborhood Interior Edge detection (NIED) segmentation technique. Finally, variations among neighbouring pixels are used by the proposed Pixel Intensity Variance of Pattern Recognition (PIVPR) technique to categorize images as normal or abnormal. The proposed techniques predict the abnormal tumor spot with a high level of accuracy.

M. Muthalakshmi, R. Dhanasekaran
Outlier Detection in Time-Series Data: Specific to Nearly Uniform Signals from the Sensors

In this paper, the complexity of detecting an outlier is discussed, and the importance of outliers is presented along with the need to interpret them. Sensors collect data at a certain sampling period, and these data are stored, contributing to huge databases. Such sensors can be electrocardiogram sensors which monitor the electrical and muscular functions of the heart, pollution monitoring stations at airports, heat sensors in a building, flight data recorders (FDR), and so on. Sometimes these sensors fail to detect the signal due to a technical fault, and hence the output is "Not Available (NA)". These NA time stamps create unnecessary problems which lead to unwanted outputs when the data is processed. In this paper, an algorithm is presented which replaces these NA values with the most probable values. Once the data is ready, with all NA values removed, it is processed for outlier detection. An outlier is detected by analyzing the signal in the frequency domain along with the mean in the time domain. A large data set is divided into equal-sized blocks; each block is then converted to its frequency domain, and the mean is calculated in the time domain. These two parameters are used to detect any outlier in the block. This approach removes complexity from the algorithm without compromising the efficiency of outlier detection. Hence, a large database of values is processed in relatively little time with appreciable accuracy.
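
A minimal sketch of the pipeline described above, assuming linear interpolation as the "most probable value" replacement and a simple z-score test on each block's time-domain mean and dominant FFT magnitude; the block size and threshold are illustrative.

import numpy as np
import pandas as pd

def detect_block_outliers(series, block=256, k=3.0):
    """Replace NA with interpolated values, then flag blocks whose
    time-domain mean or dominant FFT magnitude deviates by > k sigma."""
    s = pd.Series(series).interpolate(limit_direction="both")
    n = len(s) // block
    means, peaks = [], []
    for i in range(n):
        chunk = s.iloc[i * block:(i + 1) * block].to_numpy()
        means.append(chunk.mean())
        peaks.append(np.abs(np.fft.rfft(chunk))[1:].max())  # skip the DC term
    means, peaks = np.array(means), np.array(peaks)
    z = lambda v: np.abs(v - v.mean()) / (v.std() + 1e-12)
    return np.where((z(means) > k) | (z(peaks) > k))[0]     # outlier block ids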

Sourabh Suman, B. Rajathilagam
Smart Energy Management System Based on Image Analytics and Device Level Analysis

Large educational institutes, organizations and industries face major challenges in energy utilization, consumption and management strategies. Smart energy management technology solutions help tackle these high-consumption complexities while putting the best and greenest foot forward. They improve and respond quickly to power spikes at times of demand, expedite data gathering, reporting and regulatory compliance, automate services to control operating costs and enable energy savings. Connecting smart meters to data stores requires a reliable, intelligent network. The Smart Energy Management System introduced here performs a device-level analysis that identifies the device that has caused a peak rise in the total power consumption of the organization. A predictive analysis technique is used on the database to predict the future maximum demand, and a load balancing technique is applied to reduce the consumption of power from the generator source. Therefore, the total power consumption can be prevented from exceeding the maximum demand, and the maximum demand of the power supply for the organization can be maintained. Further, with the application of AI techniques, this system control becomes fully automated.

S. Birindha, V. Ananthanarayanan, P. Bagavathi Sivakumar
Assisting Visually Challenged Person in the Library Environment

Nowadays, there are many assistive technologies to support visually impaired people. Among them, computer vision based methods provide a feasible solution. An indoor environment poses challenges such as object recognition and character and scene recognition. It is important to understand that people need information about things and places, and that the visually impaired may feel insecure in places like their workplaces or shopping malls because of their challenges in vision. It is essential that a technology-based solution be provided so that they can be guided along pathways, rooms and shopping malls and can access things in their living environment. In this paper, a model is proposed for detecting text from library bookshelf scene video and informing the user of the book's name through audio, to assist visually impaired people in accessing a book kept on a shelf in the library. Key frames are extracted using PSNR and the Edge Change Ratio method. Text on the key frame is detected and localized using MSER and projection profiles. A CNN is used to recognize characters from the localized text. This paper gives an outline of the different techniques which are combined to extract key frames and to localize and recognize text from natural library scenes.
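
A minimal sketch of the MSER-based text localization step with OpenCV; the input file name and the aspect-ratio filter are assumptions, and the key frame extraction and CNN recognition stages are omitted.

import cv2

frame = cv2.imread("shelf_keyframe.png")       # extracted key frame (assumed file)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, boxes = mser.detectRegions(gray)      # candidate text regions
for x, y, w, h in boxes:
    if 0.1 < w / float(h) < 10:                # crude aspect-ratio filter
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("text_candidates.png", frame)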

D. Kavin Kumar, Senthil Kumar Thangavel
Tensor Flow Based Analysis and Classification of Liver Disorders from Ultrasonography Images

In the field of medical imaging, ultrasonography is a popular and frequently used diagnostic tool owing to its hazard-free, non-invasive and cost-effective nature. The liver being the largest and a vital organ in the human body, liver disorders are treated as very important, and initial detection of a disorder is made using ultrasound imaging by radiologists, leading to additional biopsies for confirmation if necessary. This work focuses on the automated classification of nine types of focal and diffused liver disorders using ultrasound images. A deep convolutional neural network architecture codenamed Inception is used. The technique achieves a new state of the art for the classification and detection of liver disease. The disease is predicted based on the score obtained as a result of training. The classification is implemented using TensorFlow, which outputs the predicted labels and the corresponding scores. The method achieves reasonable accuracy using the trained model.

K. Raghesh Krishnan, M. Midhila, R. Sudhakar
A Powerful and Lightweight 3D Video Retrieval Using 3D Images Over Hadoop MapReduce

Content Based Video Retrieval (CBVR) is an approach for retrieving 3D videos from several video sources such as YouTube, Viki etc. Various modern video retrieval applications employ approaches such as key frame selection, feature extraction and similarity matching. We propose a new framework that combines key frame selection, feature extraction, denoising and similarity matching for effective retrieval of videos from 3D images. We introduce Relative Entropy based Fast Key Frame Selection (REFKFS) for nominating optimal key frames for a video. A BM3D filter with Bayesian threshold (BB) is proposed for reducing white Gaussian noise on 3D key frames. We extract texture, shape, motion and color features from the key frames; for extracting shape features, we prefer moments of objects and regions. After feature extraction, we propose a similarity matching process, built on a Multi-Featured Light-weight (MFLW) matching scheme, for successful retrieval of videos with 3D images. A Hadoop environment is employed for handling the massive database used in video retrieval. Our experiments cover a large database, and the overall proposed framework furnishes an efficacious result of 98% accuracy.

Chandra Mohan Ranjith Kumar, Sangayah Suguna
An Improvement of Iterative Algebraic Reconstruction Technique by Using Bisectors: An Illustration in Computerized Tomography

This paper proposes the utilization of bisectors in iterative algebraic reconstruction techniques to attain better image reconstruction. We use bisector hyperplanes and the average of the multiple solutions of the beam equations. We also discuss why bisector hyperplanes may improve the approximate solution for pixel intensities and how they are obtained. We then define the ART procedure with bisector hyperplanes and implement the algorithm. Through an application, we compare the solution points obtained with other ART based methods and with our ART procedure. As closer solution points may be obtained at each iteration, the images formed from the pixel intensities based on these solutions may have higher quality.
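
For reference, a minimal sketch of the classical ART (Kaczmarz) sweep that the paper builds on is shown below; the bisector-hyperplane modification itself is not reproduced here.

import numpy as np

def art(A, b, iters=50, relax=1.0):
    """Classical ART (Kaczmarz) sweep: project the current estimate onto
    each beam-equation hyperplane a_i . x = b_i in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            denom = a_i @ a_i
            if denom > 0:
                x += relax * (b_i - a_i @ x) / denom * a_i
    return x   # approximate pixel intensities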

Mohamad Soubra, Ömer Özgür Tanrıöver
Use of Predictive Analytics Towards Better Management of Parking Lot Using Image Processing

As more and more smart cities are planned in India, there is a growing need for smart parking and smart transportation. Parking has been identified as a major challenge for the traffic network and for urban quality of life. Most cities are already facing the problem of pollution, and according to industry data, 30% of traffic congestion occurs due to drivers struggling to find parking. There is also a need for secure, efficient, intelligent and reliable systems that can be used to search for unoccupied parking facilities, guide drivers towards them, and negotiate the parking fee. This would help in the proper management of the parking facility. There is no publicly available data on parking in India, and this work would be useful in creating such datasets. An image based model is proposed to identify the slot occupancy status. A prediction model has also been incorporated in the system to predict the occupancy rate and thereby help in better management of parking lots. Linear regression, a machine learning method, is used to predict the number of cars parked every hour. A slot based approach is used, and the performances of the prediction algorithms are compared.
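
A minimal sketch of the hourly-occupancy prediction step with scikit-learn; the (day, hour) feature layout and the synthetic history are assumptions standing in for counts extracted from the image-based slot detector.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: (day_of_week, hour) -> cars parked in that hour.
X = np.array([[d, h] for d in range(7) for h in range(8, 20)])
rng = np.random.default_rng(0)
y = 20 + 3 * X[:, 1] - 2 * X[:, 0] + rng.normal(0, 2, len(X))

model = LinearRegression().fit(X, y)
print(model.predict([[2, 17]]))   # expected cars on Wednesday at 17:00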

K. A. Maheshwari, P. Bagavathi Sivakumar
Enforcement of Automatic Penalty (e-Penalty) to Govern the Traffic Rule Violators in Digitized INDIA Using I.C.T.

The Government of INDIA (GOI) has a dream to digitize every organization and transaction. The Government has initiated many steps to digitize organizations and has urged citizens to use digital modes in their day-to-day life. According to the W.H.O., road accidents are no small matter nowadays; they are listed among its top health agendas. According to the statistics, road accidents are the leading cause of death in the 15–29 years age group. In this digitized country there should be a solution which can enforce an automatic penalty on traffic rule violators so that they refrain from committing road safety offences. The Government should also move to curb such road offenders by penalizing them automatically through their AADHAR-linked accounts. In this paper, a solution is proposed which can integrate easily with the existing methods of penalty in INDIA to penalize violators automatically.

Shiv Kumar Goel, Kavita, Manoj Shukla
Optimization of Rules in Neuro-Fuzzy Inference Systems

Optimization of a rule based system with a Neuro Fuzzy Inference System yields better results regarding accuracy and interpretability. The Dynamic Evolving Neuro Fuzzy Systems (DENFIS) model is used to find an optimized rule base using computational intelligence techniques for a target search application. The process of optimization starts at the beginning of the target search process with an appropriate selection of the rule based Fuzzy Inference System. Further optimization is addressed in the choice of the number of rules by reducing the number of attributes used in the input. The integrated approach of input selection and rule selection results in accurate target predictions. The ability of knowledge representation, highly interpretable if…then rules and imprecision tolerance are the major features of the proposed model.

J. Amudha, D. Radha
DNA Based Image Steganography

Providing security to all kinds of data is an essential task in the world of digital data. Data can be secured using several methods such as cryptography, steganography and watermarking. Steganography hides data behind a cover object to secure it. To increase the security of the data, dual cover objects can be used. In this paper, an image and DNA are the two covers used to secure the data. The DNA insertion algorithm is used to hide the data in the DNA sequence, resulting in a fake DNA sequence. The capacity, payload, BPN and cracking probability are calculated for the fake DNA sequence to ensure security. The fake DNA is hidden in a cover image using the LSB and F5 algorithms. The MSE and PSNR values are calculated and compared. To confirm the security of the data, a steganalysis method called the histogram attack is performed and compared. This work shows that the F5 algorithm is better than the LSB algorithm in the second layer of data hiding.
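
A minimal sketch of the second hiding layer using plain LSB substitution, assuming the usual 2-bits-per-nucleotide encoding of the fake DNA string; the F5 variant is not shown.

import numpy as np

def embed_lsb(cover, dna):
    """Hide a DNA string (A/C/G/T -> 2 bits each) in the image's LSBs."""
    code = {"A": "00", "C": "01", "G": "10", "T": "11"}
    bits = np.array([int(b) for base in dna for b in code[base]], dtype=np.uint8)
    flat = cover.flatten()                     # flatten() returns a copy
    assert bits.size <= flat.size, "cover image too small"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(cover.shape)

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, "ACGTTGCA")           # toy fake DNA sequence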

R. E. Vinodhini, P. Malathi
Performance Analysis of Combined k-mean and Fuzzy-c-mean Segmentation of MR Brain Images

Magnetic resonance imaging (MRI) plays a vital role among the advanced techniques for imaging internal organs. It is the least harmful method compared to other existing medical imaging techniques such as computed tomography scans, X-rays etc. Image segmentation is the basic step in analysing images and hence in extracting data from them. In this paper, we concentrate on brain MRI segmentation, where the performance of algorithms such as k-mean, fuzzy-c-mean (FCM) and their combination (k-FCM) is evaluated. In the proposed methodology, MR brain images of different tumor types such as meningioma, sarcoma and glioma are preprocessed, and separate segmentations are performed using the k-mean and FCM methods. Further, the k-mean segmented image is given to the FCM, and their performance is compared. The hybrid segmentation scheme gives better results for the extraction of tumor regions. The segmented image can be given to a good classifier to detect tumor types, so that physicians can provide better treatment.
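
A minimal sketch of the k-FCM idea, seeding a hand-rolled fuzzy c-means loop with k-means centres; the cluster count, fuzzifier m and iteration budget are illustrative, and pixels is the flattened intensity matrix (one column for grayscale MRI).

import numpy as np
from sklearn.cluster import KMeans

def kfcm(pixels, c=4, m=2.0, iters=30):
    """FCM on flattened MR intensities, seeded with k-means centres (k-FCM)."""
    km = KMeans(n_clusters=c, n_init=10, random_state=0).fit(pixels)
    centers = km.cluster_centers_
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)               # membership matrix
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
    return u.argmax(axis=1), centers                    # hard labels, centres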

K. V. Ahammed Muneer, K. Paul Joseph
Generation of Author Topic Models Using LDA

Copyright and ownership of research ideas is questionable as to which author the credit should be attached. Mining author contributions has to be approached more semantically to solve this issue. Representing research ideas using topic distributions substantiates the measuring of author contributions. Author Topic Models (ATM) are generally obtained by applying topic modeling approaches to an author's research articles. ATMs form the blueprint of an author. Given a research paper and the blueprints of its authors, identifying the contribution of every author in the article becomes easy. This paper proposes the generation of ATMs by applying Latent Dirichlet Allocation (LDA).
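
A minimal sketch of deriving an author topic distribution with LDA, using the gensim library; the tokenized texts are hypothetical stand-ins for an author's preprocessed articles.

from gensim import corpora, models

# Tokenized articles of one author (hypothetical, already preprocessed).
texts = [["topic", "model", "author", "contribution"],
         ["latent", "dirichlet", "allocation", "topic"],
         ["author", "blueprint", "research", "article"]]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
print(lda.show_topics())   # the author's topic distribution ("blueprint")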

G. S. Mahalakshmi, G. Muthu Selvi, S. Sendhilkumar
Comparison Analysis of Biowatermarking Using DWT, DCT and LSB Algorithms

Nowadays, data exchange through computer networks has increased to a great extent. Though the advancement of internet technology has made our lives easier and more convenient, this rapid growth in internet usage and data sharing has posed many challenges to researchers. Providing security to data sent through the network, be it normal text, images, audio or video, has been a big challenge to the network community. Watermarking, a technique applied to hide information, has seen a lot of research interest recently. Many E-Commerce and E-Governance applications need some method of protecting ownership and copyrights by which unauthorized access to digital products can be detected and controlled. This paper presents an implementation and comparison of biowatermarking with palmprints using the DWT, DCT and LSB algorithms. The experimental results show that the quality of watermarking using LSB is higher than with DCT and DWT.

N. V. Brindha, V. S. Meenakshi
Diagnosis of Carious Lesions Using Digital Processing of Dental Radiographs

Diagnosis of caries is a difficult process in the clinical setting for obvious reasons: the edge line on approximal surfaces is not very clear, and the setting of pits and fissures on the occlusal surface is complicated. Because of this, dependency on the expert advice of a doctor increases, and this advice changes with differences of opinion among doctors. This indirectly affects the sensitivity and specificity achieved on the radiographs used. The goal of this work is to study digital radiographs, in this case IOPA radiographs, for dental caries: to find out which images contain caries, whether it is visible or not, and if so, what amount of caries is present; if not, what the reason behind it is. To assess visibility, we preprocess the image using an algorithm which denoises the image and enhances image quality while maintaining its medical standards. We also use artificial intelligence to initially diagnose carious lesions by detecting those areas where caries generally occurs, namely the approximal and occlusal surfaces of the tooth. This approach to diagnosing carious lesions opens the way for the diagnosis of other diseases as well.

Harsh Vikram Singh, Raghav Agarwal
Development of Glare Recognition for Advanced Driver Assistance System

Glare poses a major threat to the implementation of Advanced Driver Assistance Systems (ADAS). The video or image captured by a vehicle-mounted camera can be affected by sun glare in the daytime or by artificial light at night, hindering visibility and causing problems for object detection by the ADAS. This paper proposes an algorithm that effectively and precisely detects glare-affected regions in video by finding and drawing contours. The simulations are performed on the Visual Studio platform, and the results show that the glare boundaries are detected accurately and that the detection speed of the proposed method is 3 times faster than the existing Circle Hough Transform method.
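
A minimal sketch of the contour-based glare detection idea with OpenCV; the brightness threshold of 240 and the minimum contour area are assumptions, not the paper's tuned values.

import cv2

frame = cv2.imread("dashcam_frame.png")           # assumed input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, bright = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)  # near-saturated pixels
contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 50:                   # ignore tiny specks
        cv2.drawContours(frame, [c], -1, (0, 0, 255), 2)
cv2.imwrite("glare_regions.png", frame)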

N. Madan, K. S. Geetha
High Resolution Remote Sensing Image Denoising Algorithm Based on Sparse Representation and Adaptive Dictionary Learning

Addressing the denoising of high resolution remote sensing images, in this paper we propose a novel method based on sparse representation and adaptive dictionary learning. The proposed algorithm uses the strong correlation between the bands of high resolution remote sensing images and combines the non-local self-similarity of the image with local sparsity to improve the denoising performance. By means of a sparse representation of the image noise, we extract texture information from the noise so as to improve the quality of image denoising. A learning based super-resolution algorithm learns a dictionary from a set of training examples, incorporates the missing high-frequency information from the low resolution image, and finally obtains the corresponding high-resolution image. Traditional denoising algorithms still leave residual noise after noise removal, and the denoising effect is not obvious when the noise is large. Experimental results show that the peak signal-to-noise ratio of the proposed method is higher than that of existing similar algorithms, and that it better preserves the details and texture information of the image and improves the visual effect.

Yuwei Qiu, Yuda Bi, Yang Li, Haoxiang Wang
4-Share VCS Based Image Watermarking for Dual RST Attacks

The startling growth of the web has made the transmission, distribution and accessing of digital media very convenient. Therefore, media producers are more frequently dealing with illegal and unauthorized usage of their productions. In our proposed approach, the user name and password are first entered, and a QR code is generated using the ZXing library; the QR code is then converted into shares using a binary visual cryptography algorithm. Share-4 is saved in the database for future reference at the receiver side. The remaining share-1, share-2 and share-3 are embedded into the LL sub-band bits of the Red, Green and Blue components using block DWT-SVD and pseudo-Zernike moments. After embedding, the RGB components are combined, and the color watermark image is transferred over the network. In the network, different attackers may apply combinations of Rotation, Scale and Translation (RST) attacks on the color watermark image. To recover from the attacks, pseudo-Zernike moments and SURF features are first applied to the R, G and B components to extract the attacked pixels and recover the scale and angle using an affine transformation. Share-1, share-2 and share-3 are then extracted, and together with share-4 from the database, an EX-OR operation is applied to obtain the QR code. The final QR code is decoded to recover the user name and password. This research work can provide a way of adding authentication to all online services.

Sheshang D. Degadwala, Sanjay Gaur
An Intelligent Framework for Road Safety and Driver Behavioral Change Detection System Using Machine Intelligence

Road accidents have been a major concern for every country all over the world. According to a report by the WHO, 3400 people die on roads every day, and millions of people are injured or disabled every year. In India alone, 1214 people (according to a report by NCBI) meet with road accidents every day, of whom 377 die; 20 of them are children below the age of 14 years. More than half of all road traffic deaths occur among young adults aged 15–44. This alarming death rate calls for immediate attention and for finding a way to reduce the high death toll on the world's roads. Keeping all this in mind, we propose to develop a safety system in the form of a "smart car" which deduces the safety of the driver based on his past driving patterns or habits. The driving patterns are tracked by recording the vehicle's GPS coordinates and general usage; passing these features into regression analysis helps in predicting a possible accident.

Idhant Haldankar, Mohit Tiwari, G. Usha, S. Aruna
Highlight Generation of Cricket Match Using Deep Learning

In this paper, we propose algorithms to generate highlights of a cricket video by detecting important events such as replays, pitch views, boundary views, the bowler, the batsman, the umpire, spectators, players' gatherings etc. The proposed method works in two parts. The first part contains five levels. At level one, key frames are identified using the hue histogram difference. Level two classifies frames as replay frames or real-time frames by detecting the absence of the scoreboard. At level three, real-time frames are classified into field view or non-field view based on the Dominant Grass Pixel Ratio. At level 4a, on-field frames are classified into pitch view and boundary view, and at level 4b, close-up and crowd frames are detected using an edge detection method. Levels 5a and 5b further divide the close-up and crowd frames into umpire, bowler, batsman, players' gathering and spectators. In the second part, concept mining is done using the Apriori algorithm with the labeled frame events as input; all the detected concepts are then combined to form a summarized video. Results at the end of the paper show the accuracy of our approach.
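
A minimal sketch of the level-one step, flagging key frames by hue histogram difference with OpenCV; the bin count and jump threshold are illustrative.

import cv2
import numpy as np

def key_frames(video_path, thresh=0.4):
    """Flag frames whose hue histogram differs sharply from the previous one."""
    cap = cv2.VideoCapture(video_path)
    keys, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [64], [0, 180])  # hue channel
        hist = cv2.normalize(hist, hist).flatten()
        if prev is None or np.abs(hist - prev).sum() > thresh:
            keys.append(idx)                    # histogram jump -> key frame
        prev, idx = hist, idx + 1
    cap.release()
    return keys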

K. Midhu, N. K. Anantha Padmanabhan
Detection of Pulmonary Nodules Using Thresholding and Fractal Analysis

Automated detection of pulmonary nodules helps radiologists in the early detection of lung cancer from computed tomography (CT) scans. The process is computationally very costly because of its complexity. The CT scan has more advantages than other imaging modalities. The preprocessed CT scan is thresholded using Otsu's method, and the lung region is segmented using K-means clustering based on geometric features. A texture based feature analysis algorithm is used to identify the major descriptors. An Artificial Neural Network (ANN) is used, and training, testing and validation take place to identify nodules and classify them into stages, i.e. Stage 1 (initial), Stage 2 (middle) and Stage 3 (critical). The results obtained with this method have been checked for accuracy.

Deepthi Ramesh, Deepa Jose, R. Keerthana, V. Krishnaveni
PSO Based Blind Deconvolution Technique of Image Restoration Using Cepstrum Domain of Motion Blur

In this paper, a blind deconvolution technique is proposed based on Particle Swarm Optimization (PSO) and the cepstrum method. The blur angle and distance are obtained from motion blurred images using the cepstrum method, and the cepstrum parameters are optimized through the PSO technique. Here, we optimize the values of theta and length from the cepstrum of the blurred image, which helps in the PSF calculations. To extend the result of our previous work (Almeida and Almeida in IEEE Trans Image Process 19(1):36–52, 2010 [1]), we use the PSO technique, which gives better results than GA. The convergence rate is also faster, and the computational time of the algorithm is reduced. Hence, the proposed method outperforms the previous method.
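
A minimal sketch of the cepstrum computation from which the blur angle and length are read off; the PSO-driven refinement of theta and length is omitted.

import numpy as np

def cepstrum2d(img):
    """2-D real cepstrum: IFFT of the log magnitude spectrum.
    Motion blur leaves periodic zeros in the spectrum whose spacing
    and orientation encode the blur length and angle."""
    spec = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.log(np.abs(spec) + 1e-8)))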

G. Ramteke Mamta, Maitreyee Dutta
Traffic Density Analysis Employing Locality Sensitive Hashing on GPS Data and Image Processing Techniques

The recent development of GPS enabled devices helps in tracking the approximate location of any device: any GPS enabled device with a working internet connection can be tracked at any point of time. The data obtained from GPS serves several purposes, such as tracking lost devices, providing directions to a destination, etc. In several public environments, difficulty arises in mounting a rescue operation during an emergency. In the case of traffic, raw data about a traffic closure will not help the authorities reach the right location; instead, information such as the near-accurate location and time of the traffic event collected from GPS helps the authorities reach the destination. In this paper, locations of vehicle crowd formation are detected by applying similarity detection with locality sensitive hashing (LSH) to the collected GPS data, and two approaches with LSH (one on numerical computation and one on image processing) are proposed. The LSH technique is used to hash the location data to find vehicles at similar locations and times. The proposed approaches are lightweight and need less computational effort, since dual hashing is employed, making them suitable for real time applications. This paper uses Apache Spark for detecting vehicle crowds by applying LSH to the GPS data, since it is very fast and handles enormous data.
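
A minimal sketch of the numerical LSH idea: quantizing (latitude, longitude, time) so that nearby vehicles at similar times fall into the same bucket; the cell size, time window and crowd threshold are assumptions.

from collections import defaultdict

def lsh_buckets(points, cell=0.001, window=300):
    """Hash (lat, lon, t) so nearby vehicles at similar times collide.
    cell ~ 0.001 deg (roughly 100 m); window in seconds (both assumed)."""
    buckets = defaultdict(list)
    for vid, lat, lon, t in points:
        key = (int(lat / cell), int(lon / cell), int(t / window))
        buckets[key].append(vid)
    # Buckets with many distinct vehicles indicate a crowd / congestion.
    return {k: v for k, v in buckets.items() if len(set(v)) >= 10}

gps = [("car42", 12.9716, 77.5946, 1500), ("bus7", 12.9714, 77.5949, 1520)]
print(lsh_buckets(gps))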

K. Sowmya, P. N. Kumar
A Novel Transfer Learning Approach upon Hindi, Arabic, and Bangla Numerals Using Convolutional Neural Networks

Increased accuracy in predictive models for handwritten character recognition will open up new frontiers for optical character recognition. Major drawbacks of predictive machine learning models are the long training time taken by some models, and the requirement that training and test data be in the same feature space and come from the same distribution. In this study, these obstacles are minimized by presenting a model for transferring knowledge from one task to another. The model is presented for the recognition of handwritten numerals in Indic languages. It utilizes convolutional neural networks with backpropagation for error reduction and dropout against data overfitting. The output performance of the proposed neural network is shown to closely match other state-of-the-art methods while using only a fraction of the time used by the state of the art.

Abdul Kawsar Tushar, Akm Ashiquzzaman, Afia Afrin, Md. Rashedul Islam
Diabetic Retinopathy Detection in Fundus Image Using Cross Sectional Profiles and ANN

The eye disease which destroys the normal vision ability of diabetic patients is known as diabetic retinopathy. Early diagnosis of this disease is necessary because it is severe in the later stages. The presence of microaneurysms (MA) is the first clear clinical symptom of this disease. MAs are red dots formed by swelling of weak parts of the capillary wall. The detection of microaneurysms in retinal fundus images is an important task for applications such as diabetic retinopathy screening and early treatment. The proposed method detects MAs by using directional cross sectional profiles of some central pixels. The cross sectional profiles of each local maximum pixel of the preprocessed images are drawn, and these profiles are then analyzed. For each profile, a peak detection step is employed, and attributes including the shape, height and size of the peak are computed. The numerical measures of these attributes are included in the feature set, which is used as input to the classifier. An Artificial Neural Network (ANN) is used as the classifier. The results are compared with the well-known Naïve Bayes classifier, and good results are obtained.

M. Smitha, A. K. Nisa, K. Archana
An Application of Image Processing Technique for Compression of ECG Signals Based on Region of Interest Strategy

In this paper, a novel Region of Interest (ROI) based 2-Dimensional (2D) compression of ECG signals using the JPEG2000 compression standard is proposed. Because of its high efficiency, JPEG2000 is the global benchmark for the compression of still images. This work illustrates that the JPEG2000 compression technique is not restricted to compressing images; it can also be applied to compress ECG signals. First, the one dimensional ECG signal is transformed into a 2D representation, or image, to exploit the correlation among the samples and among the beats. This necessitates a few steps that include QRS detection and arrangement of the QRS complexes at relative positions, period sorting, mean extension, period normalization, amplitude normalization and locating the ROI. The resultant 2D ECG data array is then compressed using JPEG2000. The ROI region is extracted by applying Otsu thresholding and the vertical projection profile to the resultant 2D ECG data. The core idea in this work is to compress the ROI region at a low compression rate and the non-ROI region at a high compression rate. The proposed method is evaluated on selected data from MIT's Beth Israel Hospital, and it surpasses some of the prevailing methods in the literature by attaining a higher Compression Ratio (CR) and a moderate Percentage Root-mean-square Difference (PRD).

T. Shreekanth, R. Shashidhar
Face Authentication Using Thermal Imaging

The objective of this paper is to develop an algorithm for face authentication using thermal imaging. Thermal imaging makes use of thermal cameras to detect the infrared radiation emitted by objects. Thermal images are especially useful for face detection and authentication because of their low sensitivity to illumination changes and their ability to detect disguises. The objective of the project is achieved through three stages. In the first stage, an existing algorithm is verified by implementing it on a benchmark dataset. Currently existing face authentication algorithms have only been verified by implementation on datasets with a plain background, i.e. images without any background objects. There is a need to investigate the efficiency of face authentication algorithms in realistic scenarios. So, in the second stage, an effort was put forth to create a new dataset of images with realistic backgrounds and to analyse the performance of various existing face authentication algorithms on the new dataset.

S. Athira, O. V. Ramana Murthy
Meme Classification Using Textual and Visual Features

Social networks have become a global phenomenon and have made an enormous impact on different fields of society within a few years. These days, a huge number of memes are available on social networks. An internet meme is a cultural item that propagates from one person to another on social media; it is a unit of information that jumps from place to place with slight modification. These memes play a vital part in expressing the emotions of users on social networks and serve as an effective promotional and marketing tool. Visual memes are important because they can show emotion or humor, or portray something that words cannot. This paper recommends a framework that could be utilized to categorize internet memes by certain visual and textual features.

E. S. Smitha, S. Sendhilkumar, G. S. Mahalaksmi
Image Based Group Happiness Intensity Analysis

The objective of this project is to compute the happiness index of an image containing one or more persons. In social gatherings such as marriages and festivals, large numbers of group photos are taken. If only a few photos, say the best ten, have to be selected to represent the whole occasion, manual selection is based on individual perception, and different individuals have different perceptions. In this project, the perceptions are ranked on a 0–5 scale of happiness emotion content, thus attempting to automate the process. Image processing techniques are used to compute the happiness index of the image. Initially, faces in the image are detected, followed by expression classification into happy, neutral and so on. If the image contains multiple persons, each person's expression is given an individual happiness index, and all the individual happiness indices are integrated to yield the image's happiness index. Perceptions of different experts are used to validate the happiness indices. Different combinations of feature extraction, classification and integration are experimented with so that the validation is as high as possible. The technique is validated on the benchmark HAPPEI database.

V. Lakshmy, O. V. Ramana Murthy
Earthquake Analysis: Visualizing Seismic Data with Python

Earthquakes are among the most destructive and life-wrecking natural calamities, caused essentially by energy released from the Earth's crust. In this paper, we propose to use previous years' earthquake data as input and visualize relationships, if any, among the various attributes of the data set. In conclusion, we aim to discover regions which are most and least prone to earthquakes, and the probable magnitude of an earthquake at various latitudes and longitudes. Studying raw data from spreadsheets is outdated and very hard, so we use data mining techniques such as clustering and regression to achieve the presented results.
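
In the spirit of the paper's Python workflow, a minimal clustering sketch is shown below; the CSV file and its column names are hypothetical.

import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical CSV with columns: latitude, longitude, depth, magnitude.
quakes = pd.read_csv("earthquakes.csv")
km = KMeans(n_clusters=5, n_init=10, random_state=0)
quakes["zone"] = km.fit_predict(quakes[["latitude", "longitude"]])
# Mean magnitude per spatial cluster: which zones are most quake-prone?
print(quakes.groupby("zone")["magnitude"].agg(["count", "mean"]))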

Saksham Tulsyan, Bharat Bahl, Shivi Kaya, G. Thippa Reddy
Instill Dental Antennas for Minimally Interfering Bio-medical Devices

Implantable medical devices are mainly in use for wireless electronic health devices and remote home care applications because of miniaturization techniques and low-power integrated circuits. The demand for bio-medical applications that employ wireless telemetry systems has increased significantly. Antennas can be implanted into human bodies to form a bio-communication system between medical devices and exterior instruments for short-range bio-telemetry applications. Here, a compact antenna to be implanted for dental applications is presented. The antenna consists of an Archimedean spiral antenna and Hilbert-based curves, which give high gain: in this design, two Hilbert-based curves and one spiral antenna are embedded. This 3D folded antenna is implanted in the teeth. It gives a uni-directional radiation pattern and eliminates back radiation; due to the unidirectional radiation, the power loss is reduced. In the Archimedean spiral antenna, FR4 material is used as the substrate with a dielectric constant of 2.2. A probe feed technique is used to excite the current, and simulation is done using HFSS software. A shorting wall technique is used to enhance the bandwidth. The simulated antenna achieves high gain (>6 dB), good return loss (<−10 dB) and broad bandwidth. This antenna can be used for bio-medical applications.

T. Gomathi, S. Maflin Shaby, B. Priyadharshini
A Survey on Multi-feature Hand Biometrics Recognition

Biometrics is one of the most rapidly emerging technologies in our day-to-day life and is going to be the future of security and its applications. Why do we need biometrics? Passwords are not user-friendly: we cannot keep every password in our heads, and sometimes we cannot remember which password we used for which application. To overcome these problems, biometrics provides security on the principle "You are the password for your application", and it provides trustworthiness. Both behavioural and physiological characteristics of biometric features establish the individuality of a person, i.e. whether the user is genuine or an imposter. Hand biometrics is one of the traditional biometric systems. Generally, hand biometrics can be captured by either a contact or a contactless approach. Hand biometrics comprises palm textures, knuckles, hand geometry and fingerprints, which can be used for recognition, and it offers high uniqueness by which the individuality of a person can be identified. In this survey paper, we present the technical workings of hand geometry and palm print recognition. The primary objective is an in-depth study of the hand biometric system and the architecture of hand biometrics; the secondary objective is a literature survey on hand geometry and palmprint. The paper also studies the pre-processing and feature extraction of various systems used for hand and palm print. Finally, the paper addresses the performance evaluation techniques of hand biometrics.

E. GokulaKrishnan, G. Malathi
Early Detection and Classification of Diabetic Retinopathy Using Empirical Transform and SVM

Diabetic retinopathy is the name given to 'disease of the retina'. The objective of this work is the timely diagnosis and classification of diabetic retinopathy using curvelet transforms and SVM. Firstly, the retinal images are enhanced using the empirical transform. Canny edge detection is applied to extract the eyeball from the retinal fundus image, and morphological operations are then applied to locate the imperfections in the images. Finally, the images are classified as normal, proliferative or non-proliferative using SVM. Both the accuracy and the sensitivity are improved when compared with a previous technique in which only k-means and a fuzzy classifier are used. The number of exudates detected in the present work is greater than in the process without enhancement. The sensitivity, specificity and accuracy of the system are calculated as 96.77%, 100% and 97.78%, respectively.

Sumandeep Kaur, Daljit Singh
A Prototype to Detect Anomalies Using Machine Learning Algorithms and Deep Neural Network

Artificial Intelligence is making a huge impact nowadays in almost all applications. It is about instructing a machine to perceive an object like a human does; making the machine excel at this perception requires training it on a large number of examples. In this way, machine learning algorithms find many applications in real time problems. In the last five years, Deep Learning techniques have transformed the fields of machine learning, data mining and big data analytics. This research investigates the presence of anomalies in given data. The model can be used as a prototype and applied in domains such as finding abnormalities in medical tests, segregating fraudulent applications in banking and insurance records, monitoring IT infrastructure, observing energy consumption and vehicle tracking. The concepts of supervised and unsupervised learning are used with the help of machine learning algorithms and deep neural networks. Even though machine learning algorithms are effective for classification problems, the data in question determines the efficiency of these algorithms. It is shown in this paper that traditional machine learning algorithms fail when the data is highly imbalanced, necessitating the use of deep neural networks. One such neural network, the deep autoencoder, is used to detect anomalies present in a large data set which is heavily biased. The results derived from this study prove to be very accurate.
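
A minimal sketch of the deep autoencoder detector, assuming Keras; the layer sizes, the 99th-percentile threshold and the synthetic data are illustrative, not the study's architecture.

import numpy as np
from tensorflow import keras

def build_autoencoder(n_features):
    """Deep autoencoder; high reconstruction error flags an anomaly."""
    inp = keras.Input(shape=(n_features,))
    h = keras.layers.Dense(16, activation="relu")(inp)
    z = keras.layers.Dense(4, activation="relu")(h)      # bottleneck
    h = keras.layers.Dense(16, activation="relu")(z)
    out = keras.layers.Dense(n_features, activation="linear")(h)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

X = np.random.default_rng(0).standard_normal((1000, 32)).astype("float32")
ae = build_autoencoder(32)
ae.fit(X, X, epochs=10, batch_size=64, verbose=0)   # train on normal data only
err = np.mean((X - ae.predict(X, verbose=0)) ** 2, axis=1)
anomalies = np.where(err > np.percentile(err, 99))[0]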

Malathi A., Amudha J., Puneeth Narayana
Histogram Modification and Bi-level Moment Preservation Based Reversible Watermarking

Reversible watermarking is a technique for hiding secret information in an image and reconstructing the original image after the extraction of the watermark. In this paper, we present a reversible watermarking algorithm based on block separation, moment preservation, histogram shifting and the embedding of secret information. The peak points of the blocks are identified, and histogram shifting is performed to improve the embedding capacity of the image. Owing to the increase in embedding capacity, the distortion of the host image increases; the bi-level moment preservation technique is therefore used to improve the image quality. The performance of the proposed algorithm has been evaluated using parameters such as embedding capacity, Peak Signal to Noise Ratio (PSNR) and Color Peak Signal to Noise Ratio (CPSNR), and compared with the existing histogram modification scheme, and it is found to give better results.
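
A minimal sketch of the basic single-pair histogram-shifting embed that the algorithm builds on; it assumes a grayscale image with its peak below 255 and a (near-)zero bin to the right, and omits block separation and moment preservation.

import numpy as np

def hs_embed(img, bits):
    """Basic histogram-shifting embed (single peak/zero pair, grayscale)."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    zero = int(np.argmin(hist[peak + 1:]) + peak + 1)   # assumes zero bin exists
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1               # shift to free peak + 1
    flat = out.ravel()
    it = iter(bits)
    for i in np.where(flat == peak)[0]:                 # embed at peak pixels
        b = next(it, None)
        if b is None:
            break
        flat[i] += b                                    # bit 0 stays, 1 -> peak+1
    return flat.reshape(img.shape).astype(np.uint8), peak, zero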

R. Rajkumar, A. Vasuki
Combining Diffusion Filter Algorithms with Super—Resolution for Abnormality Detection in Medical Images

One of the most significant areas of image research is image enhancement, the main aspect of which is the improvement of the visual appearance of an image. Poor contrast and noise affect many kinds of images today, such as satellite images, remote sensing images, medical images, real-life images and electron microscope images. Therefore, noise removal and resolution increase are important and necessary to ensure and enhance the quality of images. There are many imaging modalities, and each performs different functions, ranging from providing information about human anatomy and structure to providing location statistics about specific activities and tasks. Physical constraints of the system detectors, which are tuned to signal-to-noise and timing considerations, determine the resolution of imaging systems. The hybrid techniques designed in this paper use algorithms mostly based on standard diffusion filters and super-resolution (SR) algorithms. Results demonstrate the potential of introducing SR techniques into practical medical applications.

Shilpa Joshi, R. K. Kulkarni
Design of an Algorithm for People Identification Using Facial Descriptors

The proposed work aims at identifying and greeting people. A new technique is introduced that incorporates facial features and a KNN classifier. Since the human face is said to be the most representative, the features of the face (eyes, nose and mouth) are extracted and used to train the classifier. The proposed framework consists of four phases: Facial Features Detection (FFD), Detected Features Positioning (DFP), Descriptive Features Extraction (DFE) and Face Identification (FI). The proposed algorithm can be used in a wide variety of scenarios, such as a campus or an office, after being trained with the corresponding dataset. The performance of the system is analyzed for various scenarios, and a good average accuracy of 96.05% has been achieved.

Pranav V., R. Manjusha, Latha Parameswaran
Image Tampering Detection Based on Inherent Lighting Fingerprints

Digital imaging has experienced unprecedented growth in the past few years with the increase in easily accessible handheld digital devices, furthering the applications of digital images in many areas. With the growing popularity and accessibility of low cost editing software, the integrity of images can be easily compromised. Recently, image forensics has gained popularity for such forgery detection; however, these techniques still lag behind in proving the authenticity of images against counterfeits. Therefore, the issue of the credibility of images as legal proof of some event or location becomes imperative. The proposed work showcases a state-of-the-art forgery detection methodology based on the lighting fingerprints available within digital images. Any manipulation in the image(s) leaves dissimilar fingerprints, which can be used to prove the integrity of the images after analysis. The technique performs various operations to obtain intensity and structural information. Dissimilar features in an image are obtained using the Laplacian method followed by surface normal estimation; from this information, the direction of the light source is estimated in terms of the angle ψ. The proposed technique demonstrates an efficient tool for digital image forgery detection by identifying dissimilar fingerprints based on lighting parameters. Evaluation of the proposed technique is successfully performed using the CASIA1 image dataset.

Manoj Kumar, Sangeet Srivastava
Backmatter
Metadata
Title
Computational Vision and Bio Inspired Computing
edited by
D. Jude Hemanth
Dr. S. Smys
Copyright Year
2018
Electronic ISBN
978-3-319-71767-8
Print ISBN
978-3-319-71766-1
DOI
https://doi.org/10.1007/978-3-319-71767-8
