
2019 | Book

Proceedings of the International Conference on ISMAC in Computational Vision and Bio-Engineering 2018 (ISMAC-CVB)


About this book

These are the proceedings of the International Conference on ISMAC-CVB, held in Palladam, India, in May 2018. The book focuses on research into new analysis paradigms and computational solutions for quantifying the information provided by object recognition and scene understanding in computer vision, and on algorithms such as convolutional neural networks that allow computers to recognize and detect objects in images with unprecedented accuracy and even to understand the relationships between them.

The proceedings address the convergence of ISMAC in Computational Vision and Bioengineering technology and include ideas and techniques such as 3D sensing, human visual perception, scene understanding, human motion detection and analysis, visualization and graphical data presentation, and a very wide range of sensor modalities for surveillance, wearable applications, home automation, etc.

ISMAC-CVB is a forum for leading academic scientists, researchers and research scholars to exchange and share their experiences and research results about all aspects of computational vision and bioengineering.

Table of Contents

Frontmatter
Digital Image Watermarking Using Sine Transform Technique

In this chapter, digital image watermarking using a sine transformation in the frequency domain is introduced. In the proposed technique, the secret image is first compressed using a sine transformation and then embedded into the host image. The resultant images obtained from the proposed algorithm are subjected to various security attacks, and the results are compared with other existing algorithms, showing improvement over the existing techniques.

S. N. Prajwalasimha, S. Sonashree, C. J. Ashoka, P. Ashwini
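The abstract does not specify the exact sine transform or embedding rule used, so the following is only a minimal illustrative sketch: a type-I discrete sine transform (DST-I, which is its own inverse up to a scale factor) with additive embedding of the transformed secret into the host, using a hypothetical strength parameter alpha.

```python
import math

def dst(x):
    """Type-I discrete sine transform (its own inverse up to a scale factor)."""
    n = len(x)
    return [sum(x[i] * math.sin(math.pi * (i + 1) * (k + 1) / (n + 1))
                for i in range(n)) for k in range(n)]

def idst(X):
    """Inverse DST-I: apply the forward transform and rescale by 2/(n+1)."""
    n = len(X)
    return [2.0 / (n + 1) * v for v in dst(X)]

def embed(host, secret, alpha=0.1):
    """Additively embed the sine-transformed secret into the host signal."""
    S = dst(secret)
    return [h + alpha * s for h, s in zip(host, S)]

def extract(watermarked, host, alpha=0.1):
    """Recover the secret by inverting the embedding, then the transform."""
    S = [(w - h) / alpha for w, h in zip(watermarked, host)]
    return idst(S)
```

A 2-D image version would apply the same transform along rows and columns; alpha trades imperceptibility against robustness to attacks.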
Logarithmic Transform based Digital Watermarking Scheme

In this chapter, a new frequency-domain digital watermarking technique is introduced. The proposed technique adopts a logarithmic transformation to embed the watermark image into the host image. The algorithm is subjected to various security attacks and compared with other commonly used digital watermarking schemes. The results obtained from the proposed algorithm are more satisfactory than those of the existing systems.

S. N. Prajwalasimha, A. N. Sowmyashree, B. Suraksha, H. P. Shashikumar
A Survey of Medical Imaging, Storage and Transfer Techniques

Medical data keeps growing with the increasing number of scans performed every year. Patient experience plays a vital role in the development of healthcare technologies. The speed with which data can be accessed when a patient needs a diagnosis, whether at the same hospital or a different one, is an important requirement for future healthcare research. With the growing number of modality techniques and the size of captured images, it is important to explore the latest technologies available to overcome bottlenecks. As computed tomography (CT) and magnetic resonance imaging (MRI) modalities increase the number of slices and the size of the image captured per second, diagnosis becomes more accurate from the radiology perspective, but the need to optimize storage and transfer of images without losing vital information becomes evident. Security also plays an important role. There are various problems and risks in handling medical images because they are key to diagnosing diseases that may be life-threatening for the patient. There is evidence of radiologists waiting a considerable time to access data for diagnosis. Hence, time and quality play very important roles in the healthcare industry, and this is a major area of research to be explored. The scope of this survey is to discuss open issues and techniques to overcome the existing problems in medical imaging, storage, and transfer. The survey reviews several optimization techniques for medical imaging and transfer applications. Finally, the limitations and the future scope for improving medical imaging and transfer performance are discussed.

R. R. Meenatchi Aparna, P. Shanmugavadivu
Majority Voting Algorithm for Diagnosing of Imbalanced Malaria Disease

Vector-borne diseases such as malaria are among the most pressing issues in the medical domain. Accurate identification and classification of a patient from a given set of samples becomes challenging when dealing with imbalanced datasets. Many conventional machine learning and data mining algorithms perform poorly when classifying skewed data distributions because they are trained well on the majority class samples only. We propose an ensemble method, majority voting, over a set of machine learning algorithms: the C4.5 decision tree, Naive Bayes, and K-Nearest Neighbor (KNN) classifiers. Samples are classified by the majority vote of the classifiers. Experimental results show that the voting ensemble achieves a classification accuracy of 95.2% on imbalanced malaria disease data, and 92.1% on balanced malaria disease data. The voting ensemble also achieves 100% precision, recall, and F1-score on the imbalanced malaria disease datasets, and 96% on the balanced data.

T. Sajana, M. R. Narasingarao
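The combining step described above (each base classifier votes, the most common label wins) can be sketched independently of the specific base learners; the label lists below stand in for the outputs of hypothetical C4.5, Naive Bayes, and KNN models.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists by majority vote per sample.

    predictions: list of label lists, one list per base classifier,
    all of equal length. Ties are broken by first-seen label.
    """
    combined = []
    for labels in zip(*predictions):  # labels for one sample across classifiers
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined
```

For example, if three classifiers predict [1, 0, 1], [1, 1, 0], and [0, 1, 1] on three samples, the ensemble output is [1, 1, 1].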
Automatic Detection of Malaria Parasites Using Unsupervised Techniques

The focus of this paper is on comparing the computational paradigms of two unsupervised data reduction techniques, namely autoencoders and self-organizing maps. The domain of inquiry is automatic malaria identification from blood smear images, which has great relevance in healthcare informatics and in providing good treatment for patients. Extensive experiments are performed on microscopic thick blood smear image datasets. Our results reveal that the deep-learning-based autoencoder technique outperforms the self-organizing map, achieving an accuracy of 87.5%. The autoencoder technique is also computationally efficient, which may further facilitate its use for malaria identification in clinical routine.

Itishree Mohanty, P. A. Pattanaik, Tripti Swarnkar
Study on Different Region-Based Object Detection Models Applied to Live Video Stream and Images Using Deep Learning

There are plenty of very interesting problems in the field of computer vision, from the basic image classification problem to the 3D pose estimation problem. Among them is object detection: the ability of a computer to accurately identify the multiple objects present in a scene (image or video), with bounding boxes around them, appropriate labels indicating their class, and a confidence score indicating the degree of closeness to the class. In this work, we discuss in detail different types of region-based object detection models applied to both live video streams and images.

Jyothi Shetty, Pawan S. Jogi
Digital Video Copy Detection Using Steganography Frame Based Fusion Techniques

An effective and exact technique for video copy detection in a large dataset is the use of video signatures. We choose the color layout descriptor, a compact and powerful frame-based descriptor, to create signatures, which are further refined by vector quantization (VQ). We propose a new nonmetric distance measure to find the similarity between the query and a dataset video signature, and experimentally demonstrate its better performance over other distance measures for exact copy detection. Efficient search cannot be performed on high-dimensional data using a nonmetric distance measure with available indexing structures. Consequently, we develop a novel search algorithm based on precomputed distances and new dataset pruning techniques, yielding reduced retrieval times. We experiment with large video datasets. For individual queries with an average duration of 60 s (around half the average dataset video duration), the copied videos are retrieved in 0.032 s, on an Intel Xeon CPU at 2.33 GHz, with a high accuracy of 98.5%.

P. Karthika, P. Vidhyasaraswathi
Color Image Encryption: A New Public Key Cryptosystem Based on Polynomial Equation

Image security is a need of today's era. The security task is complicated because there is a variety of data to secure, and different levels of security are needed depending on the data type. Encryption is the most expedient approach to guarantee the safety of images over public networks. We propose a new public key cryptosystem based on a polynomial equation, chosen for its resistance to cryptanalysis attacks. Parameters such as the correlation coefficient, entropy, and histogram analysis are used to evaluate the efficiency of the proposed method. The theoretical and simulation results provide a high degree of protection. Thus, the method is suitable for practical use in secure communication.

P. K. Kavitha, P. Vidhya Saraswathi
Thermal Imaging of Abdomen in Evaluation of Obesity: A Comparison with Body Composition Analyzer––A Preliminary Study

Changes in body composition parameters lead to obesity progression, which has an impact on body metabolism. The aims and objectives of the study were: (i) to estimate body composition parameters such as fat-free mass, skeletal muscle mass, lean body mass, and subcutaneous fat mass using a body composition analyzer; and (ii) to measure the mean surface temperature of various regions of the body such as the abdomen, chest, forearm front and back, and shank front and back using thermal imaging. Adult volunteers aged 20–29 years participated in this preliminary study. Body composition parameters such as fat mass, fat-free mass, muscle mass, and skeletal muscle mass were measured using a body composition analyzer. Thermal images of the forearm, abdomen, and shank regions were obtained, and the skin surface temperature was measured in both normal and obese subjects. Fat-free mass (FFM) correlated negatively with the mean surface temperature at the abdomen region and was found to be highly significant (p < 0.05). In this study, BCA parameters correlated significantly with the skin surface temperature measured at the forearm and abdomen regions in obese subjects.

S. Sangamithirai, U. Snekhalatha, R. Sanjeena, Lipika Sai Usha Alla
A Review on Protein Structure Classification

A massive amount of sequence data is continually produced by genome projects and has to be annotated in terms of structure and molecular and biological function. In structural genomics, the aim is to resolve many protein structures efficiently and to exploit the solved protein structures for assigning biological function to theoretically solved protein structures. In the early stages, protein structures were classified manually with success, but this approach now suffers from an updating problem because of the high throughput of newly solved protein structures. To overcome this issue, several data mining techniques have been examined for the structural classification of the protein world. This review article presents an overview of the existing classification techniques, databases, tools, and performance metrics used for evaluating protein structure classification algorithms.

N. Sajithra, D. Ramyachitra, P. Manikandan
Emotion Analysis Through EEG and Peripheral Physiological Signals Using KNN Classifier

Emotions are characteristics of human beings triggered by the mood, temperament, or motivation of an individual. Emotions are responses to stimuli experienced by the brain. Any change in one's emotional state results in changes in the electrical signals generated by the brain. Emotions can be explicit or implicit, i.e. an emotion may be expressed or remain unexpressed by the individual. As these emotions are experienced by the individual as the result of brain stimuli, we can observe the electroencephalogram (EEG) signal to classify them. Some physiological signals may also be taken into account, as any change in emotional state results in physiological changes. For the analysis, we have used the standard DEAP dataset for emotion analysis. In the dataset, 32 test subjects were shown 40 different one-minute music videos while their EEG and other physiological signals were recorded. On the basis of the Self-Assessment Manikins (SAM), we classify the emotional state in the valence-arousal plane. The K-Nearest Neighbour classifier is used to classify the multi-class emotions as higher/lower levels of the valence-arousal plane. Comparison of KNN with other classifiers shows that KNN produced the best average accuracy, of 87.1%.

Shourya Shukla, Rahul Kumar Chaurasiya
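The KNN classification step described above can be sketched generically: given labeled feature vectors (the abstract's EEG/physiological features would be extracted beforehand), a query is assigned the majority label among its k nearest neighbors. The feature values and labels below are illustrative placeholders, not DEAP data.

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Classify a feature vector by majority label among its k nearest
    training samples under Euclidean distance."""
    dists = sorted(
        (math.dist(query, x), y) for x, y in zip(train_x, train_y))
    top = [y for _, y in dists[:k]]          # labels of the k closest samples
    return Counter(top).most_common(1)[0][0]
```

In the paper's setting the labels would be quadrants or high/low levels of the valence-arousal plane derived from SAM ratings.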
Enhancement of Optical Coherence Tomography Images: An Iterative Approach Using Various Filters

Speckle is an important noise in optical coherence tomography (OCT) images that plays a vital role in degrading their visual quality and makes it difficult to assess the images. To improve quality, filtering is essential to remove the noise and reproduce the OCT images. This work applies various filters, namely Mean, Median, Adaptive Median, Gaussian, and Wiener, to enhance the OCT image. The paper evaluates these filtering techniques on the basis of performance indices calculated from the experimental results. The results suggest which filter is suitable for OCT images depending on the Cross-Correlation, Mean Square Error (MSE), and Peak Signal-to-Noise Ratio (PSNR), which are widely used performance indices in medical image applications and analysis. Initially, each filter is tested with a standard medical image and the performance measures are calculated. Next, a noise-corrupted OCT image is applied as input to the filter for denoising, and finally a comparative analysis is performed.

M. Saya Nandini Devi, S. Santhi
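The MSE and PSNR indices used above have standard definitions that can be stated directly; this sketch treats images as flat lists of pixel intensities (a real implementation would operate on 2-D arrays, but the formulas are identical).

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized images (flat pixel lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = mse(a, b)
    return float('inf') if e == 0 else 10.0 * math.log10(peak ** 2 / e)
```

A filter that scores lower MSE and higher PSNR against the clean reference is judged the better denoiser under these indices.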
Hybrid Method for Copy-Move Forgery Detection in Digital Images

Digital image authenticity is significant in many social areas, and image forgery detection has become a challenging task. Copy-move forgery is one of the most frequently used tampering techniques, in which part of an image is copied and pasted onto another part of the same image. This paper proposes a new method for copy-move forgery detection that integrates block-based and keypoint-based detection. The host image is first divided into blocks, and keypoints are extracted from each image block. Blocks are compared based on the keypoints they contain; if the number of similar keypoints identified in a pair of blocks exceeds a preset threshold, that block pair is matched. Matched blocks are considered forged regions, and the output is displayed after neighbour-pixel merging and morphology operations. The accuracy of the method is calculated and analysed on different images.

I. J. Sreelakshmy, Binsu C. Kovoor
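The block-matching idea above can be illustrated in a much-simplified form. This sketch flags pairs of identical pixel blocks in a small 2-D grid via hashing; it stands in for the paper's keypoint-similarity comparison, which is more robust to noise and post-processing than exact matching.

```python
def find_duplicate_blocks(img, block=2):
    """Flag pairs of identical (block x block) regions in a 2-D pixel grid.

    Returns a list of ((r1, c1), (r2, c2)) top-left coordinate pairs.
    Exact matching is a toy stand-in for keypoint-based similarity.
    """
    h, w = len(img), len(img[0])
    seen, matches = {}, []
    for r in range(h - block + 1):
        for c in range(w - block + 1):
            key = tuple(tuple(img[r + i][c + j] for j in range(block))
                        for i in range(block))
            if key in seen:
                matches.append((seen[key], (r, c)))
            else:
                seen[key] = (r, c)
    return matches
```

Note that overlapping blocks inside large uniform regions would match trivially; the paper's keypoint criterion and the threshold on similar keypoints exist precisely to suppress such false positives.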
Gait Recognition Using Normal Distance Map and Sparse Multilinear Laplacian Discriminant Analysis

In visual surveillance applications, gait is the preferred candidate for recognizing the identity of the subject under consideration. Gait is a behavioral biometric with a large amount of redundancy, complex pattern distribution, and very large variability when multiple covariates exist. This demands robust representation and computationally efficient statistical processing approaches for improved performance. In this paper, a robust representation approach called the Normal Distance Map and a multilinear statistical discriminant analysis called Sparse Multilinear Discriminant Analysis are applied to improve robustness against covariate variation and increase recognition accuracy. The Normal Distance Map captures the geometry and shape of silhouettes to make the representation robust, and Sparse Multilinear Discriminant Analysis obtains projection matrices to preserve discrimination.

Risil Chhatrala, Shailaja Patil, Dattatray V. Jadhav
Comparative Study and Analysis of Pulse Rate Measurement by Vowel Speech and EVM

The paper presents two noncontact pulse rate measurement techniques, from vowel speech signals and from Eulerian Video Magnification. The proposed methods use signals that are neither audible nor visible to the naked eye. The signals are recorded, and their characteristic plots and spectrum analysis by Short-Time Fourier Transform reveal peaks from which the pulse rate can be calculated. The methods are then compared with conventional methods, where the accuracy differs by only 3.9% for vowel speech and by 0.4% for Eulerian Video Magnification. The Bland–Altman plot shows that both techniques are acceptable, as they lie within ±1.96 standard deviations. The data collected from the methods are processed in MATLAB and also implemented on an FPGA using serial communication over RS232.

Ria Paul, Rahul Shandilya, R. K. Sharma
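The core spectral step (find the dominant peak in the signal's spectrum and convert its frequency to beats per minute) can be sketched with a plain DFT magnitude scan. This is a simplification of the paper's STFT analysis, using a synthetic sinusoid in place of the recorded speech or magnified video signal.

```python
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the strongest non-DC DFT component."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

def pulse_rate_bpm(signal, fs):
    """Pulse rate in beats per minute from the dominant spectral peak."""
    return 60.0 * dominant_frequency(signal, fs)
```

A 1.25 Hz component, for instance, maps to 75 bpm; an STFT would repeat this per window to track the rate over time.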
Real-Time Input Text Recognition System for the Aid of Visually Impaired

It is estimated that 285 million people globally are visually impaired. A majority of these people live in developing countries and are among the elderly population. Reading is essential in everyone's daily life. Visually impaired persons can read only special scripts designed for them, such as Braille, and only trained people can read and understand them. Since not every product provides its information in Braille on the cover, the present work proposes an assistive text reading framework to help visually impaired persons read text from various products/objects in their daily lives. The first step captures the image of the required object by extracting frames from the real-time video input of a camera. This is followed by preprocessing steps, which include conversion to grey scale and filtering. The text regions are extracted using MSER followed by Canny edge detection, and are then recognized using Optical Character Recognition (OCR) software; the Tesseract OCR engine is used here. Text of various fonts and sizes is recognized character by character and combined into words. Finally, audio output is produced using a text-to-speech module. The results obtained are comparable with other existing methods, with better time efficiency. The real-time input is passed through the algorithm, which applies filters and removes noise; the image is then passed through MSER, Canny edge detection, and OCR to produce the final audio output.

B. K. RajithKumar, H. S. Mohana, Divya A. Jamakhandi, K. V. Akshatha, Disha B. Hegde, Amisha Singh
Compromising Cloud Security and Privacy by DoS, DDoS, and Botnet and Their Countermeasures

Today, every organization, whether academic, private, or governmental, uses the cloud as a platform to store and communicate information over the internet. Large amounts of data are exchanged using various application software and services, and services integrate and reuse information over the WWW. To deal with all this information effectively, people are moving to the cloud computing platform because it reduces hardware and software costs, charges only for what is used, makes infrastructure easy to set up, and is available 24 × 7. Despite the many advantages of sharing data on a cloud platform, the obvious question that arises is whether those data are secure and whether their privacy is maintained by the service provider. Data privacy and security are essential in today's world, and providing them over cloud computing is often challenging for many organizations. In this paper, we present some security challenges for the cloud platform as a service. We describe two security attacks, using DDoS and botnets, and also show some countermeasures.

Martin K. Parmar, Mrugendrasinh L. Rahevar
A New Automated Medicine Prescription System for Plant Diseases

In the current situation, agriculture faces a wide range of problems in feeding the increasing global population. Plant diseases also affect the production and quality of crops. In particular, identifying plant disease severity is an important problem in the agricultural field, since it can avoid the excess use of pesticides and minimize yield loss. In existing systems, no methodology exists to identify disease severity and to prescribe the required quantity of medicine to be sprayed. To solve this problem, an automated medicine prescription system is proposed in this paper, which takes images from an uncontrolled environment and enhances and preprocesses them for disease identification. In the proposed framework, the k-means and SVM algorithms are used for the clustering and disease identification tasks, respectively. The experimental setup and snapshots of results demonstrate the performance of the proposed system in indicating the severity of the identified disease.

S. Sachin, K. Sudarshana, R. Roopalakshmi, Suraksha, C. N. Nayana, D. S. Deeksha
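The clustering step named above (k-means, typically used in such pipelines to separate diseased from healthy pixel regions before SVM classification) can be sketched with a minimal Lloyd's algorithm. The 2-D points below are illustrative placeholders; the paper would cluster colour-feature vectors of leaf pixels, and the initialization here (first k points) is a simplification.

```python
import math

def kmeans(points, k, iters=20):
    """Lloyd's k-means with the first k points as initial centroids."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:  # recompute centroid as the cluster mean
                centroids[j] = [sum(v) / len(members) for v in zip(*members)]
    return centroids, clusters
```

The cluster containing disease-coloured pixels would then be handed to the SVM, and its relative size could serve as a severity estimate.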
A Comparative Assessment of Segmentations on Skin Lesion Through Various Entropy and Six Sigma Thresholds

We present four entropy-based methods for colour segmentation within a lesion in a dermoscopy image, for classification of the image as melanoma or benign. The four entropy segmentation methods are based on the Tsallis, Havrda and Charvat, Renyi, and Kapur entropy measures. Segmentation using a Six Sigma threshold as a preprocessor is also evaluated by this assessment approach. The proposed methods are inspired by two clinical observations about melanoma: first, colours within a lesion provide the most useful measures for melanoma detection; second, the disorder in colour variety and arrangement provides the best assessment of melanoma colours. These observations lead to the hypothesis that colour disorder is best measured by entropy. The five colour-splitting models are studied with SSIM measures taken from each region in the colour-split image for segmentation assessment. The resulting scores help to assess the segmented regions effectively.

Srinivasan Sankaran, Jason R. Hagerty, Muthukumaran Malarvel, Gopalakrishnan Sethumadhavan, William V. Stoecker
Impact of Speckle Filtering on the Decomposition and Classification of Fully Polarimetric RADARSAT-2 Data

Decomposition and classification are vital processing stages in polarimetric synthetic aperture radar (PolSAR) information processing. Speckle noise affects SAR data since backscattered signals from various targets are coherently integrated. The current study investigated the impact of speckle suppression on the target decomposition and classification of RADARSAT-2 fully polarimetric data. Speckle filters should suppress the speckle noise while retaining spatial and polarimetric information. The performance of the improved Lee–Sigma, intensity-driven adaptive neighborhood (IDAN), refined Lee, and boxcar filters was assessed using a spaceborne dataset, namely fully polarimetric RADARSAT-2 C-band SAR data for the Mumbai region, India. The different speckle noise suppression techniques were applied to the RADARSAT-2 dataset, followed by the Yamaguchi three-component and VanZyl decompositions. The findings revealed that the improved Lee–Sigma filter demonstrated better volume scattering in forest areas and double bounce in urban areas than the other techniques considered in the analysis. Additionally, the effectiveness of the speckle filtering algorithms was evaluated by applying Wishart supervised classification to the filtered and unfiltered data; the IDAN, boxcar, refined Lee, and improved Lee–Sigma filters were assessed to determine the improvement in classification accuracy. Considerable improvement was observed in the classification accuracy for the mangrove and forest classes, while minimal enhancement was detected for the settlement, bare soil, and water classes.

Sivasubramanyam Medasani, G. Umamaheswara Reddy
Estimation of Precipitation from the Doppler Weather Radar Images

Estimating rainfall from radar observations plays an important role in hydrological research, and radar rainfall plays a fundamental role in weather modeling and forecasting applications. Doppler Weather Radar (DWR) is used to estimate rainfall within 120 km of the radar station. Rainfall intensity data obtained from the Surface Rainfall Intensity (SRI) product of the DWR are validated against rain gauges located at Automatic Weather Stations (AWS). Image processing methods such as edge detection and color identification are used to extract rainfall from the SRI product. Time series rainfall over a particular location is compared with AWS data using statistical parameters such as the correlation coefficient and the squared Pearson coefficient. The experimental results show that the proposed method yields high accuracy. A graphical user interface is developed to extract point rainfall and time series rainfall over different locations within the range of the radar.

P. Anil Kumar, B. Anuradha
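The validation statistics named above have standard definitions; the Pearson correlation coefficient can be computed directly, and the "squared Pearson coefficient" is simply its square. The sample values below are illustrative, not radar or gauge data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired observations,
    e.g. radar-derived rainfall vs. rain-gauge readings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near +1 indicates that the SRI-derived rainfall tracks the gauge readings closely; pearson_r(x, y) ** 2 gives the squared Pearson coefficient.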
Text-Independent Handwriting Classification Using Line and Texture-Based Features

This paper addresses the problem of making a machine recognize the writer by means of the handwriting. It delineates the preprocessing methods used to enhance handwritings, so as to ease the process of feature extraction. It discusses six statistical texture-based features that characterize a handwriting. Once these features are extracted, a nearest neighbor approach is used to classify a sample handwriting into one of those in the database. The methods are verified on a self-compiled database, and a performance evaluation is also performed. This method can be used to identify an unknown handwriting and is in specific demand in the forensic domain. Unlike other biometric identification methods, handwriting-based identification is the least intrusive.

T. Shreekanth, M. B. Punith Kumar, Akshay Krishnan
A Unified Preprocessing Technique for Enhancement of Degraded Document Images

The field of document image processing has seen dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in various applications. One such application of document image processing is Optical Character Recognition (OCR). Preprocessing is one of the prerequisite stages in document image processing, transforming the document to a form suitable for subsequent stages. In this paper, various preprocessing techniques are proposed for the enhancement of degraded document images. The algorithms implemented are adept at handling a variety of noises, including the foxing effect, illumination problems, the show-through effect, stain marks, and pen and other scratch marks. The techniques devised work on noise degradation models generated from the attributes of noisy pixels commonly found in degraded or ancient document images. These noise models are employed to detect the noisy regions of the image that will undergo the enhancement process. The enhancement procedures employed include local normalization, convolution using central measures such as the mean and standard deviation, and Sauvola's adaptive binarization technique. The outcomes of the preprocessing procedure are very promising and adaptable to various degraded document scenarios.

N. Shobha Rani, A. Sajan Jain, H. R. Kiran
An Efficient Classifier for P300 in Brain–Computer Interface Based on Scalar Products

In this paper, a simple but efficient method for detection of the P300 waveform in a Brain–Computer Interface (BCI) is presented. The proposed method is based on computing scalar products between the waveforms to be classified and a P300 pattern. Depending on the degree of concentration of the subject and the number of trials, recognition rates between 85 and 100% have been obtained.

Monica Fira, Liviu Goras
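The scalar-product decision rule described above reduces to a dot product between the recorded epoch and a stored P300 template, compared against a threshold. The template, epoch values, and threshold below are illustrative assumptions; in practice the template would be learned from averaged P300 trials.

```python
def classify_p300(epoch, pattern, threshold=0.0):
    """Label an EEG epoch as containing P300 if its scalar product with a
    stored P300 template exceeds a threshold.

    Returns (is_p300, score). All numeric values here are placeholders.
    """
    score = sum(e * p for e, p in zip(epoch, pattern))
    return score > threshold, score
```

An epoch aligned with the template yields a large positive score; an inverted or unrelated waveform yields a low or negative one.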
Detection of Weed Using Visual Attention Model and SVM Classifier

Agriculture is one of the sources of human sustenance on this planet and plays a prominent role in the economy. Flourishing crops are a core constituent of agriculture. Weeds are unwanted plants growing alongside the crop, and their removal is a challenging job for farmers, as it is a periodic, time-consuming, and cost-intensive process. Weeds can be removed by hand labor, by spraying pesticides and herbicides, or by machines, each with its own disadvantages. A software solution can overcome these drawbacks to an extent; the main concern is identifying the weeds among the crops in the field. The proposed system detects weeds in the agricultural field using computer vision methods and works with a dataset of crops and weeds. The plants are identified as salient regions by a visual attention model, and the identified plants are classified as crops or weeds using a support vector machine classifier.

Manda Aparna, D. Radha
Design and Development of Scalable IoT Framework for Healthcare Application

With an increasingly fast-paced life and an alarmingly high rate of chronic ailments in the general population, there is a need to speed up the current process of healthcare monitoring, especially in emergency situations. Recent developments in communication technology, especially in the field of the Internet of Things (IoT), have enhanced the accessibility of such systems. This can be achieved by transmitting/uploading the data of various health parameters, acquired by different physiological sensors with wireless sensor networks, onto the cloud platform. This data can later be accessed by the concerned medical authorities when required for diagnosis. In this work, we focus on the development of a scalable IoT framework for monitoring the physiological parameters of a patient. A customized MATLAB-based GUI is designed to perform real-time analysis of sensor data, which is used for continuous monitoring of vital parameters of the body. We have developed a wearable band that can be worn on the wrist by the patient. The band consists of temperature, pulse, and ECG sensors that transmit the vital parameters of the patient, integrated with the ultra-low-power battery-operated Texas Instruments MSP430 microcontroller with a CC110L sub-1 GHz RF wireless transceiver. The concept is successfully demonstrated by transmitting three physiological parameters wirelessly over a 100 m distance as well as over the cloud platform.

Siddhant Mukherjee, Kalyani Bhole, Dayaram Sonawane
Template-Based Video Search Engine

The exponential increase in video-based information has made it challenging for users to search for a specific video in a huge database. In this paper, a template-based video search engine is proposed to improve the retrieval efficiency and accuracy of search engines. To begin with, the system splits the video sequence into eight key frames, from which a fused image is created. Visual features such as color and texture are extracted from the fused image and stored as a complete feature set in a database. A query clip is then selected from the query database, and a template image is selected from the fused query image. The template query image features are compared with the stored feature database using various similarity measures. The retrieval experiments show that the template-based video search engine using wavelet-based feature extraction gives better results in terms of average precision and recall, using Euclidean distance as the similarity measure.

Sheena Gupta, R. K. Kulkarni
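The Euclidean-distance similarity step described in the abstract above can be sketched as follows (an independent illustration with made-up toy feature vectors, not the authors' implementation):

```python
import numpy as np

def rank_by_euclidean(query, database):
    """Rank stored feature vectors by Euclidean distance to the query
    (smallest distance first), as in the template-matching step."""
    distances = np.linalg.norm(database - query, axis=1)
    return np.argsort(distances)

# Toy database of three stored feature vectors and one query.
db = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
query = np.array([0.21, 0.79])
ranking = rank_by_euclidean(query, db)   # entry 1 is the best match
```

The same ranking loop works for any fixed-length descriptor (color histogram, texture vector, or their concatenation).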
Gray-Level Feature Based Approach for Correspondence Matching and Elimination of False Matches

Matching of interest points (feature points) is a basic and essential step in many image processing applications; the accuracy of the matches determines the quality of the final application. Various methods have been proposed to tackle the problem of correspondence matching. In this paper, a method that makes use of textural features, mainly gray-level features of a pixel's neighborhood, is discussed for point matching, with an emphasis on its use in image registration. Feature points are obtained from the images under consideration using SURF and are matched using gray-level features. The false matches are then removed using a graph-based approach.

R. Akshaya, Hema P. Menon
A New Approach for Image Compression Using Efficient Coding Technique and BPN for Medical Images

Medical images are digital pictures of the human body. Most medical images contain large volumes of image data that are not used for further analysis, so there is a need to compress these images to ease storage while producing a high-quality image. This paper discusses image compression for MRI images using a Back-Propagation Neural network (BPN) and an efficient coding technique. Image compression using a BPN produces an image without degrading its quality and requires less encoding time. An efficient coding technique, arithmetic coding, is used to produce an image with a better compression ratio and greatly reduced redundancy. The Back-Propagation Neural Network combined with arithmetic coding gives the best results.

M. Rajasekhar Reddy, M. Akkshya Deepika, D. Anusha, J. Iswariya, K. S. Ravichandran
Person Identification Using Iris Recognition: CVPR_IRIS Database

In recent years, the iris has become one of the most useful traits for authentication using biometric recognition. Almost all publicly available iris image databases contain data acquired under heavy imaging constraints and suitable for evaluating various algorithms (CASIA Iris Image Database [1]; Proença et al. in IEEE Trans Pattern Anal Mach Intell 32(8):1529–1535, 2010 [2]). This paper is the first to prepare our own iris image database and make it available to new researchers. We have thus proposed our own database, named the CVPR_IRIS DATABASE, acquired without heavy imaging constraints. The setup uses an IR-sensitive CCD camera, with an IR LED source at a distance, to capture eye images under visible lighting conditions. Clear images in this database are separated from noisy images by a quality assessment technique. The proposed CVPR_IRIS DATABASE, containing a total of 485 eye images of 49 individuals, is made available to researchers implementing algorithms in iris recognition research. Second, using the proposed database, we have demonstrated good accuracy using the 2D Discrete Wavelet Transform.

Usha R. Kamble, L. M. Waghmare
Fusion-Based Segmentation Technique for Improving the Diagnosis of MRI Brain Tumor in CAD Applications

Diagnosing brain tumors from Magnetic Resonance Imaging (MRI) in Computer-Aided Diagnosis (CAD) applications is one of the challenging tasks in medical image processing. Traditionally, many segmentation methods are used to address this issue. This paper introduces a segmentation method combined with image fusion. A Discrete Wavelet Transform (DWT) method is chosen for image fusion, followed by segmentation using a Support Vector Machine (SVM) for detecting the abnormal region. The types of MRI images considered here are T1-weighted (T1-w), T2-weighted (T2-w), and FLAIR images. The fusion combinations are T1-w and T2-w, T1-w and FLAIR, and T2-w and FLAIR. Experimental results suggest that, on average, the fusion-based segmentation result is superior to the non-fusion-based result.

Bharathi Deepa, Manimegalai Govindan Sumithra, Venkatesan Chandran, Varadan Gnanaprakash
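A minimal sketch of DWT-based image fusion of the kind described above, using a hand-rolled one-level Haar transform (the abstract does not specify the wavelet or fusion rule; the average-approximation/max-detail rule below is a common choice, not necessarily the authors'):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: approximation (LL) plus LH, HL, HH
    detail sub-bands. Image height and width must be even."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a + b - c - d) / 4,
            (a - b + c - d) / 4, (a - b - c + d) / 4)

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2."""
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the approximations, keep the
    larger-magnitude detail coefficient at each position."""
    b1, b2 = haar_dwt2(img1), haar_dwt2(img2)
    LL = (b1[0] + b2[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(b1[1:], b2[1:])]
    return haar_idwt2(LL, *details)

# Fusing an image with itself must reproduce it exactly.
t1 = np.arange(16, dtype=float).reshape(4, 4)
assert np.allclose(fuse(t1, t1), t1)
```

The fused image would then be passed to the SVM-based segmentation stage.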
Identification of Cyst Present in Ultrasound PCOS Using Discrete Wavelet Transform

Polycystic Ovary Syndrome (PCOS) is an endocrine abnormality that affects females during their reproductive years. It is a hormonal imbalance that disrupts the menstrual cycle and makes it harder to get pregnant. The side effects of PCOS include high blood pressure, heart disease, diabetes, obesity, etc. The hormonal imbalance creates many cysts in the ovary, hence the name polycystic ovary syndrome. It can be diagnosed using an ultrasound scan to identify the count, size, and severity of the cysts. Preprocessing is the basic, initial step in medical image processing; it removes the speckle noise present in the ultrasound image and supports medical imaging applications. Compared with all other scanning modalities, ultrasound is the least costly, but it contains more speckle due to the image acquisition process. Speckle is a granular noise that inherently exists in, and degrades the quality of, radar, ultrasound, and CT images; it also complicates image interpretation. The aim of this paper is to apply modified Daubechies discrete wavelet filters to ultrasound PCOS images to remove speckle noise for better cyst diagnosis. The paper demonstrates the effectiveness of the proposed filter in identifying cysts present in ultrasound PCOS images. The metrics SNR, PSNR, and SSIM are used to measure the effectiveness of the various discrete wavelet transforms.

R. Vinodhini, R. Suganya
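The PSNR metric used above to score denoising quality has a standard definition; for reference (generic formula with a toy example, not tied to the paper's data):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and a
    denoised/test image; higher is better, identical images give infinity."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100, dtype=np.uint8)
noisy = ref + 10                     # uniform error of 10 -> MSE = 100
print(round(psnr(ref, noisy), 2))    # 10*log10(255^2/100) ≈ 28.13 dB
```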
Design and Development of Image Retrieval in Documents Using Journal Logo Matching

This research focuses on a detailed study of document image retrieval from the World Wide Web using existing techniques, with text and image information extracted into a database. Journal papers are retrieved based on the user's keywords. The system fetches the information from the extracted folder and analyzes it with a logo matching technique using the efficient SURF (Speeded-Up Robust Features) method. Precision, recall, F-measure, time, and memory are calculated to evaluate the accuracy for the given input. Logos are clustered using image descriptor methods based on shape, edge, and color. This paper proposes a new document-dependent technique for logo matching in a journal database.

S. Balan, P. Ponmuthuramalingam
Feature Enhancement of Multispectral Images Using Vegetation, Water, and Soil Indices Image Fusion

Land cover characteristics of satellite images are analyzed in this paper. Remote sensing indices are calculated for a multispectral image. In the proposed method, the satellite image indices NDVI (Normalized Difference Vegetation Index), NDWI (Normalized Difference Water Index), and BSI (Bare Soil Index) are calculated for land cover classes such as land, vegetation, and water. All these remote sensing indices are fused to obtain composite bands and to enhance all features in the multispectral image. This technique improves the visual perception of multispectral images to the human eye. Fusion plays a vital role in the interpretation of remote sensing and medical images; in remote sensing, no single spectral band carries all the information, so multispectral bands are combined, which leads to feature enhancement. The method depends on the green (G), infrared (IR), near-infrared (NIR), and shortwave infrared (SWIR) bands and their fusion. Finally, an error matrix is generated from the reference data and the classified data. The main application is to calculate vegetation, bare soil, and water indices over three land covers and to achieve better feature enhancement. Producer's accuracy, consumer's accuracy, commission, omission, kappa coefficient, F1 score, overall accuracy, and overall kappa coefficient are calculated.

M. HemaLatha, S. Varadarajan
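The normalized-difference indices named above all follow the same template; a small sketch using the standard NDVI and NDWI formulas (the band values are made up):

```python
import numpy as np

def norm_diff(b1, b2):
    """Normalized difference of two bands -- the template behind NDVI
    ((NIR-Red)/(NIR+Red)) and NDWI ((Green-NIR)/(Green+NIR))."""
    b1, b2 = b1.astype(float), b2.astype(float)
    return (b1 - b2) / (b1 + b2 + 1e-12)   # epsilon guards against 0/0

# Two toy pixels: the first is vegetation-like (high NIR), the second is not.
nir   = np.array([0.60, 0.10])
red   = np.array([0.20, 0.10])
green = np.array([0.30, 0.40])
ndvi = norm_diff(nir, red)     # [0.5, 0.0]
ndwi = norm_diff(green, nir)   # [-1/3, 0.6]
```

Index rasters computed this way per pixel can then be stacked or fused into the composite bands described in the abstract.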
Detection of Heart Abnormalities and High-Level Cholesterol Through Iris

Iridology is an alternative medicine technique that examines the colours, characteristics, and patterns of the iris. It is used to detect underlying genetic traits, irregularities in the body, impaired circulation, toxin deposition, and other weaknesses. This paper discusses the determination of high blood cholesterol and heart conditions through several stages. In this study, we examine the heart condition and cholesterol level through preprocessing, segmentation, feature extraction, and classification of the captured iris image. A high level of cholesterol in the blood causes a sodium ring to form around the iris.

P. A. Reshma, K. V. Divya, T. B. Subair
Wavelet-Based Convolutional Recurrent Neural Network for the Automatic Detection of Absence Seizure

In this paper, a new model is proposed to automatically detect and predict absence seizures using a hybrid deep learning algorithm, the Convolutional Recurrent Neural Network (CRNN), along with the Discrete Wavelet Transform (DWT), with Electroencephalography (EEG) signals as input. The model comprises four steps: (1) single-channel segmentation; (2) decomposition of the segmented signal using the wavelet transform; (3) extraction of relevant features using statistical methods; (4) deep learning algorithms for classification, detection, and early detection. The model enhances feature extraction, and the overall performance, by feeding the segmented data into a Long Short-Term Memory (LSTM) network, a type of Recurrent Neural Network (RNN). The output of this network is combined with the extracted features to produce the classification results, and the hidden-state values are used to diagnose the seizure by locating the pattern in the extracted features of each time window. The proposed model achieves 100% accuracy on detection and 95% overall accuracy on early detection of normal, abnormal, and absence seizures.

Kamal Basha Niha, Wahab Aisha Banu
New Random Noise Denoising Method for Biomedical Image Processing Applications

Since the inception of digital image processing, noise removal in images has been a challenge for researchers and experts in the field. The most significant of these noises is the randomly varying impulse noise introduced while the image is acquired. The need for a methodical denoising method has led to extensive research and the development of various innovative methods to remove random-valued impulse noise. Here, a method which detects and filters random-valued impulse noise in medical images is employed. The proposed method uses a decision-tree-based impulse detector and an edge-preserving filter to rebuild noise-free images. This method is more efficient than existing techniques due to its lower complexity. Different grayscale Magnetic Resonance Imaging (MRI) brain images were tested using this algorithm and gave better Peak Signal-to-Noise Ratio (PSNR) than other techniques.

G. Sasibhushana Rao, G. Vimala Kumari, B. Prabhakara Rao
Importance of LEDs Placing and Uniformity: Phototherapy Treatment Used for Neonatal Hyperbilirubinemia

Phototherapy devices are widely used to treat jaundice in infants, known as neonatal jaundice. Neonatal jaundice, or neonatal hyperbilirubinemia, is the symptom of excessive bilirubin in the blood in 60% of term babies and up to 70% of preterm babies. Earlier studies found that the efficacy of phototherapy devices depends on the spectral wavelength, exposed body surface area, irradiance, and duration of exposure. It is also important to know how uniformity, peak wavelength, and optimum mounting height contribute to reducing the phototherapy duration. This paper explains the importance of uniformity and the optimum height. The uniformities of both sets of LEDs are determined and the LEDs selected for further testing. The optimum height at which the LEDs must be mounted is determined by comparing the uniformity values. The temperature is compared with the standard maximum ambient temperature up to which the infant remains healthy.

J. Lokesh, K. Shashank, Savitha G. Kini
Automated Glaucoma Detection Using Global Statistical Parameters of Retina Fundus Images

Glaucoma is an eye disorder, prevalent in the ageing population, which causes irreversible loss of vision; hence, computer-aided solutions are of interest for screening purposes. Glaucoma is indicated by structural changes in the Optic Disc (OD), loss of nerve fibres, and atrophy of the peripapillary region of the optic disc in the retina. In retina images, most changes appear as subtle variations in appearance, so automated assessment of glaucoma from colour fundus images is a challenging problem. Prevalent approaches aim at detecting the primary indicator, namely the deformation of the optic cup relative to the disc, and use the ratio of the two vertical diameters to classify images as normal or glaucomatous. Here, an attempt is made to detect glaucoma by combining image processing and neural network techniques. The risk of blindness can be reduced by 50% by screening patients vulnerable to eye diseases, especially glaucoma. Global statistical features of the dataset images are used to classify images as glaucomatous or normal. The technique screens vital signs such as intensity values in the fundus image for detecting glaucoma in patients. The results show the feasibility of glaucoma detection for vulnerable patients.

Prathiksha R. Puthren, Ayush Agrawal, Usha Padma
Enhanced Techniques to Secure Medical Image and Data Transit

This paper presents work on secured medical image transmission based on watermarking and encryption. A user-specific watermark is embedded into the LSB of the original image; embedding the watermark in the LSB does not affect the quality of the image. The watermarked image is then encrypted using a pixel-repositioning algorithm: every pixel is repositioned based on its remainder after division by 10. This remainder matrix acts as the encryption key and is also required at the time of decryption. Comprehensive experiments were carried out on the proposed approach. The results demonstrate that the embedded watermark is imperceptible and can be effectively extracted at the receiver. Moreover, the encrypted image has no visual resemblance to the original image, and the histogram of the encrypted image is changed. The encrypted image can be decrypted with no loss of information, and the watermark extracted from the decrypted image suffers no loss. The PSNR values for a set of medical images are satisfactory.

S. Uma Maheswari, C. Vasanthanayaki
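The two stages described above, LSB watermark embedding and a remainder-of-division-by-10 key, can be loosely sketched as follows (the remainder matrix is used here as a simple additive key; the authors' actual repositioning rule is their own and is not reproduced):

```python
import numpy as np

def embed_lsb(host, bits):
    """Write one watermark bit into the least significant bit of each pixel;
    the change is at most 1 gray level, so image quality is preserved."""
    return (host & 0xFE) | bits

def extract_lsb(image):
    return image & 1

def encrypt_mod10(image):
    """The remainder matrix (pixel mod 10) acts as the secret key."""
    key = image % 10
    return image - key, key

def decrypt_mod10(cipher, key):
    return cipher + key

host = np.array([[200, 37], [64, 129]], dtype=np.uint8)
bits = np.array([[1, 0], [1, 1]], dtype=np.uint8)
marked = embed_lsb(host, bits)
cipher, key = encrypt_mod10(marked)
assert np.array_equal(extract_lsb(decrypt_mod10(cipher, key)), bits)
```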
A Spectral Approach for Segmentation and Deformation Estimation in Point Cloud Using Shape Descriptors

In this paper, we propose a new framework for segmentation and deformation estimation in texture-less point clouds. Given a reference point cloud and a corresponding deformed point cloud, our approach first segments both the point clouds using OBB-LBS (Oriented Bounding Box-Laplace Beltrami Spectral) and estimates the semi-global dense spectral shape descriptors. These coarse descriptors identify the segments which need to be further investigated for localizing the area of deformation at a finer level.

Jajula Kalyani, Karthikeyan Vaiapury, Latha Parameswaran
A Study on Firefly Algorithm for Breast Cancer Classification

One of the most serious and prominent cancers affecting women worldwide is breast cancer. Various kinds of breast cancer have been reported in the medical literature, differing significantly in their capability to spread to other tissues in the human body. Plenty of risk factors have been traced through research, though the exact causes of breast cancer are not yet fully understood. Advances in medicine and technology help improve the quality of life of breast cancer patients. Many contributions, especially in the areas of artificial intelligence and data mining, have been made to aid the diagnosis and classification of breast cancer. Neural networks and other optimization techniques serve as a great boon to doctors, oncologists, and clinicians. In this paper, the firefly algorithm is utilized effectively as a powerful tool for the analysis and classification of breast cancer. Results show that an average classification accuracy of 98.52% is obtained when the firefly algorithm is used for the classification of breast cancer.

Harikumar Rajaguru, Sunil Kumar Prabhakar
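For readers unfamiliar with the firefly algorithm mentioned above, its core movement rule in standard form is shown below (the parameter values are arbitrary; the paper's tuning is not given in the abstract):

```python
import numpy as np

def firefly_step(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.0, rng=None):
    """Move firefly i toward a brighter firefly j: attractiveness
    beta0*exp(-gamma*r^2) decays with distance r; alpha adds a random walk."""
    r2 = float(np.sum((x_i - x_j) ** 2))
    beta = beta0 * np.exp(-gamma * r2)
    step = beta * (x_j - x_i)
    if alpha and rng is not None:
        step = step + alpha * (rng.random(x_i.shape) - 0.5)
    return x_i + step

# With gamma=0 the attraction is full strength and i lands exactly on j.
x_new = firefly_step(np.array([0.0, 0.0]), np.array([1.0, 0.0]), gamma=0.0)
```

In a classifier-tuning setting, each firefly encodes a candidate solution (e.g. feature weights) and "brightness" is its classification accuracy.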
Fuzzy C-Means Clustering and Gaussian Mixture Model for Epilepsy Classification from EEG

Epileptic seizures occur due to various disorders in the functioning of the brain, and they greatly affect the patient's mental, physical, and emotional health. Predicting epileptic seizures before onset is very useful for seizure prevention through medication. One of the major causes of epilepsy is molecular mutation, which results in irregular behaviour of neurons. Though the exact causes of epilepsy are not known, early diagnosis is very useful for its treatment. Various computational techniques and machine learning algorithms are utilized to classify epilepsy from Electroencephalography (EEG) signals. In this paper, the Fuzzy C-Means (FCM) clustering algorithm is first used as a clustering technique, and the features obtained through it are then classified with the help of a Gaussian Mixture Model (GMM) used as a post-classification technique. Results report that an average classification accuracy of 97.64%, along with an average performance index of 95.01%, is obtained.

Harikumar Rajaguru, Sunil Kumar Prabhakar
Analysis on Detection of Chronic Alcoholics from EEG Signal Segments—A Comparative Study Between Two Software Tools

Alcohol consumption is harmful to the brain and carries a high risk of brain damage and other neurobehavioral deficits. This paper primarily focuses on the massive data generated from EEG signals and its characterization with respect to various states of the human brain under the influence of alcohol. A single-trial 64-channel EEG database is utilized for the classification of alcoholic states for a single patient. Singular Value Decomposition (SVD) features of the EEG segments are computed. Even though the EEG signals are acquired from an alcoholic patient, some of the EEG segments resemble those of normal, alcoholic, or epileptic persons. Depending on the SVD values, the EEG segments are labeled as normal, alcoholic, or epileptic and then classified through hard thresholding and K-means clustering techniques. The classification is done using two different software tools, MATLAB and RStudio, and the results are compared. The results show that MATLAB classifies better than RStudio, with the highest classification accuracy of 83.5% obtained when the hard thresholding method is utilized.

Harikumar Rajaguru, Vigneshkumar Arunachalam, Sunil Kumar Prabhakar
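The SVD-feature and hard-thresholding steps described above can be illustrated as follows (the thresholds, labels, and the toy "segment" are invented for the example; the real values come from the paper's EEG data):

```python
import numpy as np

def svd_features(segment, k=3):
    """Use the k largest singular values of an EEG segment (arranged as a
    channels x samples matrix) as its feature vector."""
    s = np.linalg.svd(segment, compute_uv=False)   # sorted descending
    return s[:k]

def hard_threshold_label(features, low=2.0, high=10.0):
    """Toy hard-thresholding on the leading singular value."""
    lead = features[0]
    return 'normal' if lead < low else ('alcoholic' if lead < high else 'epileptic')

segment = np.diag([5.0, 1.0, 0.5])       # singular values are 5, 1, 0.5
feats = svd_features(segment)
label = hard_threshold_label(feats)       # leading value 5 -> 'alcoholic'
```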
A System for Plant Disease Classification and Severity Estimation Using Machine Learning Techniques

In India, more than 80% of agrarian crops are produced by smallholder farmers, and reports indicate that almost half the yield loss is due to pests and diseases. Unlike pests, diseases are more difficult to detect and treat, and numerous studies have been put forward to identify the behaviour of different diseases. Traditionally, farmers use naked-eye observation to detect disease, but one approach considered today is processing images with machine learning concepts to assist farmers technologically. This paper presents an image processing strategy to classify the type of disease in a cucumber plant and gives a severity measure of the disease spots in cucumber leaves captured under real field conditions.

Anakha Krishnakumar, Athi Narayanan
A Clinical Data Analytic Metric for Medical Ontology Using Semantic Similarity

An ontology is a set of concepts in a domain that shows their properties and the relations between them. Medical domain ontologies are widely used and very popular in e-healthcare, medical information systems, etc. The most significant benefit that an ontology may bring to healthcare systems is its ability to support the indispensable integration of knowledge and data (Pisanelli et al, Proceedings biological and medical data analysis, 6th international symposium, 2005, [1]). The graph structure is a very important tool for foundation, analysis, and domain knowledge; an ontology as a graphical model visualizes the processes of a system and supports appropriate analysis (Pedrinaci, Ontology-based metrics computation for business process analysis, [2]). In this study, the knowledge provided by the ontology is further explored to obtain the related concepts. An algorithm to compute the related concepts of an ontology is also proposed in a simplified manner using a Boolean matrix. The inferences from this study may serve to improve the diagnosis process in the fields of biomedical intelligence and clinical data analysis.

Suraiya Parveen, Ranjit Biswas
True Color Image Compression and Decompression Using Fusion of Three-Level Discrete Wavelet Transform—Discrete Cosine Transforms and Arithmetic Coding Technique

In this paper, we implement and analyze a true color image compression and decompression technique. The technique divides the color image into its RGB components; after applying a three-level Discrete Wavelet Transform, each component is split into nine higher-frequency sub-bands and one lower-frequency sub-band. The lower-frequency sub-band is compressed into a T-matrix using a one-dimensional Discrete Cosine Transform. Meanwhile, the higher-frequency sub-bands are compressed using a scalar quantizer, and an eliminate-zero-and-store-data algorithm is applied to remove the zeros in the sub-band matrices. Finally, arithmetic encoding is applied. The algorithm uses two levels of quantization, which significantly improves the performance of the compression algorithm. Decompression is the reverse of encoding: the high-frequency sub-bands are decoded using a return-zero-matrix algorithm, and the low-frequency and other sub-bands are recovered by applying the inverse transforms.

Trupti Baraskar, Vijay R. Mankar
Application of Neural Networks in Image Processing

In today's world, computers and their logic are critical to the development of any system. This paper is for those people who have little or no knowledge of Artificial Neural Networks (ANNs). Various advances have been made in creating intelligent systems, some inspired by biological neural networks. Researchers from numerous scientific disciplines are designing artificial neural networks to solve a variety of problems in pattern recognition, prediction, optimization, associative memory, and control. We discuss some ANN-based algorithms, such as image segmentation using edge detection and image enhancement using multiscale retinex, a technique combining sharpening and noise reduction, as well as practical applications such as diagnosing liver disease and face detection systems.

Ishan Raina, Chirag Chandnani, Mani Roja Edinburgh
Region of Interest (ROI) Based Image Encryption with Sine Map and Lorenz System

In this work, ROI-based grayscale image encryption with chaos is proposed. First, ROI areas are identified using the Sobel edge detection operator and categorized into important and unimportant regions based on the number of edges present in each block. Next, the important regions are encrypted with the Lorenz system (both confusion and diffusion processes) and the unimportant regions are encrypted using a sine map. Finally, the entire image is shuffled by the Lorenz system with new initial conditions to produce the final encrypted image. The significant advantage of this work is that important and unimportant regions are encrypted separately with different chaotic maps and systems, which increases the security of the image; the end user can also vary the important regions depending on the requirements. The experimental results show that the proposed encryption approach performs well against different cryptographic attacks.

Veeramalai Sankaradass, P. Murali, M. Tholkapiyan
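The sine map used above for the unimportant regions generates a chaotic keystream; a minimal XOR-based sketch (the paper's exact confusion/diffusion steps and parameters are not reproduced; the seed and gain here are arbitrary):

```python
import math

def sine_map_keystream(x0, r, n):
    """Iterate the chaotic sine map x_{k+1} = r*sin(pi*x_k) and quantize
    each state to one key byte."""
    stream, x = [], x0
    for _ in range(n):
        x = r * math.sin(math.pi * x)
        stream.append(int(x * 255) & 0xFF)
    return stream

def sine_xor(pixels, x0=0.42, r=0.99):
    """XOR with the keystream; running it twice restores the plaintext."""
    ks = sine_map_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

plain = [10, 200, 33, 77]
cipher = sine_xor(plain)
assert sine_xor(cipher) == plain          # XOR is its own inverse
```

A tiny change in `x0` produces a completely different keystream, which is the key-sensitivity property chaotic ciphers rely on.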
Chaos-Based Color Image Encryption with DNA Operations

In this paper, an efficient yet simple color image encryption is proposed, based on DNA sequence operations and chaotic maps, for Android-based mobile devices; our encryption algorithm is designed around their constraints. The main idea of the proposed method is to apply a different kind of DNA encoding to each of the RGB channels and then perform Arnold's cat map shuffling on the combined RGB channels. DNA addition is then performed to diffuse the values between the RGB channels themselves. Finally, DNA decoding is performed to form the color encrypted image. The promising feature of the proposed encryption algorithm is its simple operations, which comfortably suit low-processing-power computing devices like Android mobile phones. The analysis and experimental results show the efficiency of the proposed encryption method.

P. Murali, Veeramalai Sankaradass, M. Tholkapiyan
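Arnold's cat map shuffling, used above on the combined RGB channels, is an area-preserving permutation of pixel coordinates; a sketch on a single N × N channel:

```python
import numpy as np

def arnold_cat(img, iterations=1):
    """Shuffle an N x N image with Arnold's cat map:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N). The map is a bijection,
    so pixel values are only permuted, and it is periodic in N."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    out = img
    for _ in range(iterations):
        shuffled = np.empty_like(out)
        shuffled[(x + y) % n, (x + 2 * y) % n] = out
        out = shuffled
    return out

img = np.arange(16).reshape(4, 4)
assert not np.array_equal(arnold_cat(img, 1), img)   # shuffled
assert np.array_equal(arnold_cat(img, 3), img)       # period is 3 for N = 4
```

Decryption simply applies the remaining iterations of the period (or the inverse matrix), which is why the iteration count can act as part of the key.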
A Lucrative Sensor for Counting in the Limousine

In developing countries like India, governments are in dire need of funds for their projects, so they rely on a number of external factors such as exports, trading, and FDI, as well as internal factors such as tax from citizens and, indirectly, revenue from departments such as transportation. It is the responsibility of the government to provide transportation facilities like buses and trains to its citizens at a nominal charge, while at the same time making a profit to run the government and its aided projects. At the lowest level, the sad truth is that the number of people travelling without tickets in buses is increasing at an alarming rate. This leads to a big loss for the transportation department and, indirectly, for the government, which falls short of funds. When such large-scale losses are incurred due to defaulters, the government is forced to seek other sources of income from its citizens through hikes in taxes, interest rates, fuel prices, VAT, etc. The proposed system reduces manpower by eliminating the checking inspector's role and responsibilities. Since identifying the number of passengers travelling without tickets is a widespread problem, we design an automation system with an in-built, lucrative sensor that counts the number of passengers entering and leaving the limousine. Further, it can detect the number of passengers who have and have not taken tickets, which is handy for tracking down defaulters and making the bus ticketing system more efficient.

R. Ajith Krishna, A. Ashwin, S. Malathi
Energy Expenditure Calculation with Physical Activity Recognition Using Genetic Algorithms

Physical health is associated with physical activity, which also ensures human wellbeing. Physical activity is recognized using body-worn sensors; three Inertial Measurement Units (IMUs) are used to capture the data from the sensors. The activity recognition chain consists of data acquisition, preprocessing, segmentation, feature extraction, and classification, and research is being carried out at each stage. Genetic algorithms are commonly used for feature selection, but this paper proposes memetic algorithms, an enhanced version of genetic algorithms with a local search in each stage of the genetic algorithm. This technique eliminates the chances of energy loss and consequently increases the efficiency of the current system.

Y. Anand, P. P. Joby
Improved Intrinsic Image Decomposition Technique for Image Contrast Enhancement Using Back Propagation Algorithm

The technique of intrinsic image decomposition is based on the illumination values of the image. The histogram equalization of the input image is calculated to increase the image contrast. In this work, the back propagation algorithm is applied to the calculation of histogram equalization: the iterative back propagation process is executed until the error in the histogram equalization calculation is reduced. The simulation of the proposed model is performed in MATLAB, and its performance is compared in terms of PSNR and MSE.

Harneet Kour, Harpreet Kaur
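The histogram equalization referred to above is the standard CDF remapping; a plain-numpy sketch (the paper's back-propagation refinement is not reproduced here):

```python
import numpy as np

def histogram_equalize(img):
    """Classic histogram equalization of an 8-bit grayscale image: map each
    gray level through the normalized cumulative histogram. Assumes the
    image contains at least two distinct gray levels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

img = np.array([[50, 50], [100, 100]], dtype=np.uint8)
eq = histogram_equalize(img)   # stretches the two used levels to 0 and 255
```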
Low-Power High-Speed Hybrid Multiplier Architectures for Image Processing Applications

Multipliers play an imperative role in communication, signal and image processing, and embedded ASICs. Generally, multipliers are designed through various steps; they occupy more area in hardware, consume more power, and affect performance. This paper aims to implement a low-area, high-speed multiplier using various data compressors in the partial product stages, tested on image processing. To suppress the vertical dimension of the partial product stage of the multiplier, a Sklansky adder is used for the last stage, and five hybrid multiplier architectures (HyMUL1–HyMUL5) have been implemented. For application verification, two grayscale images are given as inputs to the proposed multipliers, producing a new image which is an overlap of the two input images. The comparative analysis indicates that the proposed multiplier HyMUL2 consumes less area than the other multipliers, and its speed is also improved. The new overlapped image obtained using HyMUL2 has a high PSNR and a low NMED.

U. Saravanakumar, P. Suresh, V. Karthikeyan
Intervertebral Disc Classification Using Deep Learning Technique

This paper describes a semiautomatic method for diagnosing intervertebral disc degeneration according to Pfirrmann's five-grade (1–5) system, which is used to assess the severity of disc degeneration. A total of 1123 discs were obtained, after augmentation, from 120 subjects' T2-weighted lumbar scans; manual classification into the five grades was done by experts. Our method extracts 59 features using the Local Binary Pattern for texture analysis and 4096 features using a pretrained CNN. The 1 × 59 and 1 × 4096 feature vectors are fused to form a 1 × 4155 feature vector to train our multiclass Support Vector Machine classifier. This feature-level fusion method achieves 80.40% accuracy. A quantitative analysis is done using parameters such as accuracy, sensitivity, specificity, precision, recall, and F1 score.

J. V. Shinde, Y. V. Joshi, R. R. Manthalkar
Thermal Image Segmentation of Facial Thermograms Using K-Means Algorithm in Evaluation of Orofacial Pain

This study aims at analyzing skin surface temperature to diagnose dental diseases, specifically orofacial pain, aided by a thermal camera, supporting software, and k-means segmentation with feature extraction in MATLAB. The thermal camera is employed to capture thermal images of the left, right, and front profiles of all the subjects considered. MATLAB-based image segmentation using the k-means algorithm and feature extraction were carried out for the control and test group data. The results depict that the mean differences in the maximum, minimum, and average recorded temperatures between the normal subjects and the diseased subjects were 1.09% in the front, 3.78% in the right, and 3.97% in the left facial regions. Of the regions examined using thermography and subsequent feature extraction, the right and left sides show almost the same percentage differences, 3.78% and 3.97%. These findings point toward a clear and significant rise in temperature due to the presence of infections or ailments in the affected region, and hence indicate a possible application of thermography in dental disease detection.

Nida Mir, U. Snekhalatha, Mehvish Khan, Yeshi Choden
Analysis of Web Workload on QoS to Assist Capacity

Workload characterization is a well-established discipline that finds application in the performance evaluation of modern Internet services. With the high popularity of the Internet, there is huge variation in workload intensity, which opens up new and challenging performance issues. Internet services are subject to large swings in demand, with bursts coinciding with the times the service has the most value. Apart from these flash crowds, sites are also subject to denial-of-service (DoS) attacks that can knock a service out of commission. This paper studies the effect of various workload distributions using the 'thread-per-connection' service architecture as a basis. The source model is structured as a sequence of activities with equal execution-time requirements, plus an additional page-load time (loading embedded objects, images, etc.). Threads are allocated to the requests in the queue; any leftover requests are denied service. The rejection rate is used as the criterion for evaluating the performance of a system with a given capacity. The proposed model could form a basis for integrating various system models and evaluating their performance metrics (i.e., QoS).
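The rejection-rate criterion can be illustrated with a toy capacity model, assuming (as a simplification not taken from the paper) that each interval's excess requests over the thread-pool size are dropped:

```python
# Toy thread-per-connection capacity model: requests beyond the pool
# size in an interval are denied; the rejection rate is the QoS metric.

def rejection_rate(arrivals_per_interval, threads):
    total = denied = 0
    for arrivals in arrivals_per_interval:
        total += arrivals
        denied += max(0, arrivals - threads)   # leftover requests dropped
    return denied / total if total else 0.0

# Bursty workload: a flash crowd in the third interval.
workload = [8, 10, 50, 9, 7]
print(round(rejection_rate(workload, threads=20), 3))  # 0.357
```

Sweeping the thread count against a measured arrival trace gives a simple capacity-planning curve of rejection rate versus provisioned capacity.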

K. Abirami, N. Harini, P. S. Vaidhyesh, Priyanka Kumar
Content-Based Image Retrieval Using Hybrid Feature Extraction Techniques

Images consist of visual components such as color, shape, and texture. These components are the primary basis on which images are distinguished. A content-based image retrieval system extracts these primary features of an image and checks the similarity of the extracted features with those of the image given by the user. A group of images similar to the query image is returned as the result. This paper proposes a new methodology for image retrieval using local descriptors of an image in combination with one another. HSV histogram, color moments, color auto-correlogram, Histogram of Oriented Gradients, and wavelet transform are used to form the feature descriptor. In this work, it is found that a combination of all these features produces promising results that surpass previous research. A supervised learning algorithm, SVM, is used for classification of the images. The Wang dataset is used to evaluate the proposed system.
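The retrieval step can be sketched with histogram intersection, a common similarity measure for the color-histogram part of such hybrid descriptors (the measure, the filenames, and the three-bin histograms below are illustrative assumptions, not the paper's method):

```python
# Content-based retrieval sketch: rank a small index against a query
# descriptor using histogram intersection (higher = more similar).

def hist_intersection(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))

def retrieve(query, index, top_k=2):
    scored = sorted(index.items(),
                    key=lambda kv: hist_intersection(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

index = {
    "beach.jpg":  [0.7, 0.2, 0.1],
    "forest.jpg": [0.1, 0.8, 0.1],
    "sunset.jpg": [0.6, 0.1, 0.3],
}
print(retrieve([0.65, 0.2, 0.15], index))  # ['beach.jpg', 'sunset.jpg']
```

In the paper's pipeline, the histograms would be replaced by the full fused descriptor and an SVM would assign class labels before ranking.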

B. Akshaya, S. Sruthi Sri, A. Niranjana Sathish, K. Shobika, R. Karthika, Latha Parameswaran
Review of Feature Extraction and Matching Methods for Drone Image Stitching

Image stitching is the process of combining multiple overlapping images of different views to produce a high-resolution image. The aerial perspective, or top view, of terrestrial scenes is not available in generic 2D images captured by optical cameras, so stitching such images results in a lack of top-view information. Drone images captured by a UAV (Unmanned Aerial Vehicle) provide this aerial perspective, with 50–80% overlap of information between images and full coverage of the scene. This work discusses the feature extraction and feature matching methods used for drone image stitching. We compare the performance of three feature extraction techniques, SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF), for detecting key features. The detected features are then matched using feature matching algorithms such as FLANN (Fast Library for Approximate Nearest Neighbors) and BF (Brute Force). Not all matched key points are useful for creating a panoramic image, so the RANSAC (RANdom SAmple Consensus) algorithm is applied to separate the inliers from the outliers, and the resulting interest points are used to create a high-resolution image.
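The inlier/outlier separation step can be sketched with a deterministic RANSAC-style search for a pure 2-D translation between matched keypoints (real stitching estimates a full homography; the translation model and the synthetic matches here are simplifying assumptions):

```python
# RANSAC-style inlier separation (sketch): try the translation implied
# by each match and keep the hypothesis with the most inliers.

def ransac_translation(matches, tol=2.0):
    best_model, best_inliers = None, []
    for (x1, y1), (x2, y2) in matches:
        dx, dy = x2 - x1, y2 - y1            # candidate translation
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

# Synthetic matches: true shift (10, 5) plus two gross outliers.
good = [((x, y), (x + 10, y + 5)) for x, y in [(0, 0), (3, 4), (7, 2), (5, 9)]]
bad = [((1, 1), (50, 80)), ((2, 2), (-30, 40))]
model, inliers = ransac_translation(good + bad)
print(model, len(inliers))   # (10, 5) 4
```

Classic RANSAC samples hypotheses randomly instead of exhaustively, but the inlier-counting logic is the same.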

M. Dhana Lakshmi, P. Mirunalini, R. Priyadharsini, T. T. Mirnalinee
An ANN-Based Detection of Obstructive Sleep Apnea from Simultaneous ECG and SpO2 Recordings

Obstructive sleep apnea (OSA) is one of the most common sleep disorders, characterized by a disruption of breathing during sleep. Though common, this disease goes undiagnosed in most cases because of the inconvenience, cost, and/or unavailability of polysomnography (PSG) and a sleep analyst. Many researchers are working on unsupervised, cost-effective, and convenient OSA detection methods that will aid timely diagnosis of this sleep disorder. Commonly used signals for detecting OSA are ECG, EEG, pulse oximetry (SpO2), blood oxygen saturation (SaO2), and heart rate variability (HRV). In this work, an attempt to detect OSA using simultaneously acquired ECG and SpO2 signals is presented. Various features from the RR intervals of the ECG, and two features from the SpO2 signal, namely CT90 and the delta index, were extracted as indicators of OSA. The features were then fed to a trained artificial neural network (ANN), which classified the signals as OSA positive or OSA negative. The proposed technique achieves a very high accuracy of 98.3%, superior to other competing techniques reported so far.

Meghna Punjabi, Sapna Prabhu
Design of an Image Skeletonization Based Algorithm for Overcrowd Detection in Smart Building

Crowd analysis has found significance in varied applications, from security to commercial use. The proposed algorithm extracts contours from the skeleton of the foreground image to identify and count people and to provide a crowd alert for a given scene. The proposed algorithm is also compared with conventional algorithms such as HoG with an SVM classifier, Haar cascades, and morphological operators. Experimental results show that the proposed method enables better crowd analysis than the other three algorithms on datasets with varied illumination and varied concentrations of people.

R. Manjusha, Latha Parameswaran
Performance Comparison of Pre-trained Deep Neural Networks for Automated Glaucoma Detection

This paper addresses an automated glaucoma detection system using pre-trained convolutional neural networks (CNNs). CNNs, a class of deep neural networks (DNNs), extract features of high-level abstraction from fundus images, eliminating the need for hand-crafted features, which are prone to inaccuracies in segmenting landmark regions and require excessive expert involvement in annotating those landmarks. This work investigates the applicability of pre-trained CNNs for glaucoma diagnosis, which is preferred when the dataset is small. Further, pre-trained networks have the advantage of quick model building. The proposed system has been validated on the High-Resolution Fundus (HRF) database, a publicly available benchmark. Results demonstrate that, among the pre-trained CNNs, the VGG16 network is the most suitable for glaucoma diagnosis.

Manas Sushil, G. Suguna, R. Lavanya, M. Nirmala Devi
Investigation on Land Cover Mapping of Large RS Imagery Using Fuzzy Based Maximum Likelihood Classifier

The success of many real-world applications, such as mapping, forestry, and change detection, depends on how effectively land cover classes are extracted from Remotely Sensed (RS) imagery. The application of fuzzy theory in remote sensing has been of great interest to the remote sensing fraternity, particularly when the data are inherently fuzzy. In this paper, a fuzzy-theory-based Maximum Likelihood Classifier (MLC) is discussed. The study aims at improving the classification accuracy of large, heterogeneous, multispectral remote sensor data characterized by overlapping spectral classes and mixed pixels. Landsat 8 multispectral data of the North Canara District, collected from the USGS website, is used for the research. Seven land use/land cover classes were identified over the study area. The study also aims at achieving classification results with a confidence level of 95% with a ±4% error margin. The research attains the predicted classification accuracy and proves to be a valuable technique for classifying large, heterogeneous RS multispectral imagery.

B. R. Shivakumar, S. V. Rajashekararadhya
Gaussian Membership Function and Type II Fuzzy Sets Based Approach for Edge Enhancement of Malaria Parasites in Microscopic Blood Images

This research presents a three-stage approach. In the first stage, the original image is transformed into a grayscale image, which is then normalized using min–max normalization, a linear conversion of the original image data. The second stage computes the Gaussian membership function on the normalized grayscale image and then derives lower and upper membership values using a threshold. In addition, a novel membership function is computed with the Hamacher t-conorm using the lower and upper membership values of the given images. Finally, a median filter is applied to these images to obtain edge-enhanced microscopic images. The study is conducted on microscopic blood images of malaria parasites. The experimental results are compared with the Prewitt filter, the Sobel edge filter, and a rank-ordered filter. The proposed approach is consistent and coherent across all microscopic malaria parasite images, with an average entropy of 0.90215 and 59.69% PSNR values, respectively.

Golla Madhu
Implementation of Virtual Trial Room for Shopping Websites Using Image Processing

In today's world, the use of e-commerce websites has increased greatly. Online shopping gives us more information and choice about different products, readily available under various categories. The customer just has to choose among the products, purchase them, and the products arrive at the customer's doorstep, so many people like to buy things online. Clothing is one such category. But for clothes the scenario is a little different: a major problem is that people do not know how the clothes would actually look on them, so many avoid buying clothes online. Sometimes customers even send back the clothes they buy because the clothes do not look good on them. This is why a virtual trial room should be developed, so that people do not have to wait until delivery to try on clothes physically; they can try clothes on virtually in the virtual trial room.

Niket Zagade, Akshay Bhondave, Raghao Asawa, Abhishek Raut, B. C. Julme
Response Analysis of Eulerian Video Magnification

The human eye has a very high optical resolution, making it one of the most astonishing curiosities of the world. However, its spatial resolution is not good enough to capture everything happening around it, and it can miss minor details, which can be termed hidden movements. Hidden movements can be due to the extremely high speed of the visual, very small movements, long-term physical processes, etc. Eulerian video magnification is a spatiotemporal video processing algorithm that can reveal details otherwise hidden to the naked eye. In this process, a standard video sequence is spatially decomposed, and temporal filtering of the frames is performed. The output data can be used in many fields such as biomedical instrumentation and remote surveillance. Here, an analysis of Eulerian Video Magnification (EVM) has been done for different video resolutions to understand its reliability.

S. Ramya Marie, J. Anudev
Face Authentication and IOT-Based Automobile Security and Driver Surveillance System

The automobile industry is one of the largest and fastest-growing industries, the actual reason being the growing vehicle-to-person ratio. Many new vehicles are coming onto the market, and people spend substantial amounts of money on them. This increasing ratio is driving up crimes such as vehicle theft, as well as accidents, even though many safety features are readily available. Hence this paper proposes a simple, low-cost solution based on a strong biometric mechanism involving face authentication. The system uses a night-vision camera to capture the face of the person in the driver's seat and sensors to provide surveillance in accident situations. The system also sends an instant alert by email with the latest captured image of the vehicle's interior.

Mahesh R. Pawar, Imdad Rizvi
Highly Repeatable Feature Point Detection in Images Using Laplacian Graph Centrality

Image registration is an indispensable task in many image processing applications; it geometrically aligns multiple images of a scene that differ due to time, viewpoint, or heterogeneous sensors. Feature-based registration algorithms are more robust to complex geometric and intensity distortions than area-based techniques. A set of appropriate geometrically invariant features forms the cornerstone of a feature-based registration framework. Feature point or interest point detectors extract salient structures such as points, lines, curves, regions, edges, or objects from images. A novel interest point detector is presented in this paper. The algorithm computes interest points in a grayscale image using a graph centrality measure derived from a local image network. The approach exhibits superior repeatability in images with large photometric and geometric variations. The practical utility of this highly repeatable feature detector is evident from the simulation results.

P. N. Pournami, V. K. Govindan
A Survey on Face Recognition in Video Surveillance

In today's world, enormous threats arise from terrorists, criminals, thieves, and illegal access to data by unwanted persons, leading to many challenges in our daily life. With the global increase in threats, the need to deploy reliable surveillance increases. Video surveillance is considered a major breakthrough in monitoring and security, and in video surveillance, facial recognition further enhances security and defense. With face recognition, a probe person can be recognized more accurately, efficiently, and quickly. Various methods, approaches, and algorithms are available for face recognition from surveillance video. The main objective of this paper is to discuss and analyze the various facial recognition techniques.

V. D. Ambeth Kumar, S. Ramya, H. Divakar, G. Kumutha Rajeswari
Driver’s Drowsiness Detection Using Image Processing

Driver error, including drunkenness, fatigue, and drowsiness, causes many car accidents. Hence, a system is needed that alerts the driver before he or she falls asleep, so that the number of accidents can be reduced. In the proposed system, a camera continuously captures the driver's movements. Head position, eye-closing duration, and eye-blink rate are used to determine whether the driver is drowsy, and from this information the drowsiness level is determined. An alarm is generated according to the drowsiness level. A night-vision camera is used to handle different light conditions.

Prajakta Gilbile, Pradnya Bhore, Amruta Kadam, Kshama Balbudhe
Identification of Optimum Image Capturing Technique for Corneal Segmentation—A Survey

Segmentation of corneal layers plays an important role in diagnosing corneal disease and is also helpful in planning refractive surgeries, where corneal layer thicknesses must be measured. Several imaging instruments are available for capturing images of the cornea that can be used for segmentation and thickness measurement of the corneal layers. This work describes the available imaging instruments along with the necessity of imaging the cornea. Through a comparative study, it also suggests a reliable imaging method for capturing the cornea for segmentation. This will be helpful in identifying the best image capture method for the subsequent image processing, segmentation, and measurement flow.

H. James Deva Koresh, Shanty Chacko
Hybrid SIFT Feature Extraction Approach for Indian Sign Language Recognition System Based on CNN

Indian Sign Language (ISL) is one of the most used sign languages in the Indian subcontinent. This research aims at developing a simple Indian sign language recognition system based on a convolutional neural network (CNN). The proposed system needs only a webcam and a laptop and hence can be used anywhere. CNN is used for image classification. Scale-Invariant Feature Transform (SIFT) is hybridized with adaptive thresholding and Gaussian-blur image smoothing for feature extraction. Due to the unavailability of an ISL dataset, a dataset of 5000 images, 100 images for each of 50 gestures, has been created. The system is implemented and tested using the Python-based library Keras. The proposed CNN with hybrid SIFT achieves 92.78% accuracy, whereas 91.84% accuracy was achieved for CNN with adaptive thresholding.

Abhishek Dudhal, Heramb Mathkar, Abhishek Jain, Omkar Kadam, Mahesh Shirole
A Contemporary Framework and Novel Method for Segmentation of Brain MRI

Brain magnetic resonance imaging (MRI) plays a vital role in medical image processing for brain tumor detection, therapy response evaluation, diagnosis, and treatment selection. Contrast-enhanced T1C 3D brain MRI is an extensively used brain-imaging modality. Automated segmentation and detection of brain tumors in contrast-enhanced T1C 3D brain MRI is a very complicated task due to high disparity in the shape, size, and appearance of tumors. Although many analyses have been conducted in this area, automated segmentation and detection of brain tumors in MRI remains very complicated, and enhancing segmentation accuracy is an ongoing field. The main aim of this research is to develop a fully automated framework for segmentation and detection of brain tumors in contrast-enhanced T1C 3D brain MRI. The proposed framework integrates the well-established fuzzy C-means clustering method and an improved Expectation Maximization (EM) method. An anisotropic filter is used for preprocessing, after which the fuzzy C-means clustering method, the improved EM method, and the proposed method are applied for superior segmentation and detection of the tumor. The proposed framework is evaluated on a simulated clinical dataset of 10 patients and exhibits better results than existing methods.

A. Jagan
Anatomical Segmentation of Human Brain MRI Using Morphological Masks

Segmenting the anatomical parts of the human brain from MRI is a challenging task in medical image analysis. There is no evident scale for the distribution of intensity over a region in medical images. Region growing is performed to generate a mask for segmenting the anatomical parts from MRI. A new tailored version of dilation is used to refine the segmentation mask. This custom-made dilation differs from typical dilation in its computation: the structuring element size is fixed, and the anchor-point norm is changed from that of usual dilation. The neighborhood evaluation is made only for pixels that satisfy the proposed constraints; thus, estimation is not made throughout the image. The computation of classical dilation is thereby reduced with the proposed custom-made dilation.

J. Mohamed Asharudeen, Hema P. Menon
Digital Image Restoration Using NL Means with Robust Edge Preservation Technique

We present a novel approach for image denoising with a spatial-domain edge preservation method that yields a denoised image from an image distorted by additive white Gaussian noise, without loss of the image's detail information. Denoising noise-corrupted images while preserving attributes like edges and fine detail is an extreme challenge. During image acquisition and transmission, noise often gets added to the image, degrading image quality. Algorithms like NL-means and BM3D have been very successful in this respect. The weights assigned to fellow pixels when calculating the replacement for a noisy pixel play a pivotal role in these algorithms. In NL-means, the weight assigned to a pixel depends only on its relative magnitude with respect to its neighborhood. In this paper we propose a robust mechanism for weight calculation that also considers the orientation of a pixel. Using this mechanism we obtain improvements in PSNR and in the visual quality of the denoised image compared to NL-means, and improvements in structural similarity compared to BM3D. The experimental results show that, considering factors like PSNR and SSIM, our proposed algorithm is superior to the NL-means and BM3D denoising techniques, and conclusions are drawn accordingly.
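The pivotal weight computation of standard NL-means can be sketched for a single pixel; the patches and the filtering parameter `h` below are toy assumptions, and the paper's orientation-aware refinement is omitted:

```python
import math

# Simplified NL-means: a candidate patch's weight decays exponentially
# with its squared distance from the reference patch; the denoised pixel
# is the weight-normalized average of the candidates' center pixels.

def nlm_pixel(ref_patch, candidates, h=10.0):
    weights, acc = [], 0.0
    for patch in candidates:
        d2 = sum((a - b) ** 2 for a, b in zip(ref_patch, patch)) / len(patch)
        w = math.exp(-d2 / (h * h))
        weights.append(w)
        acc += w * patch[len(patch) // 2]      # candidate's center pixel
    return acc / sum(weights)

ref = [100, 102, 101, 99, 100]
cands = [[100, 101, 100, 100, 99], [100, 103, 102, 98, 101], [10, 200, 5, 250, 7]]
out = nlm_pixel(ref, cands)
print(90 < out < 110)   # True: similar patches dominate the estimate
```

The dissimilar third patch receives a near-zero weight, which is exactly what preserves edges: pixels from the other side of an edge barely contribute.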

Bhagyashree V. Lad, Bhumika A. Neole, K. M. Bhurchandi
Speech Recognition Using Novel Diatonic Frequency Cepstral Coefficients and Hybrid Neuro Fuzzy Classifier

Speech recognition is the ability of a machine to identify spoken words and classify them into the appropriate category. The first stage in speech recognition is the extraction of appropriate features from the recorded words. We propose a novel algorithm for feature extraction using diatonic frequency cepstral coefficients. Diatonic frequencies are derived from a musical scale called the diatonic scale, which is based on the harmonics of sound and models the nonlinear behavior of the human auditory filter. After feature extraction, the classification stage uses a hybrid classifier combining an artificial neural network and fuzzy logic. If the difference between the prediction values at the output of the neural network is small, the classifier matches wrong patterns; the proposed algorithm overcomes this drawback using fuzzy logic. The proposed hybrid classifier improves the recognition rate significantly over existing classifiers. The test bed used in the experimentation focuses on Marathi, the native language spoken in the state of Maharashtra.

Himgauri Kondhalkar, Prachi Mukherji
Performance Analysis of Fuzzy Rough Assisted Classification and Segmentation of Paper ECG Using Mutual Information and Dependency Metric

This paper develops fuzzy rough set-based dimensionality reduction for discriminating electrocardiograms into six classes. ECG acquired by the offline method is in the form of coloured strips. Morphological features are estimated using the eigenvalues of the Hessian matrix to enhance the characteristic points, which appear as peaks in ECG images. Binarization of the image is carried out using a threshold that maximizes entropy, for appropriate extraction of the fiducial features from the background. Various image processing algorithms enhance the image, which is then used for feature extraction. The resulting dataset comprises a feature vector of 79 features and 1 decision class for the 6 ECG classes. An extensive dimensionality reduction analysis has been done to retain relevant and nonredundant attributes. The fuzzy rough domain is explored to take into account the extreme variability and vagueness in the ECG. The optimal feature set is fuzzified using a Gaussian membership function; fuzzy rough set concepts then help define a consistent rule set to obtain the appropriate decision class. The classification accuracy of the unfuzzified dataset is compared with that of the fuzzified dataset. The semantics of the data are well preserved using fuzzy rough sets, as seen from performance metrics like accuracy, sensitivity, and specificity. The proposed model, named the Fuzzy Rough ECG Image Classifier (FREIC), can be deployed easily for clinical as well as experimental use.

Archana Ratnaparkhi, Dattatraya Bormane, Rajesh Ghongade
A Mixed Reality Workspace Using Telepresence System

Even after years of advancement from telephone technology to video conferencing, these systems still do not provide the environment of real human interaction; this has been a challenge ever since. The system we propose here provides an environment in the user's reality where the user can interact with the person on the other side of the conversation through a wearable mixed reality headset. The user gets a feeling of coexistence with objects and people who are far away. This advancement in technology will lead to a world where people can connect with others who are only virtually present. This paper presents the technical system for teleporting virtual objects and virtual participants into the real-world environment, in which 3D audio and real-time communication help build a more realistic environment with the presence of virtual participants and objects in 3D space.

Viken Parikh, Mansi Khara
Detection of Static and Dynamic Abnormal Activities in Crowded Areas Using Hybrid Clustering

In computer vision, surveillance is used to monitor activities, behavior, and other changing information, and it is used by government organizations and private companies for security purposes. In video processing, an anomaly is generally considered a rarely occurring event. In a crowded area, it is impossible to manually monitor occasionally moving objects and each person's behavior. The main objective is to design a framework that detects occasionally moving objects and abnormal human activities in video. Histogram of Oriented Gradients feature extraction is used for detecting occasionally moving objects, and abnormality detection involves computing a motion map from the flow of motion vectors in a scene to detect changes in movement. The experimental analysis demonstrates the effectiveness of this approach, which runs efficiently in real time and achieves 96% performance; for validation, the system is tested with the standard UMN datasets and our own datasets.

M. R. Sumalatha, P. Lakshmi Harika, J. Aravind, S. Dhaarani, P. Rajavi
Scaled Conjugate Gradient Algorithm and SVM for Iris Liveness Detection

With the extensive use of biometric systems for authentication in most places, there are associated security and privacy issues. Most public places we visit are under surveillance and may keep track of faces, fingerprints, irises, etc. Thus, biometric information is no longer secret and is sensitive to spoofing attacks. Therefore, not only biometric traits but also liveness detection must be deployed in an authentication mechanism, which is a challenging task. The iris is the most accurate trait and is increasingly in demand in applications like national security, duplicate-free voter registration lists, and the Aadhaar program, so its detection must be made robust. A solution to iris liveness detection is implemented by extracting distinctive textural features from genuine (live) and fake (print) patterns using the statistical approaches GLCM and GLRLM. The popular supervised SVM algorithm and a PatternNet neural network with a second-order scaled conjugate gradient training algorithm are assessed. Both algorithms are found to be fast, with PatternNet outperforming SVM.
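The GLCM underlying the textural features can be sketched for a tiny grayscale patch; the 3-level image below is a toy assumption, and only the co-occurrence counting (horizontal offset 1) is shown, not the derived statistics:

```python
# Tiny GLCM sketch: count co-occurrences of gray levels at a horizontal
# offset of 1, the first step toward textural statistics (contrast,
# homogeneity, etc.) used to separate live from printed iris patterns.

def glcm(img, levels):
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):   # horizontally adjacent pairs
            m[a][b] += 1
    return m

img = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 1]]
print(glcm(img, 3))  # [[1, 2, 0], [0, 1, 1], [0, 0, 1]]
```

Printed irises tend to show different co-occurrence statistics than live ones (e.g. printer dot patterns), which is what the GLCM/GLRLM features exploit.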

Manjusha N. Chavan, Prashant P. Patavardhan, Ashwini S. Shinde
Long Short-Term Memory-Based Recurrent Neural Network Approach for Intrusion Detection

Intrusion detection is essential in the field of information security. The cornerstone of an Intrusion Detection System (IDS) is to accurately identify different attacks in a network. In this paper, a deep learning system to detect intrusions is proposed. The existing recurrent neural network based IDS (RNN-IDS) is expanded to include Long Short-Term Memory (LSTM), and the results are compared. The binary classification performance of the RNN-IDS is tested with various learning rates and different numbers of hidden nodes. The results show that integrating LSTM with RNN-IDS improves the accuracy of intrusion prediction on the benchmark dataset.
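The gating that lets an LSTM retain long-range context over a connection's feature sequence can be sketched with a single toy cell; the scalar weights are arbitrary assumptions, not a trained IDS model:

```python
import math

# Minimal one-unit LSTM step: input/forget/output gates plus a tanh
# candidate state. Shared toy weights keep the sketch compact.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w=0.5, u=0.3, b=0.1):
    z = w * x + u * h_prev + b
    i = sigmoid(z)            # input gate
    f = sigmoid(z)            # forget gate (toy: shares weights with i)
    o = sigmoid(z)            # output gate
    g = math.tanh(z)          # candidate state
    c = f * c_prev + i * g    # new cell state
    h = o * math.tanh(c)      # new hidden state
    return h, c

h, c = 0.0, 0.0
for x in [0.2, -0.4, 0.9, 0.1]:   # a toy per-step feature sequence
    h, c = lstm_step(x, h, c)
print(-1.0 < h < 1.0)   # True: hidden state stays bounded by tanh
```

In an RNN-IDS, each step would consume one timestep of a network-flow feature vector, and the final hidden state would feed a binary attack/normal classifier.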

Nishanth Rajkumar, Austen D’Souza, Sagaya Alex, G. Jaspher W. Kathrine
Video Frame Interpolation Using Deep Convolutional Neural Network

Video frame interpolation fuses several low-resolution (LR) frames into one high-resolution (HR) frame. Existing methods for video frame interpolation use optical flow to determine motion in a scene, but optical flow computation is difficult and can lead to artifacts in the output video. In many applications, video footage shares similar content, which suggests that a context-aware approach can interpolate better than the existing techniques. We propose such a context-aware approach: video frame interpolation using convolutional neural networks. In the proposed method, neighboring images are given as input to an end-to-end convolutional neural network, which interpolates a frame between them. A comparative analysis of the proposed RGB and HSV models, using metric standards such as SSIM, PSNR, and MSE, is also included.
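A naive baseline makes the task's input/output shape concrete: the midpoint frame as a pixel-wise average of its two neighbors. This is an illustrative assumption for comparison only; the paper's CNN learns a far better, motion-aware mapping:

```python
# Naive interpolation baseline: average two neighboring frames
# (2-D lists of pixel intensities) to synthesize the frame between them.

def average_frames(f1, f2):
    return [[(a + b) // 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(f1, f2)]

frame_a = [[0, 100], [50, 200]]
frame_b = [[100, 100], [150, 0]]
print(average_frames(frame_a, frame_b))  # [[50, 100], [100, 100]]
```

Averaging ghosts any moving object, which is precisely the artifact a learned, context-aware interpolator is meant to avoid.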

Varghese Mathai, Arun Baby, Akhila Sabu, Jeexson Jose, Bineeth Kuriakose
Sentiment Analysis of Twitter Data Using Big Data Tools and Hadoop Ecosystem

Sentiment analysis and opinion mining help analyze people's views, opinions, attitudes, emotions, and sentiments. In this twenty-first century, a huge amount of opinionated data recorded in digital form is available for analysis. The demand for sentiment analysis has grown with social media such as Twitter, Facebook, Quora, blogs, microblogs, Instagram, and other social networks. In this research work, the most popular microblogging site, Twitter, has been used for sentiment analysis. People's views, opinions, attitudes, emotions, and sentiments on the outdoor game lawn tennis have been used for the analysis, by analysing people's positive, neutral, and negative reviews posted on Twitter. Through this, it has been analysed how many people around the world really like this game and how popular it is in different countries.

Monica Malik, Sameena Naaz, Iffat Rehman Ansari
Detection of Chemically Ripened Fruits Based on Visual Features and Non-destructive Sensor Techniques

Health is a great concern for everyone nowadays, and a primary requirement for sound health is eating good-quality fruit. However, most fruits available in the market are ripened using hazardous chemicals such as calcium carbide, which is highly dangerous to human health. The existing literature gives little focus to the problem of distinguishing artificially from naturally ripened fruits, due to the complex nature of the problem. To solve this problem, a new framework is proposed in this paper that utilizes both image features and sensor-based techniques to identify whether a fruit has been ripened with chemicals. By employing pH-sensor-based techniques and visual features, it is possible to detect artificially ripened fruits and protect human beings from serious health hazards. Experiments were conducted, and the results indicate that the proposed technique performs well for identifying artificially ripened banana fruits.

N. R. Meghana, R. Roopalakshmi, T. E. Nischitha, Prajwal Kumar
Dynamic Object Indexing Technique for Distortionless Video Synopsis

With the proliferation of surveillance cameras, the volume of captured video keeps growing. Manually analysing and retrieving surveillance video is labour-intensive and costly. It is far more practical to generate a video synopsis so that the video can be browsed efficiently. Here, we describe a novel video synopsis approach to produce condensed video, which uses an object-tracking technique to extract important objects and a seam-carving technique to condense the original video. Experimental results show that the proposed method achieves a high condensation rate while preserving all the important objects of interest. Hence, this method enables users to view the synopsis video effectively.

G. Thirumalaiah, S. Immanuel Alex Pandian
An Intrusion Detection and Prevention System Using AIS—An NK Cell-Based Approach

The widespread use of the internet in key areas has increased unauthorized attacks in the network. An intrusion detection and prevention system detects as well as prevents attacks on the confidentiality, integrity, and availability of the system. In this paper, an Artificial Immune System-based intrusion detection and prevention system is designed using artificial Natural Killer (NK) cells. Random NK cells are generated, and a negative selection algorithm is applied to eliminate self-identifying cells. These cells detect attacks on the network. High-health-value cells that detect a large number of attacks are proliferated into the network. When the proliferation reaches a threshold, the NK cells are migrated into the intrusion prevention system, so NK cells in the IDS are in promiscuous mode and NK cells in the IPS are in inline mode. The technique yields a high detection rate, better accuracy, and a low response time.
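The negative selection step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: traffic is encoded as toy bit strings, a Hamming-distance matching rule with an assumed threshold stands in for the paper's detector matching, and surviving detectors fire only on non-self patterns.

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def negative_selection(self_set, n_detectors, length=8, threshold=2, seed=1):
    """Keep only random detectors farther than `threshold` from every self string."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        d = ''.join(rng.choice('01') for _ in range(length))
        if all(hamming(d, s) > threshold for s in self_set):
            detectors.append(d)
    return detectors

def detects(detectors, sample, threshold=2):
    """A sample is flagged as an attack if any detector lies close to it."""
    return any(hamming(d, sample) <= threshold for d in detectors)

self_set = ['00000000', '00000001', '10000000']   # toy "normal traffic" patterns
dets = negative_selection(self_set, n_detectors=20)
# By construction, no surviving detector ever fires on self traffic.
assert all(not detects(dets, s) for s in self_set)
```

Because survivors are strictly farther than the matching threshold from every self pattern, false alarms on known-normal traffic are ruled out by construction; coverage of attacks depends on how many detectors are generated.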

B. J. Bejoy, S. Janakiraman
HORBoVF—A Novel Three-Level Image Classifier Using Histogram, ORB and Dynamic Bag of Visual Features

Every country has its own currency in the form of coins and paper notes. The currency of each individual country has its own unique features, colors, denominations, and international value. Though it is easy for sighted people to identify a denomination, for blind people it is very difficult; especially when notes of different denominations are of the same size, it becomes almost impossible for them, and there is a chance that they may be cheated by others. This paper proposes and discusses a novel three-stage image classifier algorithm to help blind people identify denominations more accurately and to check whether the currency is real or fake.

Vishwas Raval, Apurva Shah
Neural Network Based Image Registration Using Synthetic Reference Image Rotation

Typical image registration techniques extract a set of features from the target and reference images and search the affine transformation space using a similarity metric. Neural network approaches have typically relied on two choices: geometric transformations to find the correlation between images, and a similarity metric. In this paper, however, we propose and employ a simple and effective method for image registration using neural networks, in which registration is formulated as a classification problem. By generating and learning exhaustive synthetic transformations of the reference image, the appropriate re-transformation for the target image is computed for effective registration. The proposed work is tested on satellite imagery.
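The classify-then-invert idea above can be illustrated with a toy example. Here 90-degree rotations of a tiny grid stand in for the exhaustive set of synthetic transformations, and exact matching stands in for the network's classification; both are illustrative simplifications, not the paper's method.

```python
def rot90(img):
    """Rotate a 2-D list-of-lists image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def best_rotation(reference, target):
    """Classification step: which k in 0..3 makes rot90^k(reference) == target?"""
    cand = reference
    for k in range(4):
        if cand == target:
            return k
        cand = rot90(cand)
    return None

ref = [[1, 2], [3, 4]]
tgt = rot90(rot90(rot90(ref)))        # target = reference rotated 270 degrees
k = best_rotation(ref, tgt)
assert k == 3

# Registration step: apply the complementary rotation to undo the transform.
undo = tgt
for _ in range((4 - k) % 4):
    undo = rot90(undo)
assert undo == ref
```

The paper's version replaces the four discrete rotations with a dense set of learned synthetic transformations and the equality test with a trained classifier, but the register-by-inverting-the-predicted-class structure is the same.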

S. Phandi, C. Shunmuga Velayutham
Design and Development of Efficient Algorithms for Iris Recognition System for Different Unconstrained Environments

Biometrics is one of the important means of identifying human beings in various sectors across the world. This paper presents, in a nutshell, a brief report on biometric recognition and gives a conceptual view of the research work carried out under the topic “Design and Development of Efficient Algorithms for Iris Recognition System for Different Unconstrained Environments”.

M. R. Prasad, T. C. Manjunath
Relation Extraction Using Convolutional Neural Networks

Identifying the relationship between entities plays a key role in understanding any natural language. Relation extraction is the task of finding the relationship between entities in a sentence; relation extraction and named entity recognition are subtasks of information extraction. In this paper, we experiment with and analyze closed-domain relation extraction using three variants of temporal convolutional neural networks on the SemEval-2018 and SemEval-2010 relation extraction corpora. In this approach, word-level features are formed from the distributed representation of the text, and the position information of the entities is used as a feature for the model.

V. Hariharan, M. Anand Kumar, K. P. Soman
Multi-Object Detection Using Modified GMM-Based Background Subtraction Technique

Detection of objects is the most important and challenging task in a video surveillance system, in order to track objects and to determine meaningful and suspicious activities in outdoor environments. In this paper, we implement a novel modified Gaussian mixture model (GMM)-based object detection technique. Object detection performance is improved over the original GMM by adaptively tuning its parameters to deal with dynamic changes occurring in outdoor scenes. The proposed adaptive tuning approach significantly reduces experimentation overhead and minimizes the errors that occur when empirically tuning the traditional GMM technique. The performance of the proposed system is evaluated on an open-source database consisting of seven video sequences with critical background conditions.
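The core of GMM-based background subtraction is a per-pixel statistical model updated with a learning rate. The sketch below is a deliberate simplification for illustration (a single Gaussian per pixel rather than the paper's full mixture, with an assumed learning rate and the common 2.5-sigma foreground rule), showing how a pixel model adapts to the background and flags deviations.

```python
def update_pixel(mean, var, x, alpha=0.05):
    """Running update of a Gaussian background model for one pixel value x."""
    d = x - mean
    mean += alpha * d                 # exponentially weighted mean
    var += alpha * (d * d - var)      # exponentially weighted variance
    return mean, var

def is_foreground(mean, var, x, k=2.5):
    """Flag a pixel as foreground when it deviates more than k sigma."""
    return abs(x - mean) > k * (var ** 0.5)

# Simulate a static background pixel (~100) and then a passing object (200).
mean, var = 100.0, 20.0
for x in [101, 99, 100, 102, 98]:
    mean, var = update_pixel(mean, var, x)

assert not is_foreground(mean, var, 100)   # background stays background
assert is_foreground(mean, var, 200)       # the object is detected
```

The adaptive tuning the paper describes would adjust quantities like `alpha` and the number of mixture components per scene instead of fixing them empirically.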

Rohini Chavan, S. R. Gengaje, Shilpa Gaikwad
Motion Detection Algorithm for Surveillance Videos

Locality Sensitive Hashing (LSH) is an approach extensively used for comparing document similarity. In our work, this technique is incorporated in a video environment for finding dissimilarity between frames in the video so as to detect motion. This has been implemented for single-point camera archiving, wherein the images are converted into a pixel file using a rasterization procedure. Pixels are then tokenized and hashed using a minhashing procedure, which employs a randomized algorithm to quickly estimate the Jaccard similarity. LSH finds the dissimilarity among the frames in the video by breaking the minhashes into a series of bands, each comprising several rows. The proposed procedure is implemented on multiple datasets, and from the experimental analysis we infer that it is capable of isolating the motions in a video file.
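The minhashing step above can be sketched as follows. Frames are reduced to token sets, each of a family of random hash functions contributes the minimum hash over the set, and the fraction of agreeing signature positions estimates Jaccard similarity; a drop between consecutive frames would signal motion. The hash family and token encoding here are illustrative assumptions.

```python
import random

def minhash_signature(tokens, n_hashes=64, seed=7):
    """MinHash signature: one min over universal hashes h(x) = (a*x + b) mod p."""
    rng = random.Random(seed)
    prime = 2**31 - 1
    params = [(rng.randrange(1, 2**31), rng.randrange(2**31))
              for _ in range(n_hashes)]
    return [min((a * hash(t) + b) % prime for t in tokens) for a, b in params]

def estimated_jaccard(sig1, sig2):
    """Fraction of agreeing positions estimates the Jaccard similarity."""
    return sum(x == y for x, y in zip(sig1, sig2)) / len(sig1)

frame_a = set(range(100))        # tokens of one frame
frame_b = set(range(10, 110))    # mostly overlapping next frame
sim = estimated_jaccard(minhash_signature(frame_a), minhash_signature(frame_b))
true_jaccard = len(frame_a & frame_b) / len(frame_a | frame_b)  # = 90/110
assert abs(sim - true_jaccard) < 0.25   # rough agreement with 64 hashes
```

The banding step then groups signature rows into bands so that frame pairs agreeing on any whole band become candidate "similar" pairs, and frames with no such band are flagged as dissimilar (motion).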

M. Srenithi, P. N. Kumar
6D Pose Estimation of 3D Objects in Scenes with Mutual Similarities and Occlusions

Estimation of the six-degrees-of-freedom (6DoF) pose of rigid bodies is an essential issue in fields such as robotics and virtual reality. This paper proposes a method that accurately estimates the 6DoF pose of known rigid bodies from RGB and RGB-D input image data. Additionally, the method works well in scenes in which objects exhibit mutual similarities and occlusion. As one contribution of this paper, a modularized pipeline is proposed that integrates deep learning and multi-view geometry. First, a neural network for instance segmentation is used to identify the general locations of known objects in the images and produce bounding boxes and masks. Then, the 6DoF pose is estimated roughly according to the local features of the RGB-D images and templates. Finally, a purely geometric method is used to refine the estimate. Another contribution of this paper is the correction of misclassification with the help of information retained during network training. The proposed method achieves superior performance on a challenging public dataset.

Tiandi Chen, Lifeng Sun
A Simple and Enhanced Low-Light Image Enhancement Process Using Effective Illumination Mapping Approach

When an image is captured in low light, it suffers from low visibility, and some operations must be performed to overcome this. In this paper, image enhancement using an illumination map is introduced. First, the maximum of the R, G, B values in each pixel of the input image is calculated, and the image is converted to a greyscale image by applying the formulae. Filters are used to remove noise, the choice of filter depending on the type of noise, and the image is thus preprocessed. A logarithmic transformation increases the brightness and contrast of the image by a certain amount. Although earlier methods exist for enhancing low-light images, the illumination-map approach is chosen here because it enhances the image with good quality and efficiency. The illumination map corrects the R, G, B values to obtain the desired image, after which gamma correction is applied. Gamma correction is a nonlinear power transform that increases or decreases the brightness of the image: a low value of gamma increases the brightness, and a high value decreases it. The proposed system is implemented using MATLAB. When different types of images are applied, different contrast and brightness levels are observed, depending on the type of image.
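The gamma correction step described above is a standard power-law transform. The paper implements it in MATLAB; the Python sketch below, on a nested-list single-channel image, is only an illustration of the same transform.

```python
def gamma_correct(image, gamma):
    """Apply the power-law transform: out = 255 * (in / 255) ** gamma."""
    return [[round(255 * (p / 255.0) ** gamma) for p in row] for row in image]

dark = [[50, 100], [150, 200]]
brighter = gamma_correct(dark, 0.5)   # gamma < 1 raises mid-tone brightness
darker = gamma_correct(dark, 2.0)     # gamma > 1 lowers mid-tone brightness

assert all(b >= d for rb, rd in zip(brighter, dark) for b, d in zip(rb, rd))
assert all(k <= d for rk, rd in zip(darker, dark) for k, d in zip(rk, rd))
```

Because the transform is applied to normalized values in [0, 1], the endpoints 0 and 255 are fixed points for any gamma; only the mid-tones move, which is why gamma is suited to brightness adjustment without clipping.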

Vallabhuni Vijay, V. Siva Nagaraju, M. Sai Greeshma, B. Revanth Reddy, U. Suresh Kumar, C. Surekha
Comparison of Particle Swarm Optimization and Weighted Artificial Bee Colony Techniques in Classification of Dementia Using MRI Images

Numerous soft computing techniques are used nowadays to analyze medical images, and the diagnosis of disease is increasingly computerized. This paper compares the performance of Weighted Artificial Bee Colony and Particle Swarm Optimization in the diagnosis of dementia using MRI images. For the analysis, cross-sectional MRI of 235 subjects collected from OASIS is used. By adjusting the weights of both optimization techniques in a proper manner, optimized results can be reached. These techniques classify the cross-sectional images into three categories and give an almost equal Goodness Detection Ratio of 78%, along with different regression ratios.

N. Bharanidharan, Harikumar Rajaguru
Automatic Segmentation of Malaria Affected Erythrocyte in Thin Blood Films

In today’s world, a highly precise diagnostic method needs to be developed for the management of febrile sickness, to ensure that medicines are prescribed only when necessary. The proposed algorithm is applied to Giemsa-stained thin blood films. Using the triangle thresholding technique, the erythrocytes are segmented, and HSV color-space-based feature extraction is applied to the segmented erythrocytes. The extracted features are given to an SVM classifier to identify whether the query sample is parasite-affected or normal. The performance of this algorithm is evaluated on a database collected from the CDC website together with 20 samples taken manually from a local hospital.

Komal B. Rode, Sangita D. Bharkad
Detection and Recognition of Vehicle Using Principal Component Analysis

The detection and classification of moving objects is considered an important part of research in video processing and in real-time applications for surveillance and tracking of vehicles. In scientific terms, image processing is any form of signal processing where the input is an image and the output may be either an image or a set of characteristics or parameters related to the image. The proposed work detects and classifies vehicles in a given video. It consists of two modules: the first uses the GLOH algorithm for feature extraction and feature reduction, and the second classifies the vehicle from an input video frame using PCA. The final result is obtained by integrating these two modules.

Kolandapalayam Shanmugam Selvanayaki, Rm. Somasundaram, J. Shyamala Devi
Convolution Neural Networks: A Case Study on Brain Tumor Segmentation in Medical Care

Image segmentation divides a medical image into parts and extracts the regions of interest. This study involves images of brain tumors, where the tumor is segmented from the image and analyzed accurately and efficiently. The Convolutional Neural Network (CNN) has made tremendous progress in the fields of medicine and information technology. With a CNN model, one may not only be able to recognize higher-risk patients so that they get the immediate aid they require, but also communicate through the network to clinicians and surgeons, eventually improving the standard of patient care in the medical system.

Jayanthi Prisilla, V. Murali Krishna Iyyanki
Vision-Based Algorithm for Fire Detection in Smart Buildings

Recent technological advancement has opened the space for a gradual increase in the number of smart buildings. With such development, public safety and security have become a matter of concern, especially in the area of fire accidents. Conventional fire detection systems usually work on sensors and take time to detect fire. This work presents an early fire detection system that, unlike conventional systems, is cost-effective with a high fire detection rate. The proposed algorithm uses features such as color, increase in area, and intensity flicker for early detection of fire. Segmentation of fire-colored regions is done with the help of the L*a*b*, YCbCr, and RGB color spaces. Analysis of the fire, i.e., fire area, spread, temporal information, direction, and average growth rate, is performed using optical flow and blob analysis. Accuracy and F-measure are used to evaluate the proposed system. Experimental results show that the average accuracy of the system is above 80%, which is promising for video-based detection.
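The color-segmentation stage can be illustrated with a rule commonly used in YCbCr-based fire detection: a pixel is a fire-color candidate when Y > Cb and Cr > Cb. The exact thresholds in the paper may differ; the conversion below uses the standard ITU-R BT.601 formula, and the rule is shown only as an illustrative candidate test.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 RGB-to-YCbCr conversion (full-range, offset 128)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_fire_coloured(r, g, b):
    """A common fire-pixel candidate rule: bright (Y > Cb) and red-shifted (Cr > Cb)."""
    y, cb, cr = rgb_to_ycbcr(r, g, b)
    return y > cb and cr > cb

assert is_fire_coloured(255, 120, 20)      # bright orange flame pixel
assert not is_fire_coloured(30, 60, 200)   # blue sky pixel
```

Color alone over-segments (sunsets, red clothing), which is why the paper combines it with area growth, flicker, and optical-flow analysis before declaring fire.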

Patel Abhilasha Paresh, Latha Parameswaran
Comparative Performance Analysis of Local Feature Descriptors for Biomedical Image Retrieval

The biomedical imaging field has been growing enormously over the last decade. Medical images are used and stored continuously for diagnosis as well as research purposes. Several methods exist that aim to provide real-time retrieval of medical images from such storage repositories. Therefore, in this paper, we strive to present an exhaustive performance comparison of existing and recently published state-of-the-art local feature descriptors for the retrieval of CT and MR images. All the compared methods have been tested on two standard test databases, namely NEMA CT and NEMA MRI. Additional experiments have been conducted to analyze the noise robustness of all the compared approaches. Lastly, the methods are also compared in terms of their computational complexity and the total CPU time taken to retrieve images corresponding to a given query image.

Suchita Sharma, Ashutosh Aggarwal
Detection of Liver Tumor Using Gradient Vector Flow Algorithm

A liver tumor, also known as a hepatic tumor, is a type of growth found in or on the liver. Identifying the tumor location can be tedious and error-prone, and needs an expert’s study. This paper presents a technique to segment the liver tumor using the Gradient Vector Flow (GVF) snakes algorithm. Since the images need to be insensitive to noise before the snakes algorithm is initiated, a Wiener filter is applied to remove the noise. The GVF snake starts its process by extending an initial boundary. The GVF forces are calculated and drive the algorithm to stretch and bend the initial contour towards the region of interest, owing to the difference in intensity. The images are classified into tumor and non-tumor categories by an Artificial Neural Network classifier, depending on the extracted features, which showed notable dissimilarity between normal and abnormal images.

Jisha Baby, T. Rajalakshmi, U. Snekhalatha
Hyperspectral Image Classification Using Semi-supervised Random Forest

In this paper, a hyperspectral image classification technique is proposed using semi-supervised random forest (SSRF). Robust node splitting in the random forest requires enormous training data, which is scarce in remote sensing applications. In order to overcome this drawback, we propose utilizing unlabeled data in conjunction with labeled data to assist the splitting process. Moreover, in order to tackle the curse of dimensionality associated with a hyperspectral image, we explore nonnegative matrix factorization (NMF) to remove redundant information. Experimental results confirm the efficacy of the proposed method.

Sunit Kumar Adhikary, Sourish Gunesh Dhekane
Clustering of Various Parameters to Catalog Human Bone Disorders Through Soft Computing Simulation

Every minute, nearly 20 fractures occur in the world due to bone disorders, yet people are often unable to differentiate between such disorders. This chapter is a novel approach toward the differentiation of bone disorders such as osteoporosis and osteopenia, which are influenced by several parameters. Accordingly, five parameters, namely calcium, phosphate, vitamin D3, parathyroid hormone (PTH) level, and calcitonin level, are considered in this study to categorize the bone disorders. The present approach attempts to combine the clinical data measured from each patient with their respective bone mineral density value for better classification, and is a unique study providing combined information from both clinical and image processing studies. For this purpose, the above-mentioned parameters and bone density values were observed from ten different patients. All these data were used as input for soft computing using MATLAB for further processing. Initially, an unsupervised mapping classifier is adopted to classify bone disorders, in which the clinical parameters are compared with the bone density value using the k-means clustering algorithm. The prime reason for using the k-means technique is its ability to classify the inputs based on the distance between the input seeds: with reference to the perpendicular distance between the seed inputs, the bone disorders have been cataloged, and repeated iterations lead to the best clustering results.
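The k-means step above can be sketched with synthetic data. Here two features (standing in for the chapter's five clinical parameters plus bone density) are grouped by iterative assignment to the nearest centroid; the numeric values are invented for illustration, not taken from the patients studied.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means on tuples: assign to nearest centroid, then re-average."""
    centroids = points[:k]                       # simple initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        centroids = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return centroids, clusters

# Synthetic (calcium-like, bone-density-like) pairs for two groups.
healthy = [(9.5, 1.1), (9.8, 1.2), (9.6, 1.0)]
disorder = [(7.1, 0.6), (7.4, 0.5), (7.0, 0.7)]
centroids, clusters = kmeans(healthy + disorder, k=2)
assert sorted(len(c) for c in clusters) == [3, 3]   # two clean groups recovered
```

With well-separated groups like these, the iterations converge to one centroid per disorder category, which is the cataloging behaviour the chapter relies on.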

S. Ramkumar, R. Malathi
IoT-Based Embedded Smart Lock Control Using Face Recognition System

Smart home security and remote monitoring have become vital and indispensable in recent times, and with the advent of new concepts like the Internet of Things and the development of advanced authentication and security technologies, the need for smarter security systems has only been growing. The design and development of an intelligent web-based door lock control system using face recognition technology, for authentication, remote monitoring of visitors, and remote control of the smart door lock, is reported in this paper. The system uses Haar-like features for face detection and the Local Binary Pattern Histogram (LBPH) for face recognition. The system also includes web-based remote monitoring, an authentication module, and a bare-bones embedded IoT server, which transmits live pictures of visitors via email along with an SMS notification; the owner can then remotely control the lock by responding to the email with predefined security codes to unlock the door. The system finds wide application in smart homes where the physical presence of the owner at all times is not possible, and where remote authentication and control are desired. The system has been implemented and tested using the Raspberry Pi 2 board; Python along with OpenCV is used to program the various face recognition and control modules.

J. Krishna Chaithanya, G. A. E. Satish Kumar, T. Ramasri
MediCloud: Cloud-Based Solution to Patient’s Medical Records

Cloud computing has lately emerged as a new standard for hosting and delivering services over the internet. As per the Code of Federal Regulations, hospitals are required to retain radiological records for a duration of 5 years. Healthcare organizations are considering cloud computing as an attractive option for managing radiological imaging data. In this paper, we propose a public cloud-based “Infrastructure as a Service (IaaS)” model as a solution for maintaining the medical records of patients. New patient information is registered at the hospital with the help of a unique identification number. For each user, a bucket is created in the Amazon AWS cloud to store or retrieve the data. Data access is performed with the help of a username and password provided by the web link embedded in the unique QR code. Two QR codes are used: the first gives access to the login page, whereas the latter is used for accessing the corresponding user bucket.

G. B. Praveen, Anita Agrawal, Jainam Shah, Amalin Prince
A Trio Approach Satisfying CIA Triad for Medical Image Security

Medical image security has great appeal owing to the challenges in transmitting medical images through an open network channel. Even a small change in medical information can lead to a wrong diagnosis. Hence, methods to prevent attacks on and tampering with medical images in the open channel are in great demand. This work proposes an attractor-assisted medical image watermarking and double chaotic encryption scheme which preserves the confidentiality, integrity and authenticity of the medical image. Patient diagnosis details are compressed using a lossless compression approach and embedded in the pixels of the patient’s DICOM image. Attractor-based selective watermarking is implemented using the integer wavelet transform. Encryption is carried out by employing chaotic maps with confusion and diffusion operations. The effectiveness of the work is evaluated using standard analyses, namely MSE, PSNR, SSIM, entropy, correlation, histogram and key space.
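The diffusion operation in chaotic encryption schemes of this kind can be illustrated with a logistic-map keystream XORed into the pixel stream. The map parameter, key value, and byte quantization below are arbitrary examples for illustration; they are not the paper's actual maps or keys, and a real scheme also includes the confusion (pixel-permutation) stage.

```python
def logistic_keystream(x0, n, r=3.99):
    """Iterate the logistic map x -> r*x*(1-x) and quantize each state to a byte."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_diffuse(pixels, key):
    """XOR pixels with the chaotic keystream; applying it twice decrypts."""
    stream = logistic_keystream(key, len(pixels))
    return [p ^ s for p, s in zip(pixels, stream)]

plain = [12, 200, 45, 99, 180]
cipher = xor_diffuse(plain, key=0.3141592)
assert cipher != plain
assert xor_diffuse(cipher, key=0.3141592) == plain   # XOR is self-inverse
```

The key space and entropy analyses mentioned in the abstract measure exactly how sensitive such a keystream is to the initial value `x0` and how uniform the cipher bytes are.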

Sivasaranyan Guhan, Sridevi Arumugham, Siva Janakiraman, Amirtharajan Rengarajan, Sundararaman Rajagopalan
A Novel Hybrid Method for Time Series Forecasting Using Soft Computing Approach

Improving the forecasting accuracy of time series is important and has always been a challenging research domain. For many decades, the Auto-Regressive Integrated Moving Average (ARIMA) model has been popularly used for statistical forecasting; however, it can only forecast the linear component accurately because it cannot capture nonlinear patterns. Therefore, we propose a hybrid model of ARIMA and SVM, since the Support Vector Machine (SVM) has demonstrated great results in solving nonlinear regression estimation problems, while the linear strength of ARIMA is retained. Comparison with other models using different datasets has been carried out, and the results are very promising.
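The hybrid decomposition can be sketched with stdlib-only stand-ins: a least-squares line plays the role of ARIMA's linear model, a kernel smoother plays the role of the SVM fitted to its residuals, and the final estimate is the sum of the two. The toy series and both stand-in models are assumptions for illustration, not the paper's configuration, and the check is in-sample only.

```python
import math

def fit_linear(ys):
    """Least-squares line y = a*t + b over t = 0..n-1 (linear-model stand-in)."""
    n = len(ys)
    ts = list(range(n))
    tbar, ybar = sum(ts) / n, sum(ys) / n
    a = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
         / sum((t - tbar) ** 2 for t in ts))
    return a, ybar - a * tbar

def kernel_predict(ts, rs, t, h=0.5):
    """Nadaraya-Watson smoother on residuals (nonlinear-model stand-in)."""
    ws = [math.exp(-((t - ti) / h) ** 2) for ti in ts]
    return sum(w * r for w, r in zip(ws, rs)) / sum(ws)

# Toy series: linear trend plus a nonlinear (sinusoidal) component.
series = [0.5 * t + math.sin(t) for t in range(20)]
a, b = fit_linear(series)
resid = [y - (a * t + b) for t, y in enumerate(series)]

t_check = 19                                  # in-sample check at the last point
linear_only = a * t_check + b
hybrid = linear_only + kernel_predict(range(20), resid, t_check)
assert abs(hybrid - series[19]) < abs(linear_only - series[19])
```

The point of the decomposition is that each component models what it is good at: the linear model removes the trend, and the residual model is left with the purely nonlinear structure.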

Arpita Sanghani, Nirav Bhatt, N. C. Chauhan
Medical Image Classification Using MRI: An Investigation

The main objective of this paper is to review the performance of various machine learning classification techniques currently used for magnetic resonance imaging. The search for the best classification technique is the main motivation for the paper. In magnetic resonance imaging, the detection of various diseases may be simple, but physicians need quantification for further treatment. Machine learning, along with digital image processing, thus aids the diagnosis of disease and creates synergy between the computer and the radiologist. The review of machine learning classification based on the support vector machine, discrete wavelet transform, artificial neural network, and principal component analysis reveals that the discrete wavelet transform, combined with other highly used methods such as PCA and ANN, can bring an accuracy rate as high as 100%. The hybrid technique provides a second opinion to the radiologist in taking the decision.

R. Merjulah, J. Chandra
Tumor Detection and Analysis Using Improved Fuzzy C-Means Algorithm

The formation of abnormal cells in the brain is the major cause of tumors. With an estimated 229,000 deaths as of 2015, it has become an issue to be dealt with, and low awareness of brain tumors leads to many unaccounted deaths. We therefore aim to develop an app which can detect the tumor and give additional information related to the detected tumor. This app takes in an MRI image and performs preprocessing followed by clustering, segmentation, and binarization. The preprocessing involves conversion of the image into grayscale and noise filtering. We use an improved fuzzy c-means algorithm for clustering and segmentation. Binarization mainly aims at calculating the tumor size, which is useful for further analysis. The improved fuzzy c-means algorithm overcomes various constraints of the k-means algorithm, such as time complexity, processing of noisy images, and memory space.
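The standard fuzzy c-means iteration underlying the improved variant can be sketched on 1-D intensities. Unlike k-means, each pixel receives a membership degree in every cluster rather than a hard label, which is what makes the method more tolerant of noise. The fuzzifier m = 2 and the toy pixel values are illustrative assumptions; the paper's improvements are not reproduced here.

```python
def fcm(xs, c=2, m=2.0, iters=30):
    """Fuzzy c-means on 1-D data: alternate membership and center updates."""
    centers = [min(xs), max(xs)]                      # simple initialisation
    u = []
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - v) + 1e-9 for v in centers]  # avoid division by zero
            u.append([1.0 / sum((d[k] / d[j]) ** (2 / (m - 1)) for j in range(c))
                      for k in range(c)])
        centers = [sum((u[i][k] ** m) * xs[i] for i in range(len(xs)))
                   / sum(u[i][k] ** m for i in range(len(xs))) for k in range(c)]
    return centers, u

pixels = [20, 25, 30, 200, 210, 190]      # dark background vs bright tumour region
centers, memberships = fcm(pixels)
assert abs(centers[0] - 25) < 10 and abs(centers[1] - 200) < 10
assert memberships[3][1] > 0.9            # bright pixel belongs to bright cluster
```

Pixels near a cluster boundary would receive memberships close to 0.5 in each cluster, and thresholding those memberships gives the binarized tumour mask used for size calculation.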

R. Swathika, T. Sree Sharmila, M. Janani Bharathi, S. Jacindha
An Off the Shelf CNN Features Based Approach for Vehicle Classification Using Acoustics

Vehicle classification is a trending area of research in Intelligent Transport Systems. Vehicle recognition can help traffic policy makers, public safety organizations, insurance companies, etc., and can assist in various applications like automatic toll collection, emissions/pollution estimation, and traffic modelling. Many methods, both infrastructure-based and infrastructure-less, have been proposed for vehicle classification, but they have certain disadvantages. In this paper, we explore the possibility of using off-the-shelf Convolutional Neural Network (CNN) features for commuter vehicle classification using acoustics. To extract features from acoustic recordings taken from vehicles, a simple CNN is designed. These features are used to classify vehicles into five main categories, namely car, bus, plane, train, and three-wheeler, using a Support Vector Machine (SVM). This approach is tested on a dataset of 4789 recordings and gives good accuracy compared to simple Mel Frequency Cepstral Coefficient (MFCC) feature-based deep learning and machine learning approaches.

Anam Bansal, Naveen Aggarwal, Dinesh Vij, Akashdeep Sharma
Conjunctival Vasculature Liveness Detection Based on DCT Features

Iris liveness detection algorithms are developed to verify that iris images were acquired from a live person who was actually present at the time of data capture. However, the quality of the acquired images decides the success rate, and systems can be spoofed using fake photographs, video recordings, printed contact lenses, etc. The conjunctival vasculature can be used as a biometric trait to identify liveness; this paper focuses on a novel method to extract the significant portion of the off-angle eye called the sclera. DCT-transform-based statistical features are used to determine liveness, and the system is tested using extreme learning machines.
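The DCT feature extraction referred to above rests on the DCT-II, whose low-frequency coefficients summarise texture while high-frequency ones capture fine vessel detail; statistics of these coefficients then feed the classifier. Below is a minimal unnormalized 1-D DCT-II for illustration (the 2-D image case applies it along rows and then columns); the paper's exact coefficient selection and statistics are not reproduced.

```python
import math

def dct_ii(xs):
    """Unnormalized 1-D DCT-II: X_k = sum_i x_i * cos(pi*k*(2i+1)/(2n))."""
    n = len(xs)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(xs)) for k in range(n)]

flat = [10.0, 10.0, 10.0, 10.0]          # constant (texture-free) signal
coeffs = dct_ii(flat)
assert abs(coeffs[0] - 40.0) < 1e-9      # all energy in the DC term
assert all(abs(c) < 1e-9 for c in coeffs[1:])
```

A textured sclera patch, by contrast, spreads energy into the AC coefficients, and it is the distribution of that energy that distinguishes live vasculature from printed or replayed spoofs.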

S. N. Dharwadkar, Y. H. Dandawate, A. S. Abhyankar
Unusual Social Event Detection by Analyzing Call Data Records

The availability of call data records provides an opportunity to identify unusual social events occurring in society in an effective manner. The visitors participating in events are the most important stakeholders for event organizers to improve their success rate. Visitors’ global positioning system (GPS)-enabled devices provide spatial data that are used to identify the visitors’ presence during event time. Mobile call data records (CDRs) represent a real spatial data source for detecting visitors present, but provide less accuracy in terms of time and space resolution. Using spatial data, it is possible to detect unusual events. In this paper, a method for detecting unusual social events in various locations is proposed. The CDRs are used to detect new visitors who participated in an event, and the total number of participating visitors is computed for a free-to-view social event. The information extracted from preprocessed CDRs is utilized to identify new visitors and to compute the total visitors present in an event place effectively. The tower-wise visitor details and CDR details provide information about the visitors’ movement as well as the CDR distribution pattern during the event time. The experiment is conducted on real CDRs provided by a telecommunication service provider (TSP) serving a large city. Results show that the proposed method provides more accurate identification of visitors involved in unusual social events compared to state-of-the-art methods.

V. P. Sumathi, K. Kousalya, V. Vanitha, N. Suganthi
Drunk Driving and Drowsiness Detection Alert System

The advancement of safety features to avert drunk and drowsy driving has been one of the leading technical challenges in the automobile business, especially in this modern age where serious work pressure has led to higher crash rates. To prevent such accidents, this paper discusses the use of nonintrusive techniques based on visual features to determine whether the driver is in an alert state. Drowsiness detection has been implemented using Haar cascades for face and eye-closure detection, and yawn detection has been implemented using template matching in Visual Studio 2013. For drunk-state detection, an alcohol sensor (MQ-3) has been used to avoid drunk driving. If the driver is found to be drunk or drowsy, an alarm is generated and the driver is alerted using a buzzer and a vibrator that can be placed in the seatbelt or under the driver’s seat, thus preventing mishaps from taking place.

Vivek Nair, Nadir Charniya
A Survey on Intelligent Face Recognition System

A face recognition system is a computer’s capability to perform two fundamental operations: the detection and the recognition of a human face. With the advancement of machine learning algorithms and image processing techniques, the accuracy of face recognition systems has significantly improved. The objective of this paper is to give a detailed survey of a few face recognition algorithms with their features and limitations. The basics of face detection and face recognition techniques, along with their approaches, are described.

Riddhi Sarsavadia, Usha Patel
Real-Time Health Monitoring System Implemented on a Bicycle

It is evident that many systems related to health monitoring have been developed and are being developed these days. The main requirement is continuous real-time health monitoring, where the data, i.e. the body vitals/health parameters, can be easily understood by the user through an application interface and shared with the corresponding physician, who can be aware of the patient’s vitals at all times. This is a study of methods by which a real-time health monitoring system can become an integral part of society, mainly aimed at gaining awareness of one’s own health, maintaining a health record linked to the web, and getting information to doctors in time for continuous monitoring; obese people are also able to control their weight. The project, ‘Real-Time Health Monitoring System Implemented on a Bicycle’, is aimed at the design and assembly of a non-invasive health monitoring system. The conversion of cycling energy is also taken into account to supply the health monitoring devices and charge the display.

Rohith S. Prabhu, O. P. Neeraj Vasudev, V. Nandu, J. Lokesh, J. Anudev
A Fuzzy Rule-Based Diagnosis of Parkinson’s Disease

A neurodegenerative brain disorder is the root cause of Parkinson’s Disease (PD); neurodegeneration is the process of impairment of brain cells. PD is diagnosed through clinical methods, and this research work aims to help identify the intensity of PD. A Fuzzy Inference System (FIS) is used to identify PD and its intensity. Fuzzy rules, Mamdani fuzzy inference, membership functions, and defuzzification are the processes used to obtain accurate results. The Oxford Parkinson’s Disease Detection Dataset is used for this research work. Among the 23 fields in the dataset, only four fields, FoH, DFA, Spread1, and Spread2, are chosen for analyzing the PD diagnosis and intensity. The values of these four fields are categorized into three sets: one for PD-affected subjects, a second of values common to both PD-affected and healthy subjects, and a third for completely healthy subjects. Intensity values are measured from low to maximum as 0–100, and eighty-one rules are framed to calculate the PD intensity. We hope this FIS model is a novel method for identifying PD intensity and helps doctors to diagnose and treat patients in an effective way.
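The fuzzify / fire-rules / defuzzify pipeline above can be sketched as a tiny Mamdani-style fragment on one input. The membership ranges, the two rules, and their output levels below are invented for illustration; they are not taken from the paper's 81-rule base or its actual Spread1 partitions.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pd_intensity(spread1):
    """Weighted-average defuzzification of two hypothetical rules, output 0-100."""
    low = tri(spread1, -8, -7, -5)     # hypothetical 'healthy' range
    high = tri(spread1, -6, -4, -2)    # hypothetical 'affected' range
    # Rule 1: low spread1 -> intensity 10; Rule 2: high spread1 -> intensity 80.
    num = low * 10 + high * 80
    den = low + high
    return num / den if den else 0.0

assert pd_intensity(-7) < 20   # clearly inside the 'healthy' range
assert pd_intensity(-4) > 60   # clearly inside the 'affected' range
```

Values in the overlap zone fire both rules partially, so the defuzzified intensity moves smoothly between 10 and 80 instead of jumping, which is the practical appeal of fuzzy inference for expressing disease severity.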

D. Karunanithi, Paul Rodrigues
A Comprehensive Study of Retinal Vessel Classification Methods in Fundus Images for Detection of Hypertensive Retinopathy and Cardiovascular Diseases

Quantitative studies for the classification of retinal vessels using new computer-assisted retinal fundus imaging systems have allowed researchers to understand the influence of systemic diseases on retinal vascular caliber. These caliber changes reflect the cumulative response to cardiovascular risk factors. Hypertensive retinopathy can be detected at an earlier stage by analyzing the retinal image. It is now evident that there is a relationship between changes in retinal vessel structure and common diseases such as hypertension, stroke, and cardiovascular disease, which can be detected noninvasively from retinal fundus images. By applying the proposed image processing approach, the aforementioned diseases can be diagnosed earlier from the retinal fundus image. To achieve precise measurement of retinal image parameters, the classification of blood vessels into arteries and veins is necessary, and this classification can be achieved from the retinal fundus image. The vessel classification is based on visual and geometric features, and the classified arteries and veins are used for the detection of hypertensive retinopathy, stroke, and cardiovascular risk factors; such classification is essential for early diagnosis of these diseases. A narrower retinal arteriolar caliber, associated with older age, predicts the incidence of diabetic retinopathy and cardiovascular risk; similarly, a wider retinal venular caliber, associated with younger age, predicts the risk of stroke and coronary heart disease. This suggests the possibility of using this model of fundus image in classification approaches. Finally, the selected attributes are classified using a genetic algorithm with a radial basis function neural network for diagnosis of the disease, improving classification accuracy at a lower computational cost.

J. Anitha Gnanaselvi, G. Maria Kalavathy
Estimation of Parameters to Model a Fabric in a Way to Identify Defects

Fabric defect detection is a quality check process which can locate and identify defects caused during the production process in the textile industry. Automated defect identification systems use computer vision and pattern recognition techniques whose performance depends mainly on the quality and quantity of the input dataset. A wide range of parameters is considered in the decision process, which compromises the accuracy of the system. This paper aims to estimate suitable parameters for the defect-free fabric which can be used by traditional methods to identify defects efficiently. A Hough-transform-based method is proposed to identify the parameters, and the algorithm is tested on various fabrics. The proposed method gives promising results when the horizontal and vertical threads are evident in the image.
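
A Hough accumulator of the kind the paper builds on can be sketched as follows; the synthetic one-thread "fabric" image and the accumulator resolution are illustrative assumptions, not the authors' data or parameters.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Vote in (rho, theta) space for every edge pixel: each pixel lies on
    one line per theta, with rho = x*cos(theta) + y*sin(theta)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    rhos = np.arange(-diag, diag + 1)
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rho + diag, np.arange(n_theta)] += 1   # one vote per theta
    return acc, rhos, thetas

# Synthetic "fabric" with one horizontal thread at row 25
img = np.zeros((50, 50), dtype=np.uint8)
img[25, :] = 1
acc, rhos, thetas = hough_lines(img)
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
print(rhos[r_idx], np.rad2deg(thetas[t_idx]))   # strongest peak: rho=25, theta=90
```

Peaks at theta near 0 and 90 degrees correspond to the vertical and horizontal threads whose spacing the paper uses as fabric parameters.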

V. Subhashree, S. Padmavathi
Error Detection Technique for A Median Filter Using Denoising Algorithm

This paper presents a modified design of an error detection technique for a median filter. While processing images with a digital processing system, the acquisition stage captures impulsive noise along with the images, and a median filter is the classic filter used to eliminate it. Sliding windows of n × n matrices are created from the original image. The generated median is compared with the original pixel value at the center of the selected portion of the matrix corresponding to the image. If an error is found, the new median value replaces the existing one. A line buffer and a register bank are used along with the median filter for storing the pixel values. Using the buffers, the time taken for the median calculation is reduced, and as a result the total time taken for processing the image is also reduced.
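
The detect-and-replace logic described above can be sketched in Python/NumPy; the paper's design is a hardware one with line buffers and a register bank, so this software sketch, with an assumed 3 × 3 window, only illustrates the median comparison step.

```python
import numpy as np

def median_denoise(img, k=3):
    """Slide a k x k window over the image; if the centre pixel deviates from
    the window median it is treated as an impulse error and replaced by the
    median (border pixels are left untouched)."""
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            window = img[i - r:i + r + 1, j - r:j + r + 1]
            med = np.median(window)
            if img[i, j] != med:        # error detected at the centre pixel
                out[i, j] = med
    return out

# A flat patch corrupted by a single salt-noise pixel
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255
clean = median_denoise(img)
print(clean[2, 2])   # 10 -- the impulse is replaced by the window median
```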

Maymoona Rahim, Ruksana Maitheen
Retinal Image Processing and Classification Using Convolutional Neural Networks

This study aims to develop a system to distinguish retinal disease from fundus images. Precise and programmed analysis of retinal images has been considered as an effective way for the determination of retinal diseases such as diabetic retinopathy, hypertension, arteriosclerosis, etc. In this work, we extracted different retinal features such as blood vessels, optic disc and lesions and then applied convolutional neural network based models for the detection of multiple retinal diseases with fundus photographs involved in structured analysis of the retina (STARE) database. Augmentation techniques like translations and rotations are done for expanding the number of images. The blood vessel extraction is done with the help of morphological operations like dilation and erosion and enhancement operations like CLAHE and AHE. The optic disc is localized by the methods such as opening, closing, Canny’s edge detection and finally thresholding the image after filling the holes. The bright lesions (exudates) inside the retina are detected by the filtering operations and contrast enhancement after the removal of the optic disc. In this study, we experimented with different retinal features as input to convolutional neural networks for effective classification of retinal images.

Karuna Rajan, C. Sreejith
Comparison of Thermography and 3D Mammography Screening and Classification Techniques for Breast Cancer

Breast cancer is, without doubt, one of the leading causes of fatality among women in the world after lung cancer. Awareness of and access to better screening and treatment protocols will have a major impact on improving survival rates. Moving away from the traditional methods of mammography and biopsy, newer techniques provide faster and more efficient results to ensure an early start of treatment. Therefore, a comparison study has been performed to weigh the pros and cons of thermography and 3D mammography as screening methods, followed by their respective processing and classification procedures. The ease of screening, extent of radiation, percentage of false positives, efficient segmentation, clustering, and novel classification are all considered, and a conclusive result is obtained determining the better of the two processes. This could potentially revolutionize the way breast cancer is diagnosed and treated for women of all ages and walks of life.

Sureshkumar Krithika, K. Suriya, R. Karthika, S. Priyadharshini
IEFA—A Fuzzy Framework for Image Enrichment

In this work, an Image Enhancement Fuzzy Algorithm (IEFA), a technique for image enhancement has been proposed and developed. IEFA formulates the mapping from a given input to an output using fuzzy logic. IEFA improves the contrast of low-contrast images. The technique begins the process of image enrichment by modifying membership functions and designing fuzzy if–then rules that exist as a sophisticated bridge between human knowledge on one side and the numerical framework of the computers on the other side. The algorithm converts image properties into fuzzy data and further fuzzy data into crisp output through defuzzification. Further, to evaluate the performance of the proposed technique, the developed technique has been compared with “Histogram Equalization (HE) and Contrast Limited Adaptive Histogram Equalization (CLAHE).” It has been observed that PSNR and CII of the proposed algorithm (using a test image) are 25.56 and 1.13, respectively. These metrics are 0.078 and 6.603% more effective than the metrics of existing algorithms.

Ankita Sheoran, Harkiran Kaur
Hidden Markov Random Field and Gaussian Mixture Model Based Hidden Markov Random Field for Contour Labelling of Exudates in Diabetic Retinopathy—A Comparative Study

Diabetic Retinopathy (DR) is one of the important causes of blindness in diabetic patients. Diabetes that affects the retina is called diabetic retinopathy. Diabetic retinopathy occurs due to the damage of blood vessels in retina and increase in the level of glucose. Different pathologies are normally seen in DR such as microaneurysms, hard exudates, soft exudates, cotton wool spots and haemorrhages. We have done a comparative study of Hidden Markov Random Field (HMRF) and Gaussian Mixture Model (GMM) based HMRF for automatic segmentation of exudates and the performance analysis of both methods. The preprocessing consists of candidate extraction step using greyscale morphological operation of closing and initial labelling of exudates using K-means clustering followed by contour detection. In contour detection, we have analysed two approaches, one is GMM-based HMRF and the other is HMRF. DIARETDB1 is the dataset used.
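
The K-means initial-labelling step mentioned in the preprocessing can be illustrated with a plain Lloyd's-iteration sketch on 1-D intensities; the sample pixel values and k = 2 are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def kmeans(pixels, k=2, iters=20, seed=0):
    """Plain Lloyd's K-means on intensity values: assign each pixel to the
    nearest centre, then move each centre to its cluster mean."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centres[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean()
    return labels, centres

# Bright "exudate-like" intensities vs. dark background intensities
pix = np.array([200, 210, 205, 30, 25, 35, 220, 28], dtype=float)
labels, centres = kmeans(pix, k=2)
print(sorted(np.round(centres).astype(int)))   # roughly [30, 209]
```

The bright cluster then seeds the exudate contours that HMRF or GMM-based HMRF refines.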

E. Revathi Achan, T. R. Swapna
AUGEN: An Ocular Support for Visually Impaired Using Deep Learning

Among the wide variety of technologies, mobile phone technology has become popular and the usage of mobile phone applications is increasing day by day. Most modern mobiles are able to capture photographs. This can be used by the visually impaired to capture images of their surroundings, which are then used to generate sentences that can be read out to give visually impaired people better knowledge of their surroundings. The content of an image is described to them automatically, so that they can avoid seeking help from people around them. Computer vision is a field which can be used for gaining information from images or videos; tasks which the human visual system can do can be done using computer vision. Visually impaired people can use these technologies to get a better understanding of their surroundings.

Reema K. Sans, Reenu Sara Joseph, Rekha Narayanan, Vandhana M. Prasad, Jisha James
A Novel Flight Controller Interface for Vision-Guided Autonomous Drone

Aerial vehicles are rapidly finding a place in a variety of applications in the service sector such as transport assistance and other logistics-related sectors. Most applications require local perception for the aerial vehicle, for which a vision system is the most powerful source of information. Such local perception may be designed based on landmarks on the ground below. This paper details work in which a computer vision system aids in landmark identification, namely, a line strip on the ground below. The vision system is implemented in a real-time controller, and a novel and simple solution is presented to interface the vision controller to the flight controller. The aerial vehicle under consideration is of the quadrotor type. The paper presents the details of the image processing algorithm along with the hardware details of the flight control interface. The vision system is developed to identify line strips which share sufficient contrast with the background under daylight conditions. The image analysis is performed to continuously extract the lateral position and yaw error. Such an interface serves as a computationally cheap and easy alternative to traditional programmable flight control interface solutions such as MAVLINK.

R. Senthilnathan, Niket Ahuja, G. Vyomkesh Bhargav, Devansh Ahuja, Adish Bagi
Sliding Discrete Fourier Transform for 2D Signal Processing

The Discrete Fourier Transform (DFT) is the most frequently used method to determine the frequency content of digital signals. Since recomputing the DFT for every window position is expensive, this paper gives an algorithm for fast implementation of the DFT on two-dimensional (2D) sliding windows. The proposed 2D Sliding DFT (SDFT) algorithm computes the current window's DFT bins directly, making use of the precalculated bins of the previous window. For a 2D input signal, the sliding transform is thus accelerated. The computational requirement of the proposed algorithm is found to be the lowest among existing methods, and its output is mathematically equivalent to that of the direct DFT at all pixel positions.
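
The bin-update idea, deriving the current window's DFT from the previous window's bins in O(N) instead of a fresh O(N log N) FFT, can be shown for the 1-D case, from which a 2-D SDFT is built by applying it along rows and then columns; this sketch is illustrative, not the authors' implementation.

```python
import numpy as np

def sliding_dft_1d(x, N):
    """Sliding DFT: for a window shifted by one sample,
    X_new[k] = (X_old[k] - x_dropped + x_added) * exp(2j*pi*k/N)."""
    k = np.arange(N)
    twiddle = np.exp(2j * np.pi * k / N)
    X = np.fft.fft(x[:N])              # first window computed directly
    yield X.copy()
    for n in range(len(x) - N):
        X = (X - x[n] + x[n + N]) * twiddle   # O(N) update per shift
        yield X.copy()

# Bins at every window position match a direct DFT of that window
x = np.random.default_rng(1).standard_normal(64)
N = 8
for i, X in enumerate(sliding_dft_1d(x, N)):
    assert np.allclose(X, np.fft.fft(x[i:i + N]))
print("all windows match")
```

This mirrors the paper's claim of mathematical equivalence to the direct DFT at all positions.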

Anita Kuchan, D. J. Tuptewar, Sayed Shoaib Anwar, Sachin P. Bandewar
Automated Lung Nodules and Ground Glass Opacity Nodules Detection and Classification from Computed Tomography Images

The lung cancer health care community depends on Computer-Aided Detection systems to draw useful details from Computed Tomography (CT) lung images. Nodule growth rate indicates the severity of the disease and can be periodically analyzed by radiologists through nodule segmentation and classification. The main challenges in analyzing nodule growth rate are that lung nodules of different types require special segmentation methods and that nodules have irregular shapes and boundaries. In this paper, an automatic three-phase framework for the detection and classification of lung nodules and ground glass opacity nodules is proposed. The segmentation framework uses a proposed automatic region growing algorithm that automatically selects a set of black pixels as seed points from the output binary image for lung parenchyma segmentation, followed by artifact removal to reduce the disease search space. Nodules are segmented based on identification of the nodule candidates' center pixels and the intensity features of the candidates. Segmented nodules are classified using an SVM classifier, and the classification results are compared with the other classifiers considered: KNN, boosting, and decision tree. In the evaluation step, the SVM classifier's performance was found to be outstanding compared to the other classifiers considered in this work. Complete automation of nodule detection in very little time is the key feature of the proposed method. CT images are taken from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI) public database to evaluate the performance of the proposed work. An accuracy of 98% (45/46) with less computational time is achieved. The experimental results demonstrate that the proposed method achieves efficient and accurate segmentation of lung nodules and ground glass opacity nodules with less computation time.

Vijayalaxmi Mekali, H. A. Girijamma
A Review—Edge Detection Techniques in Dental Images

Edge detection plays an important role in digital image processing applications. The main aim of edge detection is to identify the discontinuities in images, where sharp changes in intensity take place. This research work presents edge detection techniques for dental X-ray images (panoramic radiograms), which help to separate teeth individually for better classification and identification of diseases. The objective is to study and compare various algorithms: Sobel, Prewitt, Canny, multiple morphological gradient (mMG), line analyzer, neural network, genetic algorithm, infinite symmetric filter (ISF), multi-scale and multi-directional analysis with statistical thresholding (MMST), and a fuzzy logic approach for edge detection in dental X-ray images. There are many difficulties in finding diseases from panoramic dental images alone, and edge detection is introduced to overcome them. Some of the dental diseases that require edge detection for their identification are discussed. The results are compared based on the capability to detect diseases accurately and the total number of diseases detected from the dental images using edge detection.
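
Of the surveyed operators, Sobel is the simplest to sketch: two 3 × 3 kernels estimate the horizontal and vertical gradients, whose magnitude marks intensity discontinuities. The toy step-edge image below is illustrative only, not a radiogram.

```python
import numpy as np

def sobel_magnitude(img):
    """Convolve with the horizontal and vertical Sobel kernels and combine
    the two gradient responses into an edge-magnitude map (valid region)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)   # horizontal intensity change
            gy[i, j] = np.sum(patch * ky)   # vertical intensity change
    return np.hypot(gx, gy)

# A vertical step edge produces a strong column of responses near column 4
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mag = sobel_magnitude(img)
print(np.argmax(mag[0]))   # response peaks adjacent to the step
```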

Aayushi Agrawal, Rosepreet Kaur Bhogal
Detection of Exudates and Removal of Optic Disk in Fundus Images Using Genetic Algorithm

Diabetic retinopathy is one of the serious and sight-threatening complications of diabetes. The main symptom of diabetic retinopathy is the presence of exudates, which appear as yellow flecks due to fluid that has seeped out of damaged capillaries. This causes the tissue in the retina to distend, resulting in hazy or unclear vision. If left untreated, diabetic retinopathy can cause blindness. Hence, segmentation of exudates is a vital process in retinal pathology. The proposed work involves accurate segmentation of exudates from retinal fundus images. Initially, K-means clustering is applied to the retinal images to separate the exudates and the optic disk. A genetic algorithm is used for accurate segmentation of the exudates, in which a fitness function is calculated to perform crossover between the segmented images obtained from K-means clustering. Before the mutation process, the grayscale image is converted into RGB channels; these three segmented channels are then combined by the mutation process to obtain the genetic algorithm output. The high-intensity region is determined to be the exudates and the low-intensity region the optic disk. The optic disk, which has the same intensity as the exudates, is eliminated using watershed segmentation. Finally, parameter validation is done after the morphological operations. This method was implemented on 10 images downloaded from the CHASE and STARE databases, and the accuracy improved to 94% compared with existing approaches.

K. Gayathri Devi, M. Dhivya, S. Preethi
Analysis of Feature Ranking Methods on X-Ray Images

Mycobacterium causes an infectious disease called tuberculosis, which can be diagnosed by its various symptoms such as fever, cough, etc. Tuberculosis can also be analyzed from the chest X-ray of the patient as read by an expert physician. The chest X-ray image contains texture and shape-based features which are extracted using image processing concepts. This paper presents the implementation of various feature weighting methods on the extracted features of X-ray images. These feature weighting methods are analyzed using a linear regression model and a Linear Discriminant Analysis (LDA) model. The performance of the various feature weighting methods is compared, and it is found that the accuracy of PCA-based weights using the linear regression model is 98.75%, which is better than the other methods.
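
The PCA-based weighting singled out above can be sketched as scoring each feature by its loading on the first principal component of the standardised feature matrix; the normalisation step and the synthetic three-feature data are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def pca_feature_weights(X):
    """Weight each feature by the magnitude of its loading on the first
    principal component (the direction of largest variance)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise features
    cov = np.cov(Xs, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues ascending
    first_pc = vecs[:, -1]                      # largest-variance direction
    w = np.abs(first_pc)
    return w / w.sum()                          # normalise weights to sum 1

# Features 0 and 1 vary together; feature 2 is uncorrelated noise
rng = np.random.default_rng(0)
t = rng.standard_normal(200)
X = np.column_stack([t,
                     t + 0.05 * rng.standard_normal(200),
                     rng.standard_normal(200)])
w = pca_feature_weights(X)
print(w.argmin())   # 2 -- the noise feature receives the smallest weight
```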

H. Roopa, T. Asha
Salient Object Detection for Synthetic Dataset

Salient object detection essentially deals with various image processing and video saliency methodologies such as object recognition, object tracking, and saliency refinement. When image contains diverse object parts with cluttered background then using background prior we perform salient object detection through which we get more accurate and robust saliency maps. This paper introduces the analysis of salient object detection using synthetic dataset which also deals with negative interference of image that contains diverse object parts with cluttered background. Earlier study uses contrast prior but nowadays researchers use mainly boundary connectivity for improving the results. So, for detecting salient object we used four stages: first, we use SLIC superpixel method for image segmentation. Second, we use boundary connectivity which distinguishes the spatial layout of image region by considering image boundaries. Third, we use background measure and for reducing the noise in both foreground and background regions. Lastly, we use optimization framework through which we acquire a clean saliency map.

Aashlesha Aswar, Arati Manjaramkar
Atherosclerotic Plaque Detection Using Intravascular Ultrasound (IVUS) Images

The deposition of atherosclerotic plaque in arteries is a type of cardiovascular disease and a major cause of death. Mostly larger and high-pressure vessels such as the femoral, cerebral, renal, coronary, and carotid arteries are affected by atherosclerosis. Hence, characterization of the plaque distribution and its susceptibility to rupture is mandatory to judge the degree of risk and to plan treatment. Intravascular ultrasound (IVUS) is an ultrasound imaging modality which uses a unique catheter fitted with a miniaturized ultrasound probe connected to its lateral end. In this paper, segmentation of the IVUS image using the fast marching method (FMM) is performed, followed by feature extraction. Feature extraction methods such as local binary pattern (LBP), speeded up robust features (SURF), and histogram of oriented gradients (HOG) are used, and the resulting image is classified using a Euclidean distance classifier. Comparing the results of these feature extraction techniques, the LBP method is found most suitable for the detection of atherosclerotic plaque.
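
The LBP features found most suitable in the comparison can be sketched as the classic 8-neighbour code: each neighbour at least as bright as the centre sets one bit of an 8-bit texture code. The neighbour ordering chosen here is one common convention, not necessarily the paper's.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour Local Binary Pattern: each neighbour >= centre
    contributes one bit to the pixel's 8-bit code (valid region only)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:h - 1, 1:w - 1]
    for bit, (di, dj) in enumerate(offsets):
        neighbour = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

# Bright top row -> bits 0..2 set, all other neighbours darker than centre
img = np.array([[9, 9, 9],
                [1, 5, 1],
                [1, 1, 1]], dtype=np.uint8)
print(lbp_image(img)[0, 0])   # 0b00000111 = 7
```

A histogram of these codes over an image region is the texture descriptor typically fed to the classifier.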

A. Hari Priya, R. Vanithamani
Correlative Feature Selection for Multimodal Medical Image Fusion Through QWT

A novel image fusion technique is proposed in this paper to achieve an efficient and informative image by combining multiple medical images into one. This method accomplishes quaternion wavelet transform (QWT) as a feature representation technique and correlation metric for feature selection. Here, the correlation-based feature selection is accomplished to extract the optimal feature set from the sub-bands thereby to reduce the computational time taken for fusion process. QWT decomposes the source images first and then the feature selection process obtains an optimal feature set from the low-frequency (LF) as well as high-frequency (HF) sub-bands. The obtained optimal feature sets of both LF sub-bands and HF sub-bands are fused through low-frequency fusion rule and high-frequency fusion rule and the fused LF and HF coefficients are processed through IQWT to obtain a fused image. Various models of medical image are processed in the simulation and the performance is evaluated through various performance metrics and compared with conventional approaches to show the robustness and efficiency of proposed fusion framework.

J. Krishna Chaithanya, G. A. E. Satish Kumar, T. Ramasri
Early Detection of Proliferative Diabetic Retinopathy in Neovascularization at the Disc by Observing Retinal Vascular Structure

Proliferative Diabetic Retinopathy (PDR) is the advanced stage of Diabetic Retinopathy (DR), with a high risk of severe visual impairment. Neovascularization, in which abnormal vessels proliferate, is a common scenario at this stage. This paper describes a semi-automated method for early detection of PDR within a few disc diameters of the Optic Disc (OD) in retinal images. Detecting the center of the OD from segmented images is essential here because the approach focuses on Neovascularization at the Disc (NVD). On a window boundary a few pixels from the OD center, the widths of the major vessels are measured and the vessels counted. Finally, the major vessels are identified by distinct colors. The sensitivity and specificity results on a STARE dataset of 25 images are 0.86 and 0.87, respectively. The approach shows an average accuracy of 0.88.

Nilanjana Dutta Roy, Arindam Biswas
Finding Center of Optic Disc from Fundus Images for Image Characterization and Analysis

An automated method for center reference point extraction from retinal fundus images is essential for untroubled image mapping in medical image analysis, image registration, and verification. This paper proposes a spadework revealing a distinct reference point within the optic disc in the blood vessel structure of the human eye; analysis based on it would serve as an efficient preventive measure for any ocular disease and would strengthen image verification along with other extracted features of the human eye at low cost. The proposed method includes segmentation of the colored fundus images followed by removal of thin and tiny blood vessels which carry very little information. The removal process yields a few bounded polygonal structures, called rings, near the optic disc. From these structures, the cluster of junction points with the maximum number of members is found and a convex hull is constructed from it. Finally, calculating the centroid of the convex hull unveils the center of the optic disc. Experiments are done on the publicly available databases DRIVE, STARE, and VARIA, and the experimental results are compared to other standard methods available in the literature.

Nilanjana Dutta Roy, Arindam Biswas
A Robust Method for Image Copy-Move Passive Forgery Detection with Enhanced Speed

Forgery detection in images is presently one of the most fascinating research fields. Copy-move forgery is the most commonly used method of image forgery. A novel method is proposed in this paper, which is an effective and advanced method for detecting copy-move forgery. The proposed method is a block matching technique with reduced computation time and lower computational complexity, and the quality of the outcome is also improved. The image is segmented into overlapping blocks of fixed dimensions, and then the discrete cosine transform (DCT) is applied to each block to extract its features. The mean of each block is then obtained and compared with that of the other blocks to find similarities between blocks. The computational outcomes show that the proposed method robustly detects copy-move forgery with enhanced speed.
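
The block-matching pipeline described (overlapping blocks, DCT per block, comparison of block means) can be sketched as follows; the block size, tolerance, and hashing of the means are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of a square block, built from the 1-D cosine basis matrix."""
    n = block.shape[0]
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    return C @ block @ C.T

def copy_move_matches(img, b=8, tol=1e-6):
    """Slide a b x b window over the image, describe each block by the mean
    of its DCT coefficients, and report pairs of distinct blocks whose means
    agree within tol (candidate copy-move regions)."""
    h, w = img.shape
    feats = {}
    matches = []
    for i in range(h - b + 1):
        for j in range(w - b + 1):
            key = round(float(np.mean(dct2(img[i:i + b, j:j + b]))) / tol)
            if key in feats:
                matches.append((feats[key], (i, j)))
            else:
                feats[key] = (i, j)
    return matches

# Forged image: the block at (0, 0) is pasted again at (8, 8)
rng = np.random.default_rng(3)
img = rng.integers(0, 256, (16, 16)).astype(float)
img[8:16, 8:16] = img[0:8, 0:8]
print(((0, 0), (8, 8)) in copy_move_matches(img))   # True
```

Real systems sort lexicographically instead of hashing and filter matches by a minimum spatial offset to avoid trivial neighbours.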

Asif Hassan, V. K. Sharma
Feature Extraction and Classification of Epileptic EEG Signals Using Wavelet Transforms and Artificial Neural Networks

Marked by unpredictable seizures, epilepsy is the fourth most prevalent chronic neurological disorder. This neurodegenerative disorder can affect individuals of any category or age group, and the resulting seizures can be of any type. There is always a possibility of misjudging the symptoms as psychogenic nonepileptic events. Thus, in addition to common methods like functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), electroencephalography (EEG) is a useful tool to differentiate epilepsy from other neurodegenerative disorders. Unlike the other two techniques, which measure changes in blood flow to a certain part of the brain, EEG measures brain activity directly; hence, EEG is most widely used. The paper focuses on the conversion of time-domain brain signals into the time–frequency domain using wavelet transforms, followed by extraction of various statistical and nonlinear features. These features are then fed to the neurons of an artificial neural network (ANN), which indicates the presence of epilepsy in an individual.
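
The wavelet-feature stage can be illustrated with a single-level Haar transform, one of the simplest discrete wavelets; the statistical features and the toy signal below are illustrative assumptions (the paper does not commit to Haar specifically).

```python
import numpy as np

def haar_dwt(x):
    """Single-level orthonormal Haar wavelet transform: pairwise averages
    give the approximation band, pairwise differences the detail band."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def band_features(x):
    """Simple statistical features per band, of the kind fed to an ANN."""
    a, d = haar_dwt(x)
    return [a.mean(), a.std(), d.mean(), d.std()]

sig = np.array([2.0, 4.0, 6.0, 8.0])
a, d = haar_dwt(sig)
print(np.round(a * np.sqrt(2)), np.round(d * np.sqrt(2)))   # [ 6. 14.] [-2. -2.]
```

Being orthonormal, the transform preserves signal energy across the two bands, which is why band-wise statistics summarize the signal without loss.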

Upasana Chakraborty, R. Mary Lourde
3D Printed Surgical Guides in Orthognathic Surgery—A Pathway to Positive Surgical Outcomes

Rapid advancements in robotics and computer-aided surgery have revolutionized the field of medicine. Maxillofacial surgery has benefited from these technological advances, as significant contributions have been made to the management of complex soft tissue and bony pathologies. These technologies are especially useful in patients with post-traumatic defects and in cases of esthetic facial deformity. Defects in the craniofacial skeleton are of congenital, developmental, traumatic, or pathological etiology. The primary purpose of correcting facial anomalies is functional rehabilitation; the esthetic rehabilitation of a patient is very challenging. It is important to achieve excellent postoperative form and function and also to minimize operative and postoperative morbidity. Rapid prototyping biomodels in recent use play a very significant role not only in patient education and diagnosis of defects but also in surgical planning. Their use in surgical planning reduces anesthesia time, reduces operating time, and provides better esthetic and functional results. The integration of 3D imaging and computerized surgery continues to bring newer and better changes to conventional surgery, making the outcome much more beneficial to the patient [1]. We present a case of facial deformity in the form of maxillary excess and retrogenia corrected using orthognathic surgery supported by surgical guides fabricated using additive manufacturing.

Chitra Chakravarthy, Sanjay Sunder, Santosh Kumar Malyala, Ammara Tahmeen
Image-Based Method for Analysis of Root Canal Geometry

Image-based techniques of observation and measurement are becoming popular due to their accuracy and accessibility. In dentistry, endodontic files are used for root canal shaping and cleaning. Manufacturers design and fabricate different endodontic files, and endodontists examine their performance using artificial root canals: after shaping, the endodontic blocks are traditionally compared by qualitative observation. This article presents a methodology based on image analysis of the artificial root canal for examining and comparing the shaping ability of endodontic files. In the proposed method, images of the artificial root canal were processed in a custom-made MATLAB® program to quantify root canal deviation and root canal transportation, the latter quantified by calculating root canal curvature. The performance of a 20/0.02 NiTi hand file and a 20/0.04 NiTi rotary file for glide path preparation was compared. Twenty endodontic blocks were divided into two experimental groups for canal preparation. Pre- and post-treatment root canals were compared regarding canal curvature modification and deviation for the 20/0.02 NiTi hand file with self-adjusting file (SAF) and the 20/0.04 NiTi rotary file with SAF using a t-test at a 95% confidence interval; the pre- and post-treatment images were processed in the custom-made MATLAB® program to find the canal transportation and deviation. A significant difference between the hand file and rotary file groups was found: canal transportation with the 20/0.02 NiTi hand file was significantly greater than with the 20/0.04 NiTi rotary file.

Ankit Nayak, Prashant K. Jain, P. K. Kankar, Niharika Jain
Analysis of Explicit Parallelism of Image Preprocessing Algorithms—A Case Study

The need for image processing algorithms is inevitable in the present era, as every field involves the use of images and videos. The performance of such algorithms can be improved by parallelizing the tasks in the algorithm. There are different ways of parallelizing an algorithm, such as explicit parallelism, implicit parallelism, and distributed parallelism. The proposed work analyzes the performance of explicit parallelism for an image enhancement algorithm, median filtering, on a multicore system. The explicit parallelism is implemented using MATLAB. The performance analysis is based on primary measures such as speedup and efficiency.

S. Raguvir, D. Radha
A Comprehensive Study on Character Segmentation

In identifying the characters from a given image, character segmentation plays an important role. In a given line of text, first, we have to segment the words. Then, in each word there will be a character-by-character segmentation. There have been some rapid developments in this area. Many algorithms have been implemented to increase the accuracy range and decrease the word error rate. This paper aims to provide a review of some of the developments that have happened in this domain.
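
A common baseline that the surveyed methods improve on is projection-profile segmentation: summing foreground pixels per column of a binarized text line and cutting at empty columns. The toy two-glyph line below is an illustrative assumption.

```python
import numpy as np

def segment_columns(binary):
    """Vertical projection profile: columns with no foreground pixels
    separate characters; return (start, end) column spans per character."""
    profile = binary.sum(axis=0)
    spans, start = [], None
    for j, count in enumerate(profile):
        if count > 0 and start is None:
            start = j                       # character begins
        elif count == 0 and start is not None:
            spans.append((start, j))        # character ends at empty column
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans

# Two "characters" separated by an empty column
line = np.zeros((5, 7), dtype=int)
line[:, 1:3] = 1   # first glyph occupies columns 1-2
line[:, 4:6] = 1   # second glyph occupies columns 4-5
print(segment_columns(line))   # [(1, 3), (4, 6)]
```

Touching or overlapping characters break this baseline, which is precisely what the more elaborate segmentation algorithms in the survey address.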

Sourabh Sagar, Sunanda Dixit
EZW, SPIHT and WDR Methods for CT Scan and X-ray Images Compression Applications

The scanning rate of medical imaging tools has improved significantly owing to the arrival of CT, MRI and PET. For medical imagery, storing in less space without losing detail is vital, so an efficient technique is necessary for cost-effective storage. In this paper, wavelets are employed to perform decomposition, and the image is compressed using the Embedded Zero-Tree Wavelet (EZW), Set Partitioning in Hierarchical Trees (SPIHT) and Wavelet Difference Reduction (WDR) algorithms. These algorithms are applied to compress X-ray and CT images and compared using performance metrics. From the results, it is seen that the compression ratio is better with WDR for all the wavelets than with SPIHT and EZW. A high compression ratio of 82.47 is obtained with the Haar and WDR combination for the CT scan, whereas it is 32.89 for the Biorthogonal and WDR combination for the X-ray. The main objective of this paper is to find the optimal combination of wavelets and image compression techniques.

S. Saradha Rani, G. Sasibhushana Rao, B. Prabhakara Rao
Human Identification Based on Ear Image Contour and Its Properties

Identity management is the process of authenticating individuals by means of security objects (traits) to confirm whether a subject is permitted to access a secured property, whether private or public. Ear biometrics is one of the best solutions for such access control. In current security surveillance, the subject is identified passively, without their knowledge; ear recognition suits this setting well, since the human ear can be captured to verify whether the subject is authorized. Such a system is potentially suited to crowd management in places like bus stations, railway stations, temples and cinema theatres. An ear biometric system based on 2D ear image contours and their properties is proposed. In this article, three databases are taken as input, i.e. the IIT Delhi Database, the AMI Database and a VR Students Sample Database, and the enrolment and verification processes are carried out on these databases using contour features and their properties: bounding rectangle, aspect ratio, extent, equivalent diameter, contour area, contour perimeter, convexity check, convex hull and solidity. This approach takes less time to execute, and the FAR and FRR performance values obtained are nominal compared to other traditional mechanisms.

P. Ramesh Kumar, K. L. Sailaja, Shaik Mehatab Begum
Defocus Map-Based Segmentation of Automotive Vehicles

Defocus estimation plays a vital role in segmentation and computer vision applications. Most existing work uses the defocus map for segmentation, matting, decolorization and salient region detection. In this paper, we propose to use both the defocus map and wavelet-based grabcut for reliable segmentation of the image. A comparative analysis between the biorthogonal and Haar wavelet functions, grabcut and the defocus map is presented. Experimental results are promising; hence, this algorithm can be used to obtain the defocus map of a scene.

Senthil Kumar Thangavel, Nirmala Rajendran, Karthikeyan Vaiapury
Multi-insight Monocular Vision System Using a Refractive Projection Model

Recovering the depth information of a scene imaged from inside a patient's body is a difficult task with a monocular vision system. A multi-perception vision system is proposed as a solution in this work. The vision system of the camera is altered with a refractive projection model, and the developed lens model perceives the scene from multiple perspectives. Motion parallax is observed under the different lenses for a single shot captured through the monocular vision system. The presence of multiple lenses refracts the light from the scene at different angles. Eventually, the apparent object dimension is augmented with more spatial cues, which helps in capturing 3D information in a single shot. The affine transformations between the lenses are estimated to calibrate the multi-insight monocular vision system, and the geometrical model of the refractive projection is proposed. The multi-insight lens plays a significant role in spatial user interaction.

J. Mohamed Asharudeen, Senthil Kumar Thangavel
ONESTOP: A Tool for Performing Generic Operations with Visual Support

Learning programming languages and writing computer programs for different tasks is a difficult and time-consuming job, so modules are used to make programming easier and faster. Cloud computing enables applications to be accessed everywhere, and the 'ONESTOP' tool is intended to be provided to users under the 'Software as a Service' category. The paper provides directions for enabling this facility; it does not address the challenges of provisioning the tool on the cloud. Every module in ONESTOP consists of the operations under that category. The tool processes the input by removing fillers, identifying the operation to be performed using a trie data structure and synonym mapping, and displaying the result. Users need not write code or define functions: a simple sentence in English is sufficient to perform the task. The tool is easy to use and requires no programming knowledge. All operations are performed quickly, enhancing the tool's performance. A key aspect of ONESTOP is that it avoids programming errors and thus saves debugging time.
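The filler-removal, trie-lookup and synonym-mapping steps described above can be sketched as follows; the filler words, synonyms and operation names here are illustrative, not the tool's actual vocabulary:

```python
"""Sketch of ONESTOP-style command resolution: strip filler words,
normalize synonyms, then look each remaining word up in a trie of
known operations."""

FILLERS = {"please", "the", "a", "an", "of", "me"}      # illustrative
SYNONYMS = {"sum": "add", "plus": "add", "times": "multiply"}

class Trie:
    def __init__(self):
        self.children, self.op = {}, None
    def insert(self, word, op):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.op = op
    def find(self, word):
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return None
        return node.op

def resolve(sentence, trie):
    """Return the first operation recognized in the sentence, if any."""
    for w in sentence.lower().split():
        if w in FILLERS:
            continue
        w = SYNONYMS.get(w, w)      # map synonyms to canonical names
        op = trie.find(w)
        if op is not None:
            return op
    return None
```

For example, `resolve("Please sum the numbers", trie)` finds the operation registered under "add" once "sum" is mapped through the synonym table.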

Gowtham Ganesan, Subikshaa Senthilkumar, Senthil Kumar Thangavel
Performance Evaluation of DCT, DWT and SPIHT Techniques for Medical Image Compression

Medical imaging is the visual representation of the interior of the human body in digital form. There is a need to compress these medical images for storage and transmission while retaining high image quality for error-free diagnosis of diseases, and hence to develop new compression techniques that reduce the cost of data storage and transmission. This paper proposes a progressive, DWT-based Set Partitioning in Hierarchical Trees (SPIHT) algorithm for achieving better image compression while maintaining the quality of the restored image. The performance of the SPIHT algorithm is evaluated and compared with plain DCT and DWT using parameters such as Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE) and compression ratio (CR). From the results it is concluded that the SPIHT algorithm produces better PSNR and MSE values than DCT and DWT, with high image quality.
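The evaluation metrics named above (MSE, PSNR and CR) have standard definitions for 8-bit images, sketched here in NumPy:

```python
"""MSE, PSNR and compression ratio for 8-bit images."""
import numpy as np

def mse(original, restored):
    """Mean squared error between two images of the same shape."""
    diff = original.astype(float) - restored.astype(float)
    return float(np.mean(diff ** 2))

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = mse(original, restored)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def compression_ratio(original_bytes, compressed_bytes):
    """CR = uncompressed size / compressed size."""
    return original_bytes / compressed_bytes
```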

M. Laxmi Prasanna Rani, G. Sasibhushana Rao, B. Prabhakara Rao
Structural Health Monitoring—An Integrated Approach for Vibration Analysis with Wireless Sensors to Steel Structure Using Image Processing

Structural health monitoring (SHM) of steel structures with wireless sensors has become an active research area in structural engineering owing to its low cost and wide range of applications. Such systems have been used to monitor structural behaviour and have the potential to extend structural life and improve public safety. In this paper, low-cost wireless vibration sensors are deployed on a steel frame structure for vibration analysis using microcontrollers and data acquisition systems. Tests were carried out to interface a wireless sensor with an Arduino, send the sensor data to a PC over a serial link and visualize the data as a signal output using image processing software. This paper explains a real-time deployment of SHM using sensor networks.

C. Harinath Reddy, K. M. Mini, N. Radhika
An Improved Image Pre-processing Method for Concrete Crack Detection

Structural health monitoring of concrete structures has gained attention in recent years due to advancements in technology. Methods based on acoustic, ultrasonic and image processing inspection have been deployed to assess concrete structures. In this paper, work has been carried out to monitor the health of laboratory-scale concrete objects using vision-based inspection. The objective is to provide a modified image pre-processing algorithm for accurate concrete crack detection. Image processing algorithms reviewed from the existing literature were implemented and tested to detect cracks on the surface of a 15 × 15 × 15 cm concrete cube. Due to random unevenness on the surface of concrete blocks, designing an accurate and robust algorithm is difficult and challenging. The developed algorithm was applied to different images of concrete cubes, and receiver operating characteristic analysis and computation time analysis are discussed along with result images. To validate the applicability of the developed algorithm, test results of crack detection on practical crack images are presented. The algorithm was developed in Python with the OpenCV library for image processing functions.

Harsh Kapadia, Ripal Patel, Yash Shah, J. B. Patel, P. V. Patel
Grape Crop Disease Classification Using Transfer Learning Approach

Grape is an important fruit crop that is affected by several diseases. The advent of digital cameras and machine learning based approaches has facilitated the recognition of plant diseases. The Convolutional Neural Network (CNN) is one type of architecture used in deep learning, and AlexNet, a CNN, is used in this study for the classification of three diseases along with healthy leaf images obtained from the PlantVillage dataset. A transfer learning approach is used in which the pretrained AlexNet is fed with 4063 images of the above categories. The model achieved a classification accuracy of 97.62%. Feature values from different layers of the same network are extracted and applied to a Multiclass Support Vector Machine (MSVM) for performance analysis. Features from the Rectified Linear Unit (ReLU 3) layer of AlexNet applied to the MSVM achieved the best classification accuracy of 99.23%.

K. R. Aravind, P. Raja, R. Aniirudh, K. V. Mukesh, R. Ashiwin, G. Vikas
Exploring Image Classification of Thyroid Ultrasound Images Using Deep Learning

Deep learning has been at the forefront of numerous medical imaging applications thanks to its versatility and robustness in deployment. In this paper, we explore classification methodologies for datasets too small to train a deep learning algorithm from scratch. Thyroid ultrasound images are classified using a small CNN trained from scratch, and by transfer learning and fine-tuning of Inception-v3 and VGG-16. We present a comparison of the aforementioned methods in terms of accuracy, sensitivity, and specificity.

K. V. Sai Sundar, Kumar T. Rajamani, S. Siva Sankara Sai
Medical Applications of Additive Manufacturing

Additive manufacturing (AM), also known as "3D printing," creates a physical model by constructing successive layers from the input material, each layer attached to the preceding one. The medical industry requires an error-free, exact anatomy of the patient for diagnosis, surgical planning, surgical guides, implants, etc. Since every patient has unique anatomy, AM is a natural fit for the medical industry: an AM medical model can be customized to each patient's specific requirements, which is not easy with conventional manufacturing. The current work explains the importance of the AM medical model in the medical industry for various scenarios.

A. Manmadhachary, Santosh Kumar Malyala, Adityamohan Alwala
A Study on Comparative Analysis of Automated and Semiautomated Segmentation Techniques on Knee Osteoarthritis X-Ray Radiographs

Arthritis is one of the most common diseases in the worldwide population, targeting the knee, neck, hand, hip, and almost all joints of the human body. It is frequently noticed in elderly people, especially women. The severity of the disease is traditionally analyzed using the Kellgren–Lawrence (KL) grading system, with the various grades of OA (osteoarthritis) interpreted by visual examination alone. X-ray images, a traditional modality, are used as the data for this work. The images are segmented using different segmentation techniques to extract the articular cartilage as the region of interest. From the literature, eight different segmentation techniques were identified, of which seven are automated and one is semiautomated. By implementing these techniques and evaluating their performance, it is inferred that block-based segmentation, center rectangle segmentation, and the semiautomated seed point selection segmentation perform well, providing sensitivity, positive predictive value and Dice–Sørensen coefficient of 100%, respectively, and specificity of 0%.
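The overlap metrics used in this comparison (sensitivity, positive predictive value, Dice–Sørensen coefficient) have standard definitions for binary segmentation masks, sketched here in NumPy:

```python
"""Segmentation-overlap metrics for binary masks."""
import numpy as np

def overlap_metrics(pred, truth):
    """Return (sensitivity, PPV, Dice) for predicted vs. ground-truth masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)           # true positives
    fp = np.sum(pred & ~truth)          # false positives
    fn = np.sum(~pred & truth)          # false negatives
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, ppv, dice
```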

Karthiga Nagaraj, Vijay Jeyakumar
Plant Disease Detection Based on Region-Based Segmentation and KNN Classifier

Plant disease detection is the technique of detecting disease from plant leaves. It involves several steps: textural feature analysis, segmentation, and classification. This paper is based on plant disease detection using a KNN classifier with the GLCM algorithm. In the proposed method, the input image is preprocessed, the GLCM algorithm is applied for textural feature analysis, k-means clustering is applied for region-based segmentation, and a KNN classifier is applied for disease prediction. The proposed technique is implemented in MATLAB, and simulation results show up to 97% accuracy.

Jaskaran Singh, Harpreet Kaur
Analyzing e-CALLISTO Images: Sunspot Number, 10.7 cm Flux and Radio Solar Bursts

This paper investigates the sun's activity based on sunspot images, using time series of sunspot numbers and absolute 10.7 cm flux to establish relationships between these indices and provide possible forecasts. These indices are also compared with the monthly number of solar bursts detected by the Mauritian Radio Telescope, which forms part of the e-CALLISTO network.

R. Sreeneebus, Z. Jannoo, N. Mamode Khan, C. Monstein, M. Heenaye-Mamode Khan
A Homogenous Prototype Design for IED Detection Using Subsurface Ground-Penetrating RADAR

Security is a prime factor in a nation's peace, and terrorists increasingly use Improvised Explosive Devices (IEDs) in their attacks. This paper presents a method to locate, characterize, and identify IEDs in landmines using a subsurface technology, Ground-Penetrating RADAR, with electromagnetic imaging. With this technique, the homogeneous nature of the soil, its moisture content, and the dielectric constants of air, soil, and explosives are analyzed. To measure the attenuation and scattering losses of an underground IED, a homogeneous 2D prototype model is designed with TNT explosive as the IED. By simulation, the scattering loss and attenuation are characterized over the step-frequency range from 0.5 to 3 GHz; the optimal frequency to detect an IED in the subsurface scan was found to be 2 GHz. The experimental results show that interference is greater in dry soil than in wet soil and that the dimension of the landmine is directly proportional to the amount of scattering.

Alagarsamy Gautami, G. Naveen Balaji
A Study on Various Deep Learning Algorithms to Diagnose Alzheimer’s Disease

Alzheimer's disease (AD) is one of the most frequent types of dementia: a deterioration in mental ability severe enough to interfere with daily life, gradually affecting the brain and its capability to learn, think and communicate. The symptoms of AD develop over time and become a major brain disease over the course of several years. To extract patterns from brain neuroimaging data, different statistical and machine learning approaches have been used to detect Alzheimer's disease in older adults in both clinical and research applications; however, differentiating the phases of Alzheimer's from healthy brain data has been difficult due to the similarity in brain atrophy patterns and image intensities. Recently, deep learning methods have been developing rapidly in numerous areas, including medical image analysis. This survey outlines deep learning-based methods used to differentiate Alzheimer's Magnetic Resonance Imaging (MRI) and functional MRI from normal healthy control data.

M. Deepika Nair, M. S. Sinta, M. Vidya
Performance Analysis of Image Enhancement Techniques for Mammogram Images

Mammography is a technique that uses X-rays to take mammographic images of the breast, but identifying abnormalities in a mammogram is a challenging task. Many Computer-Aided Diagnosis (CAD) systems have been developed to aid the classification of mammograms, as they search digitized mammographic images for abnormalities such as masses and microcalcifications, which are difficult to identify, especially in dense breasts. The first step in designing a CAD system is preprocessing, the process of improving the quality of the image. This paper focuses on techniques for preprocessing mammogram images to improve their quality for early diagnosis. Preprocessing involves filtering the image and applying image enhancement techniques such as Histogram Equalization (HE), Adaptive Histogram Equalization (AHE), Contrast-Limited Adaptive Histogram Equalization (CLAHE), Contrast Stretching, and Bit-plane slicing; filtering techniques such as mean, median, Gaussian and Wiener filters are also applied to the mammogram images. The performance of these image enhancement techniques is evaluated using quality metrics, namely Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Contrast-to-Noise Ratio.
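Global histogram equalization (HE), the first of the enhancement techniques listed, can be sketched for an 8-bit grayscale image as follows:

```python
"""Classic global histogram equalization for an 8-bit image."""
import numpy as np

def histogram_equalize(img):
    """Map gray levels through the rescaled cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]           # first nonzero CDF value
    # rescale the CDF so the output spans the full 0..255 range
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

AHE and CLAHE apply the same mapping locally, per tile, with CLAHE additionally clipping the histogram to limit noise amplification.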

A. R. Mrunalini, J. Premaladha
A Study on Preprocessing Techniques for Ultrasound Images of Carotid Artery

Ultrasound imaging is widely used in the diagnosis of atherosclerosis. To precisely diagnose carotid plaque, the affected region should be segmented from the ultrasound image of the carotid artery, and many techniques have been used to identify plaque in ultrasound images. Image enhancement and restoration are important processes for acquiring high-quality images from noisy ones: when artery images are captured, noise is introduced, so preprocessing is the first step toward acquiring a high-quality image. The techniques involved in preprocessing are dealt with in this paper. Preprocessing involves filtering the image and removing the noise with various filtering techniques; salt-and-pepper and Gaussian noise in ultrasound images can be filtered using mean, median and Wiener filters. Salt-and-pepper noise is impulse noise introduced by the image acquisition mechanism, while Gaussian noise reflects the quality of the input sensor. In this paper, the performance of image enhancement techniques on ultrasound images is evaluated using quality metrics, namely Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).

Mounica S., Ramakrishnan S., Thamotharan B.
Fractional Reaction Diffusion Model for Parkinson’s Disease

The calcium ion (Ca2+), known as a second messenger, is involved in a variety of signalling processes and is directly linked with the intracellular calcium concentration ([Ca2+]), which is continuously remodelled for the survival of the nerve cell. Buffers, typically proteins, react with Ca2+ and significantly lower the intracellular [Ca2+] in the nerve cell. Numerous signalling processes in the mammalian brain are initiated at high levels of intracellular [Ca2+]. The voltage-gated calcium channel (VGCC) and the ryanodine receptor (RyR) work as sources of Ca2+ that initiate and sustain the signalling processes needed for the smooth functioning of cells. Parkinson's disease (PD) is a disorder of the central nervous system accompanied by alteration of this signalling process. In the present paper, a one-dimensional fractional reaction-diffusion model is considered to understand the physiological roles of buffer, VGCC, and RyR in view of PD.
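A generic form of such a model is sketched below; this is an illustration only, since the paper's exact flux and reaction terms are not reproduced here, so the buffer reaction rate $k^{+}$, buffer concentration $[\mathrm{B}]$ and channel flux terms $J_{\mathrm{VGCC}}$, $J_{\mathrm{RyR}}$ are assumptions:

```latex
\frac{\partial^{\alpha}[\mathrm{Ca}^{2+}]}{\partial t^{\alpha}}
  = D\,\frac{\partial^{2}[\mathrm{Ca}^{2+}]}{\partial x^{2}}
  - k^{+}[\mathrm{B}]\,[\mathrm{Ca}^{2+}]
  + J_{\mathrm{VGCC}} + J_{\mathrm{RyR}},
  \qquad 0 < \alpha \le 1,
```

where $\alpha$ is the fractional order of the time derivative ($\alpha = 1$ recovers classical diffusion) and $D$ is the diffusion coefficient of free calcium.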

Hardik Joshi, Brajesh Kumar Jha
Prediction-Based Lossless Image Compression

In this paper, a lossless image compression technique using prediction errors is proposed. To achieve better compression performance, a novel classifier which makes use of wavelet and Fourier descriptor features is employed. An artificial neural network (ANN) is used as the predictor, and an optimum ANN configuration is determined for each class of images. In the second stage, entropy encoding is performed on the prediction errors, which improves the compression performance further. The prediction process is made lossless by keeping the predicted values as integers at both the compression and decompression stages. The proposed method is tested on three datasets, namely CLEF med 2009, COREL1 k and standard benchmarking images. The proposed method yields good compression ratios in all these cases, and for standard images the compression ratios achieved are higher than those obtained by known algorithms.

Mohamed Uvaze Ahamed Ayoobkhan, Eswaran Chikkannan, Kannan Ramakrishnan, Saravana Balaji Balasubramanian
Audio and Video Streaming in Telepresence Application Using WebRTC for Healthcare System

Currently, health care in Lower- and Middle-Income Countries (LMICs) suffers from a shortage of trained physicians in rural areas. The development of certain technologies, particularly sensors, has given rise to mHealth. The main objective of this paper is to develop a telepresence application for a mobile platform. The functionality of the system includes the ability for a doctor to connect with a doctor or patient at a remote location using a mobile or web application, to diagnose the patient's disease remotely and to recommend follow-up care. There is a requirement for a low-cost, well-suited, easy-to-use video communication system. Web real-time communication (WebRTC) permits browsers to set up a peer connection to deliver information and media with real-time conferencing capabilities through simple JavaScript APIs.

Dhvani Kagathara, Nikita Bhatt
Review on Image Segmentation Techniques Incorporated with Machine Learning in the Scrutinization of Leukemic Microscopic Stained Blood Smear Images

This paper reviews N different methods that have been employed for detecting and classifying leukocytes and leukoblast cells. Blood cell images obtained through digital microscopes are taken as input to the algorithms reviewed. To extract the nucleus and cytoplasm of White Blood Cells (WBCs), the images undergo a variety of image segmentation techniques along with filtering, enhancement, edge detection, feature extraction, classification, and image recognition steps. Apart from image processing, the analysis and categorization of the leukemic images are handled using other machine learning techniques. The accuracy and correctness of the proposals were assessed using texture, color, contour, morphological, geometrical, and statistical features.

Duraiswamy Umamaheswari, Shanmugam Geetha
Detection of Gaze Direction for Human–Computer Interaction

Eye guide is an assistive communication apparatus designed for incapacitated or physically disabled individuals who are unable to move parts of their body, especially people whose communication is limited to eye movements. The prototype consists of a camera and a computer. The system recognizes gazes in four directions and performs the required user actions in the related directions; the detected eye direction can then be used to control applications. The facial regions that form the images are extracted using a skin color model and connected-component analysis. Once the eye regions are detected, tracking is performed. The system modules comprise image processing, a face detector, a face tracker, and eyeblink detection. The eye guide system can potentially serve as a computer input control device for disabled people with severe paralysis.

G. Merlin Sheeba, Abitha Memala
Road Detection by Boundary Extraction Technique and Hough Transform

Visual perception of road images captured by cameras mounted within a vehicle is the main element of an autonomous vehicle system, and road detection plays a vital role in the visual routing system of a self-governing vehicle. Effective detection of roads under varying illumination conditions can help prevent the majority of road accidents that occur currently. In the current study, a new method using a "boundary extraction" technique along with the "Hough transform" is proposed for effective road detection. Two different algorithms, one using "Canny edge detection" with the "Hough transform" and another using the "boundary extraction" technique with the "Hough transform", were implemented and tested on the same dataset. A comparison of the results showed that the algorithm using the "boundary extraction" technique worked better than the one using "Canny edge" detection.
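The Hough transform step shared by both pipelines can be sketched as a minimal accumulator in NumPy (real systems use an optimized library implementation): each edge pixel votes for the (rho, theta) parameters of every line that could pass through it, and peaks in the accumulator correspond to detected lines such as road boundaries.

```python
"""Minimal Hough line transform over a binary edge map."""
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate votes in (rho, theta) space; theta sampled per degree."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are >= 0
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag
```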

Namboodiri Sandhya Parameswaran, E. Revathi Achan, V. Subhashree, R. Manjusha
Investigating the Impact of Various Feature Selection Techniques on the Attributes Used in the Diagnosis of Alzheimer’s Disease

According to the Dementia India report 2010, over 3.7 million people are estimated to be affected by dementia, a number expected to double by 2030. Around 60–80% of those with dementia suffer from Alzheimer's disease. Neuropsychological tests are useful tools for the diagnosis of dementia, but diagnosis using machine learning in low- and middle-income settings is rarely studied. Various attributes are used for diagnosing dementia, and finding the prominent ones among them is a tedious job. Chi-squared, gain ratio, info gain and ReliefF filtering techniques are used to find the prominent attributes; cognitive score is identified as the most prominent attribute.

S. R. Bhagyashree, Muralikrishna
Detection of Sleep Apnea Based on HRV Analysis of ECG Signal

Sleep apnea is a breathing disorder that occurs during sleep. It causes serious health problems such as daytime sleepiness, fatigue and cognitive problems, coronary arterial disease, arrhythmias, and stroke; however, public awareness of this disease is extremely low. The most common type is obstructive sleep apnea (OSA). Polysomnography (PSG) is the most widely used technique to detect OSA, but OSA remains largely undiagnosed due to the inconvenient and costly PSG testing procedure at hospitals, in which a human expert has to monitor the patient overnight. Hence, there is a need for a new method to diagnose sleep apnea with efficient algorithms using a noninvasive peripheral signal. This work aims at detection of sleep apnea using the electrocardiogram (ECG) signal alone, taken from the free online Apnea-ECG database provided by PhysioNet/PhysioBank, which consists of 70 ECG recordings. Detailed time-domain, frequency-domain and nonlinear features are extracted from the RR intervals of the ECG signals to detect minutes in which sleep apnea occurs. The time-domain features mean HR (P = 0.0093, r = 0.3593) and mean RR interval in ms (P = 0.0003, r = 0.376), the frequency-domain features VLF power (%) (P = 0.00659, r = 0.1081) and HF power (%) (P = 0.00135, r = 0.41138), and the nonlinear feature SD1 (P = 0.00039, r = 0.18998) differ significantly between normal and apnea ECG. Supervised learning algorithms are then used to classify the ECG signal into normal and apnea data; the overall efficiency is 90.5%. Algorithms that use the ECG signal together with the respiratory signal would give more insight into the disease, and classification accuracy may also improve.
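Time-domain HRV features of the kind compared above (mean RR interval and mean heart rate, plus SDNN as a common companion measure) can be sketched from R-peak times as follows; the input here is assumed to come from a separate QRS detection step:

```python
"""Time-domain HRV features from a series of R-peak times."""
import numpy as np

def time_domain_hrv(r_peaks_s):
    """r_peaks_s: R-peak occurrence times in seconds (from a QRS detector)."""
    rr_ms = np.diff(r_peaks_s) * 1000.0     # successive RR intervals (ms)
    mean_rr = float(np.mean(rr_ms))
    sdnn = float(np.std(rr_ms, ddof=1))     # overall RR variability (ms)
    mean_hr = 60000.0 / mean_rr             # beats per minute
    return {"mean_rr_ms": mean_rr, "sdnn_ms": sdnn, "mean_hr_bpm": mean_hr}
```

Frequency-domain measures such as VLF and HF power are obtained from the power spectrum of the (resampled) RR series, and SD1 from a Poincaré plot of successive RR pairs.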

A. J. Heima, S. Arun Karthick, L. Suganthi
Performance Comparison of SVM Classifier Based on Kernel Functions in Colposcopic Image Segmentation for Cervical Cancer

Cervical cancer is the second most common cancer affecting women worldwide. It can be cured in almost all patients if detected and treated in time. The Pap smear test has been broadly used for the detection of cervical cancer, but it has several shortcomings, including its subjective nature, low sensitivity, and frequent retesting. To overcome these issues, colposcopy is used for visual inspection of the cervix with the aid of acetic acid; with proper magnification, abnormal cells can be identified. We therefore propose a method for automatic cervical cancer detection using segmentation and classification. In this work, several methods for detecting cervical cancer are discussed, using classification techniques such as K-means clustering, texture classification and the Support Vector Machine (SVM). The proposed work compares and determines the accuracy of five types of kernel functions, namely the polynomial, quadratic, RBF, linear, and Multi-Layer Perceptron kernels. The analysis shows that the Multi-Layer Perceptron kernel in the SVM classifier provides the best performance, with an accuracy of 98%.
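The five kernel functions compared can be written out directly; the hyperparameter defaults below (gamma, degree, and the MLP kernel's scale and offset) are illustrative, not the paper's tuned values:

```python
"""SVM kernel functions on feature vectors x and y."""
import numpy as np

def linear(x, y):
    return np.dot(x, y)

def polynomial(x, y, degree=3, c=1.0):
    return (np.dot(x, y) + c) ** degree

def quadratic(x, y, c=1.0):            # polynomial kernel of degree 2
    return (np.dot(x, y) + c) ** 2

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def mlp(x, y, scale=1.0, offset=-1.0):
    # sigmoid / multi-layer perceptron kernel
    return np.tanh(scale * np.dot(x, y) + offset)
```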

N. Thendral, D. Lakshmi
GPU Based Denoising Filter for Knee MRI

MRI is a popular technique for diagnosing muscle and skeletal disorders, especially of the knee. For accuracy in diagnosis, the Rician-noisy knee image needs to be filtered using an efficient denoising algorithm. In recent years, the spatial neighborhood bilateral filter has been explored by researchers for its capacity to retain edges and tissue structures; however, an increase in image resolution slows down the bilateral filter, effectively discouraging its use. This research work proposes a cost-effective accelerated solution by implementing a CUDA-based bilateral filter applied to a T2-weighted sagittal knee MRI slice, and suggests the use of GPU shared memory for optimized implementation and better speedup. The speedup achieved for the 3.96 Mpixel knee MR image is 114.27 times that of its CPU counterpart. The results indicate an average occupancy of 90.15% for an image size of 6302 pixels, indicating effective parallelization. Also, over varying Rician noise levels, the average PSNR achieved is 21.83455 dB, indicating good filter performance.

Shraddha Oza, Kalyani R. Joshi
Performance Evaluation of Audio Watermarking Algorithms Using DWT and LFSR

Advances in Internet technology have made encryption, copyright protection and authentication of audio, video, images, and documents essential. Watermarking techniques have been widely employed for copyright protection and authentication of digital media. Digital image watermarking techniques are extensively explored and found suitable for copyright protection of images, whereas audio watermarking techniques are less studied and need extensive research owing to the superior hearing ability of human beings. In this paper, we present an audio watermarking technique using the discrete wavelet transform (DWT) and linear feedback shift registers (LFSRs). The watermark signal is scrambled using the LFSR technique before being embedded into the audio signal; the maximal dispersion of power spectral density, a property of LFSRs, makes them suitable as scramblers, and an LFSR does not require a secret key for scrambling and descrambling the watermark signal. The sequences are embedded into the DWT coefficients of the audio signal. Experimental simulations were performed to evaluate the performance of the technique. Audio watermarking finds applications in ownership protection, tamper detection and authentication of music, military communication, voice-activated machines, and robots.
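An LFSR scrambler of the kind described can be sketched as follows; the register width, taps and seed here are illustrative (taps (0, 1) on a 4-bit register correspond to the maximal-length polynomial x^4 + x^3 + 1), not the paper's parameters. XORing the watermark bits with the LFSR output scrambles them, and XORing with the same sequence descrambles:

```python
"""Fibonacci LFSR bit generator and XOR scrambler for watermark bits."""

def lfsr_bits(seed, taps, width, n):
    """Generate n bits; seed is a nonzero initial state,
    taps are the bit positions XORed to form the feedback."""
    state = seed
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))  # shift in feedback bit
    return out

def scramble(bits, seed=0b1001, taps=(0, 1), width=4):
    """XOR the bit sequence with the LFSR key stream (self-inverse)."""
    key = lfsr_bits(seed, taps, width, len(bits))
    return [b ^ k for b, k in zip(bits, key)]
```

Because XOR is its own inverse, `scramble(scramble(bits))` recovers the original watermark bits.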

Ramesh Shelke, Milind Nemade
Biometric Image Encryption Algorithm Based on Modified Rubik’s Cube Principle

Multi-biometric systems are being installed in many large-scale biometric applications, most notably the UIDAI system in India. However, multi-biometric systems require the storage of multiple biometric templates for each user, which increases the risk to user privacy and of cybercrime. Among all biometrics, fingerprints are widely used in personal verification systems, from small-scale to large-scale organizations. Encryption of biometric images is very important due to the increasing use of insecure systems and networks for storage, transmission, and verification. In this paper, we propose an encryption algorithm based on a modified Rubik's cube principle for fingerprint biometric images. The secret key for encryption is obtained from the fingerprint image itself, which assists in key management and storage for subsequent decryption and verification. The Rubik's cube principle involves row and column rotation, i.e. changing pixel positions; a further bitwise XOR operation, through modifications of the secret key, adds to the security of the biometric images. Experimental simulations were performed to obtain the encryption parameters and to validate the algorithm against common attacks such as noise and compression. The proposed encryption algorithm may thus be applied to fingerprint images used in attendance monitoring systems, since it provides key management and does not involve complex calculations.
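The Rubik's-cube idea of row/column rotation followed by XOR diffusion can be sketched as below; the key schedule here is purely illustrative (the paper derives its key from the fingerprint image itself, which is not reproduced here):

```python
"""Rubik's-cube-style image encryption sketch: key-driven circular
shifts of rows and columns, then a bitwise XOR with a key value."""
import numpy as np

def encrypt(img, row_key, col_key, xor_key):
    out = img.copy()
    for r, k in enumerate(row_key):      # rotate each row by its key amount
        out[r] = np.roll(out[r], k)
    for c, k in enumerate(col_key):      # rotate each column by its key amount
        out[:, c] = np.roll(out[:, c], k)
    return out ^ xor_key                 # diffuse the pixel values

def decrypt(enc, row_key, col_key, xor_key):
    out = enc ^ xor_key                  # undo XOR first
    for c, k in enumerate(col_key):      # then undo columns, then rows
        out[:, c] = np.roll(out[:, c], -k)
    for r, k in enumerate(row_key):
        out[r] = np.roll(out[r], -k)
    return out
```

Decryption must undo the steps in reverse order: XOR, then column rotations, then row rotations.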

Mahendra V. Patil, Avinash D. Gawande, Dilendra
Leaf Disease Detection Based on Machine Learning

Agriculture is one of the most important factors on which a nation's economy depends. Disease in crops is very common, and the location and detection of disease in plant parts is of high importance in the agro-industry. It is therefore important to correctly identify diseases in the crop so that herbicides and treatments can be sprayed selectively, reducing wasteful use of chemicals. In this work, we present an approach that integrates image processing and machine learning to diagnose diseases from leaf images, drawing on quantitative plant physiology, imaging, and computer vision. Wavelets are a very popular tool in image processing algorithms. Texture features are used for crop recognition; the features used in this paper are mean, standard deviation, skewness, and kurtosis. We propose an approach to identify plant infection, i.e., plant disease, using a minimum distance classifier for disease classification. The proposed approach presents a way toward automated plant disease diagnosis on a large scale.
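The four texture statistics named above, together with a minimum-distance classifier, can be sketched as follows (a minimal sketch; the class-mean vectors are invented values, not the paper's trained model, and a constant pixel region with zero deviation is assumed not to occur):

```python
def texture_features(pixels):
    """Mean, standard deviation, skewness and kurtosis of a flat pixel list."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    skew = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3)
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * var ** 2)
    return [mean, std, skew, kurt]

def min_distance_classify(feats, class_means):
    """Assign the class whose mean feature vector is nearest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(class_means, key=lambda c: dist(feats, class_means[c]))

# Hypothetical class means for illustration only
class_means = {"healthy": [2.5, 1.1, 0.0, 1.6],
               "diseased": [9.0, 3.0, 1.0, 4.0]}
feats = texture_features([1, 2, 3, 4])
assert min_distance_classify(feats, class_means) == "healthy"
```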

Anish Polke, Kavita Joshi, Pramod Gouda
Cross Domain Recommendation System Using Ontology and Sequential Pattern Mining

A recommendation system helps filter information according to user interest and provides personalized suggestions. Recommendation systems are now emerging in many social networks and services such as Facebook, Twitter, and e-commerce. Cross-domain recommendation is one method of building a recommender in which knowledge gathered from different domains is used to recommend the items most similar to the user's search term. In this work, we extend cross-domain recommendation by finding the semantic similarity of items in an ontology, applying collaborative filtering, and recommending user-preferred items using the PrefixSpan algorithm. The similarity between items is computed using a modified Wpath method. Finally, we recommend the most preferred items and evaluate the system using performance measures such as F-score.
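A compact version of the PrefixSpan idea mentioned above (restricted to sequences of single items rather than the full itemset variant, and not tied to the authors' data) mines frequent sequential patterns by recursively projecting the database on each frequent prefix:

```python
def prefixspan(db, minsup):
    """Mine all sequential patterns with support >= minsup.

    db: list of sequences (lists of items).
    Returns a list of (pattern, support) pairs.
    """
    patterns = []

    def mine(prefix, projected):
        counts = {}
        for seq in projected:
            for item in set(seq):          # count each item once per sequence
                counts[item] = counts.get(item, 0) + 1
        for item in sorted(counts):
            if counts[item] < minsup:
                continue
            patterns.append((prefix + [item], counts[item]))
            # project each sequence on the suffix after the first occurrence
            suffixes = [s[s.index(item) + 1:] for s in projected if item in s]
            mine(prefix + [item], suffixes)

    mine([], db)
    return patterns

db = [list("abc"), list("ac"), list("bc")]
# With minsup=2, the result includes (['a', 'c'], 2), (['b', 'c'], 2), (['c'], 3)
result = prefixspan(db, 2)
assert (["a", "c"], 2) in result and (["c"], 3) in result
```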

S. Udayambihai, V. Uma
Identifying the Risk Factors for Diabetic Retinopathy Using Decision Tree

The role of data mining in the healthcare industry is to improve health systems, using data analytics to identify inefficiencies and improve care. Accuracy is important when it comes to patient care, and handling this huge amount of data improves the quality of the healthcare system. Diabetic retinopathy is a disease occurring mainly in patients with high blood sugar levels. When blood sugar levels are high, the blood vessels of the retina are damaged: they swell and leak, gradually affecting eyesight. The disease is often not detected in the early stages of diabetes, even though it affects eyesight from the beginning. Decision tree classification helps detect the problem at an initial stage and identify the risk factors that cause the disease, so that better treatment can be provided to affected patients. A detailed analysis of the results from the decision tree classifier is also presented in this work, and the decisive factors for diabetic retinopathy are concluded herewith.
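The split criterion at the heart of a decision tree can be sketched as follows: for each candidate threshold on a risk-factor attribute, the tree keeps the split that minimizes the weighted Gini impurity of the two child nodes (the values below are invented toy data, not the paper's patient dataset):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Best threshold on one attribute, by weighted Gini impurity of the split."""
    best_t, best_g = None, float("inf")
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        g = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g

# Toy example: a blood-sugar-like reading vs. a retinopathy label (0/1)
sugar = [5.0, 6.0, 9.0, 10.0]
label = [0, 0, 1, 1]
assert best_split(sugar, label) == (6.0, 0.0)   # threshold 6.0 separates perfectly
```

Recursively applying `best_split` over all attributes and stopping at pure nodes yields a full decision tree; the chosen attributes near the root are the candidate risk factors.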

Preecy Poulose, S. Saritha
Heavy Vehicle Detection Using Fine-Tuned Deep Learning

Heavy vehicles cause technical snags and traffic jams on streets, and accidents between heavy vehicles and other road users, for example pedestrians, often result in severe injuries to the weaker street users. Highway safety can be improved and traffic jams reduced by detecting heavy and overloaded vehicles on the highway, facilitating light motor vehicles such as cars and scooters. A model for heavy vehicle detection based on fine-tuned deep learning is proposed to deal with complex transportation scenes. The model comprises two parts, a vehicle detection model and a fine-grained vehicle classification model, with the detection step providing data for the classification model. Experiments show that a vehicle's make and model can be recognized effectively from traffic images using our method. Experimental results demonstrate that the proposed system detects heavy vehicles accurately in both simple and complex scenarios, in comparison with past vehicle detection systems.
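The two-stage structure described above (a detector feeding a fine-grained classifier) can be sketched schematically; the stub functions below merely stand in for the trained networks, whose architectures the abstract does not specify:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple      # (x, y, w, h) in pixels
    label: str      # coarse class from the detection stage

def detect_vehicles(image):
    """Stage 1 stub: would run the fine-tuned detection network on the image."""
    return [Detection((10, 20, 300, 180), "truck"),
            Detection((400, 50, 120, 80), "car")]

def fine_grained_classify(image, det):
    """Stage 2 stub: would classify the cropped detection into heavy/light."""
    return {"truck": "heavy", "bus": "heavy"}.get(det.label, "light")

def heavy_vehicles(image):
    """Pipeline: detect all vehicles, keep those classified as heavy."""
    return [d for d in detect_vehicles(image)
            if fine_grained_classify(image, d) == "heavy"]

assert [d.label for d in heavy_vehicles(None)] == ["truck"]
```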

Manisha Chate, Vinaya Gohokar
Using Contourlet Transform Based RBFN Classifier for Face Detection and Recognition

The face is a highly non-rigid object, yet face detection and recognition has become an essential part of biometric systems in the majority of applications. Numerous applications such as robots, tablets, surveillance systems, and cell phones rely on an efficient face detection and recognition technique running in the background for access control, and human-computer interaction systems use it for expression recognition, cognitive/emotional state estimation, and so on. With the increased need for security and the anticipation of spoofing attacks, many techniques have been proposed in the past to detect and recognize the face through a single facial feature or a combination of them, which is a challenging task given the complex nature of the background and the number of facial features involved. The proposed work uses a multi-resolution technique, namely the Contourlet transform, along with linear discriminant analysis for feature extraction, with the resulting features given to an RBFN classifier for effective classification. The proposed technique clearly outperforms other conventional techniques, with a recognition rate of nearly 99.2%. The observed results indicate a good classification rate in comparison with conventional techniques.
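A simplified sketch of the RBFN classification step (reduced here to a nearest-prototype rule with Gaussian activations; a full RBFN would also learn output-layer weights, and the feature vectors would come from the Contourlet/LDA stage rather than being hand-picked 2-D points):

```python
import math

def rbf(x, center, gamma=1.0):
    """Gaussian radial basis activation of feature vector x around a center."""
    sq = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-gamma * sq)

def classify(x, prototypes, gamma=1.0):
    """Pick the class whose prototype yields the highest RBF activation."""
    return max(prototypes, key=lambda c: rbf(x, prototypes[c], gamma))

# Illustrative 2-D prototypes (real features would be LDA projections)
prototypes = {"subject_A": (0.0, 0.0), "subject_B": (5.0, 5.0)}
assert classify((0.8, 0.3), prototypes) == "subject_A"
```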

R. Vinothkanna, T. Vijayakumar
Breast Cancer Recognition by Support Vector Machine Combined with Daubechies Wavelet Transform and Principal Component Analysis

A method for identifying abnormal mammary gland tumor images is presented to assist medical staff in finding patients with breast disease accurately and in a timely manner. A db2 wavelet transform and principal component analysis (with an optimal threshold selected) are used to extract effective features, a support vector machine (with an appropriate penalty parameter) is used to classify healthy and diseased samples, and 10-fold cross-validation is used to verify the classification results. The experimental results show that the method is feasible: the average sensitivity is 83.10 ± 1.91%, the average specificity is 82.60 ± 4.50%, and the average accuracy is 82.85 ± 2.21%.
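One level of the db2 (Daubechies-4) decomposition used for feature extraction can be sketched in plain Python with periodic boundary handling (a library such as PyWavelets would normally be used; the filter coefficients below are the standard db2 values, but this is a sketch, not the paper's pipeline):

```python
def dwt_db2(signal):
    """One-level db2 DWT with periodic extension.

    Returns (approximation, detail) coefficient lists, each half the input length.
    """
    s3 = 3 ** 0.5
    d = 4 * 2 ** 0.5
    h = [(1 + s3) / d, (3 + s3) / d, (3 - s3) / d, (1 - s3) / d]  # low-pass
    g = [h[3], -h[2], h[1], -h[0]]                                 # high-pass (QMF)
    n = len(signal)
    approx, detail = [], []
    for i in range(0, n, 2):                                       # downsample by 2
        approx.append(sum(h[k] * signal[(i + k) % n] for k in range(4)))
        detail.append(sum(g[k] * signal[(i + k) % n] for k in range(4)))
    return approx, detail

# Sanity check: a constant signal has zero detail and approximation sqrt(2) * value
a, dc = dwt_db2([1.0] * 8)
assert all(abs(x - 2 ** 0.5) < 1e-9 for x in a)
assert all(abs(x) < 1e-9 for x in dc)
```

In the described method, the approximation coefficients of a mammogram would feed PCA, and the reduced feature vectors would train the SVM under 10-fold cross-validation.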

Fangyuan Liu, Mackenzie Brown
Correction to: Breast Cancer Recognition by Support Vector Machine Combined with Daubechies Wavelet Transform and Principal Component Analysis

The Corresponding Author designation has been assigned to Mackenzie Brown, and the incorrect labeling of images in Figure 1 has been corrected. The correction chapter and the book have been updated with these changes.

Fangyuan Liu, Mackenzie Brown
Metadata
Title
Proceedings of the International Conference on ISMAC in Computational Vision and Bio-Engineering 2018 (ISMAC-CVB)
Editors
Durai Pandian
Xavier Fernando
Zubair Baig
Fuqian Shi
Copyright Year
2019
Electronic ISBN
978-3-030-00665-5
Print ISBN
978-3-030-00664-8
DOI
https://doi.org/10.1007/978-3-030-00665-5