
2016 | Book

Advances in Signal Processing and Intelligent Recognition Systems

Proceedings of Second International Symposium on Signal Processing and Intelligent Recognition Systems (SIRS-2015) December 16-19, 2015, Trivandrum, India

Edited by: Sabu M. Thampi, Sanghamitra Bandyopadhyay, Sri Krishnan, Kuan-Ching Li, Sergey Mosin, Maode Ma

Publisher: Springer International Publishing

Book series: Advances in Intelligent Systems and Computing


About this book

This edited volume contains a selection of refereed and revised papers originally presented at the Second International Symposium on Signal Processing and Intelligent Recognition Systems (SIRS-2015), December 16-19, 2015, Trivandrum, India. The program committee received 175 submissions. Each paper was peer reviewed by at least three independent referees of the program committee, and 59 papers were finally selected. The papers offer stimulating insights into biometrics, digital watermarking, recognition systems, image and video processing, signal and speech processing, pattern recognition, machine learning and knowledge-based systems. The book is directed to researchers and scientists engaged in various fields of signal processing and related areas.

Table of contents

Frontmatter

Biometrics/Digital Watermarking/Recognition Systems

Frontmatter
Emotion Recognition from Facial Expressions for 4D Videos Using Geometric Approach

Emotions are important for understanding human behavior. Emotion can be recognized from several modalities: text, speech, facial expression or gesture. Emotion recognition through facial expressions in video plays a vital role in human-computer interaction, where the facial feature movements that convey the expressed emotion must be recognized quickly. In this work, we propose a novel method for recognizing six basic emotions in 4D video sequences of the BU-4DFE database using a geometry-based approach. We selected key facial points from the 83 feature points provided in the BU-4DFE database. A video expressing an emotion contains frames covering the neutral, onset, apex and offset phases of that emotion. We identify the apex frame of a video sequence automatically. The Euclidean distance between corresponding feature points in the apex and neutral frames is determined, and these differences form the feature vector. The feature vectors formed for all emotions and subjects are given to Random Forests and a Support Vector Machine (SVM) for classification, and we compare the accuracies obtained by the two classifiers. Our proposed method is simple, uses only two frames and yields good accuracy on the BU-4DFE database. Using the computed distance vectors, we determined the optimum number of key facial points that provides a better recognition rate. The proposed method gives better results than those reported in the literature and can in future be applied to real-time implementation using the SVM classifier and kinesics.

V. P. Kalyan Kumar, P. Suja, Shikha Tripathi
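The geometric feature described in the abstract, per-landmark displacement between the neutral and apex frames fed to an SVM, can be sketched as follows. This is a minimal illustration on synthetic landmark data, not the BU-4DFE points or the authors' exact feature construction.

```python
import numpy as np
from sklearn.svm import SVC

def geometric_feature(neutral_pts, apex_pts):
    """Per-landmark Euclidean displacement between neutral and apex frames."""
    return np.linalg.norm(apex_pts - neutral_pts, axis=1)

# Toy stand-in for landmark data (the paper uses the 83 BU-4DFE points).
rng = np.random.default_rng(0)
n_points = 20
X, y = [], []
for label in (0, 1):                     # two pretend emotions
    for _ in range(30):
        neutral = rng.normal(size=(n_points, 3))
        # each "emotion" displaces landmarks by a characteristic amount
        apex = neutral + 0.5 * (label + 1) + 0.05 * rng.normal(size=(n_points, 3))
        X.append(geometric_feature(neutral, apex))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])   # train on half the samples
acc = clf.score(X[1::2], y[1::2])             # test on the other half
```

With real data, the neutral and apex frames would come from the automatic apex detection step described in the abstract.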
Fraudulent Image Recognition Using Stable Inherent Feature

Many techniques have recently been proposed to prove the authenticity and originality of images. This paper proposes a pixel-based forgery detection technique for identifying forged images, an effective method for finding tampering. The proposed method detects splicing and copy-move forgery by locating the forged components in the input image. Splicing is the process of copying a component from one image and pasting it into another image; copy-move forgery is the process of copying a component from an image and pasting it into another portion of the same image. To find the forged component in the input image, the noise variance remaining after denoising and SURF features are used. To locate a spliced component, image segmentation is performed before counting the components in the image; segmentation based on combined spectral and texture features is used, and fuzzy c-means clustering identifies the number of components. To locate copy-move forgery, SURF features are first detected and then extracted to find the similarity between keypoints. The experimental results show that the proposed method is very good at identifying whether an image is forged, and it is faster than state-of-the-art methods. Results from experiments on both manually edited images and visually realistic real images show the effectiveness of the proposed method.

Deny Williams, G. Krishnalal, V. P. Jagathy Raj
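The idea behind copy-move detection, finding regions of an image that duplicate other regions of the same image, can be illustrated with a naive block-matching baseline. Note this is a simpler stand-in for intuition only; the paper itself matches SURF keypoints, which also handles scaled or rotated copies.

```python
import numpy as np

def copy_move_blocks(img, block=8):
    """Flag pairs of identical non-overlapping blocks (a naive copy-move cue)."""
    seen, dupes = {}, []
    h, w = img.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            key = img[i:i + block, j:j + block].tobytes()
            if key in seen:
                dupes.append((seen[key], (i, j)))
            else:
                seen[key] = (i, j)
    return dupes

rng = np.random.default_rng(1)
img = rng.random((64, 64))
img[40:48, 40:48] = img[8:16, 8:16]   # simulate copy-move tampering
matches = copy_move_blocks(img)
```

Keypoint-based matching replaces the exact-block comparison with descriptor similarity, which is what makes the method robust to post-processing of the pasted region.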
Kernel Visual Keyword Description for Object and Place Recognition

Visual object and place recognition are among the most important problems in computer and mobile robotics; they have been used to tackle numerous applications via different techniques, as established in the literature. Combining machine learning techniques for learning objects with image descriptors that fully describe the image content is another vital approach in computer vision: the system can then learn and describe the structural features of objects or places more effectively, which in turn leads to correct recognition. This paper introduces a method that uses Naive Bayes to combine Kernel Principal Component Analysis (KPCA) features with Histogram of Oriented Gradients (HOG) features from the visual scene. In this approach, a set of SURF features and HOG features are extracted from a given image. The minimum Euclidean distance between all SURF features and the visual codebook, previously constructed by K-means, is computed and combined with the HOG features. A Support Vector Machine (SVM) was used for classification, and the results indicate that the KPCA-with-HOG method significantly outperforms the bag of visual keywords (BOW) approach on the Caltech-101 object dataset and the IDOL visual place dataset.

Abbas M. Ali, Tarik A. Rashid
Hardware Accelerator for Facial Expression Classification Using Linear SVM

In this paper, we present a hardware accelerator for facial expression classification using a One-Versus-All (OVA) linear Support Vector Machine (SVM) classifier. The motivation behind this work is to perform real-time classification of facial expressions into three classes: neutral, happy and pain, which could be used in an embedded system to facilitate automatic patient monitoring in hospital ICUs without personal assistance. Pipelining and parallelism, inherent qualities of FPGAs, have been utilized in our architecture to achieve optimal performance. For high accuracy, the architecture has been designed using the IEEE-754 single-precision floating-point data format. We performed the SVM training offline and used the trained parameters to implement the testing phase on a Field Programmable Gate Array (FPGA). Synthesis results show that the designed architecture operates at a maximum clock frequency of 200 MHz. A classification accuracy of 97.87% has been achieved when simulating the design with different test images. Thus, the designed OVA linear SVM architecture shows good performance in both speed and accuracy, facilitating real-time classification of facial expressions.

Sumeet Saurav, Sanjay Singh, Ravi Saini, Anil K. Saini
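The inference computation that such an accelerator implements is small: one dot product plus bias per class, followed by an argmax. A software sketch with hypothetical (made-up) trained parameters:

```python
import numpy as np

def ova_linear_svm_predict(x, W, b):
    """One-vs-all linear SVM inference: evaluate one hyperplane per class
    (a multiply-accumulate chain in the FPGA datapath) and take the argmax."""
    scores = W @ x + b
    return int(np.argmax(scores))

# Hypothetical parameters for 3 classes (neutral / happy / pain);
# real values would come from the offline SVM training the paper describes.
W = np.array([[ 1.0, -0.5],
              [-0.2,  0.8],
              [ 0.1,  0.1]])
b = np.array([0.0, -0.1, 0.05])
pred = ova_linear_svm_predict(np.array([2.0, 0.0]), W, b)
```

Because each class score is an independent multiply-accumulate chain, the per-class computations pipeline and parallelize naturally in hardware.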
Enhancing Face Recognition Under Unconstrained Background Clutter Using Color Based Segmentation

Face recognition algorithms have been extensively researched for the last three decades or so. Even after years of research, the developed algorithms achieve practical success only under controlled environments. Their performance usually drops under unconstrained scene conditions such as background clutter and non-uniform illumination. This paper explores the contrast in performance of standard recognition algorithms under controlled and uncontrolled environments. It proposes a way to combine three subspace learning algorithms, namely Eigenfaces (1DPCA), two-dimensional Principal Component Analysis (2DPCA) and Row-Column 2DPCA (RC2DPCA), with a color-based segmentation approach in order to boost recognition rates under unconstrained scene conditions. A series of steps extracts all possible facial regions from an image, after which the algorithm segregates the largest candidate for a probable face and puts a bounding box on the blob in order to isolate only the face. The proposed algorithms, formed by combining such segmentation methods, were found to obtain higher accuracy than standard recognition techniques. Moreover, the approach serves as a general framework into which more robust recognition techniques could be incorporated to achieve further boosted accuracies.

Ankush Chatterjee, Deepak Mishra, Sai Subrahmanyam Gorthi
Emotion Recognition from 3D Images with Non-Frontal View Using Geometric Approach

Over the last decade, emotion recognition has gained prominence for its applications in Human Robot Interaction (HRI), intelligent vehicles, patient health monitoring, etc. The challenges of emotion recognition from non-frontal images motivate researchers to explore further. In this paper, we propose a method based on geometric features, considering four yaw angles (0°, +15°, +30°, +45°) from the BU-3DFE database. The novelty of our work lies in identifying the most appropriate set of feature points and forming the feature vector using two different approaches. A neural network is used for classification. Among the six basic emotions, four (anger, happiness, sadness and surprise) are considered. The results are encouraging, and the proposed method may in future be extended to combinations of pitch and yaw angles.

D. KrishnaSri, P. Suja, Shikha Tripathi
Real-Time Automatic Camera Sabotage Detection for Surveillance Systems

Video surveillance is very common for security monitoring of premises and sensitive installations. Criminals tamper with surveillance camera settings so that their activities in the scene are not recorded properly, making the captured video frames useless. Forms of camera tampering/sabotage include: changing the normal view by turning the camera away from the scene; obstructing the lens by placing objects in front of the camera or spraying paint on it; and defocusing the lens by changing the camera focus settings or spraying water or some viscous fluid on it. Manual monitoring of surveillance systems has many limitations, such as human fatigue and lack of continuous monitoring. Hence, real-time automated analysis and detection of suspicious events have gained importance. In this paper, we propose an efficient algorithm for camera tamper detection based on background modeling, edge details, and foreground object size and movement. In our experimental setup, the results are encouraging, with high precision and a low false alarm rate. As the proposed method can process 320 × 240 resolution videos at 60-70 frames/sec, it can be implemented for real-time applications.

K. Sitara, B. M. Mehtre
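The two cues named in the abstract, deviation from a background model and loss of edge detail, can be sketched in a few lines. This is a simplified illustration with synthetic frames and ad-hoc thresholds, not the authors' algorithm.

```python
import numpy as np

def edge_energy(frame):
    """Crude edge strength via finite differences (a proxy for the
    edge-detail cue used to detect defocus or obstruction)."""
    return np.abs(np.diff(frame, axis=1)).sum() + np.abs(np.diff(frame, axis=0)).sum()

def tamper_score(background, frame):
    """Fraction of pixels deviating from the background model, plus the
    relative drop in edge energy; both rise when the camera is covered,
    moved, or defocused."""
    fg_ratio = np.mean(np.abs(frame - background) > 0.2)
    edge_drop = max(0.0, 1.0 - edge_energy(frame) / (edge_energy(background) + 1e-9))
    return fg_ratio, edge_drop

rng = np.random.default_rng(2)
bg = rng.random((60, 80))                                  # learned background model
normal = np.clip(bg + 0.01 * rng.normal(size=bg.shape), 0, 1)
covered = np.full_like(bg, 0.1)                            # obstructed lens: flat frame
```

A real system would maintain the background model adaptively over time and additionally reason about foreground object size and movement before raising an alarm.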
An Improved Approach to Crowd Event Detection by Reducing Data Dimensions

Crowd monitoring is a critical application in video surveillance. Crowd events such as running, walking, merging, splitting, dispersion and evacuation inform crowd management about the behavior of groups of people; detecting them provides an early sign of how people are behaving. However, crowd event detection in video is highly challenging because of non-rigid human body motions, occlusions, the unavailability of distinguishing features due to occlusions, the unpredictability of people's movements, and other factors. In addition, video is high-dimensional data, and analyzing it to detect events is further complicated. One way of tackling the huge volume of video data is to represent a video by a low-dimensional equivalent; however, reducing the data size must take into account the complex data structure and the events embedded in the video. To this end, we focus on detecting crowd events using Isometric Mapping (ISOMAP) and a Support Vector Machine (SVM). ISOMAP is used to construct a low-dimensional representation of the feature vectors, and an SVM is then used for training and classification. The proposed approach uses Haar wavelets to compute the Gray Level Co-occurrence Matrix (GLCM), and then extracts four statistical features (contrast, correlation, energy and homogeneity) at different levels of the Haar wavelet decomposition. Experimental results suggest that the proposed approach performs better than existing approaches.

Aravinda S. Rao, Jayavardhana Gubbi, Marimuthu Palaniswami
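The pipeline of GLCM texture features, ISOMAP dimensionality reduction, and SVM classification can be sketched as below. The GLCM statistics are computed by hand for a single horizontal offset, and the "textures" are synthetic; the paper extracts the features from Haar wavelet decompositions of video frames.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.svm import SVC

def glcm_features(img, levels=8):
    """Contrast, correlation, energy and homogeneity from a horizontal
    gray-level co-occurrence matrix."""
    q = np.minimum((img * levels).astype(int), levels - 1)
    P = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[a, b] += 1
    P /= P.sum()
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return [((i - j) ** 2 * P).sum(),                               # contrast
            ((i - mu_i) * (j - mu_j) * P).sum() / (si * sj + 1e-12),  # correlation
            (P ** 2).sum(),                                          # energy
            (P / (1 + np.abs(i - j))).sum()]                         # homogeneity

rng = np.random.default_rng(3)
X, y = [], []
for label in (0, 1):                      # two synthetic texture "events"
    for _ in range(30):
        img = rng.random((32, 32)) if label else rng.random((32, 32)) ** 3
        X.append(glcm_features(img))
        y.append(label)
X, y = np.array(X), np.array(y)

emb = Isomap(n_neighbors=35, n_components=2).fit_transform(X)  # low-dim representation
clf = SVC().fit(emb[::2], y[::2])
acc = clf.score(emb[1::2], y[1::2])
```

The large neighbor count here just keeps the toy neighborhood graph connected; with real crowd features, the neighborhood size is a tuning parameter of ISOMAP.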
Art of Misdirection Using AES, Bi-layer Steganography and Novel King-Knight’s Tour Algorithm

Nowadays, the need for data integrity and privacy is higher than ever. From small bank transactions to large-scale classified military information, these factors are prerequisites. Classified information can be deliberately faked by the sender with the help of a pseudo-secret in order to increase the level of security. Cryptography makes this information unreadable, and steganography hides it. This paper brings together steganography and Rijndael encryption, commonly known as AES. The secret image or text file is encrypted using the AES algorithm and then camouflaged into the pseudo-secret image using a proposed modified Knight's Tour-inspired chess-based algorithm that enhances data diffusion. The pseudo-secret image/text is then encrypted once again using AES and masked into the cover image/text using a checkerboard-based location map. Thus lossless, highly secure, nested-layer steganography is achieved with high complexity and high PSNR for various image extension types, as illustrated in the paper.

Sudharshan Chakravarthy, Vishnu Sharon, Karthikeyan Balasubramanian, V. Vaithiyanathan
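The Knight's Tour idea underlying the embedding order can be sketched as follows: a knight's tour of an n × n block gives a pseudo-random but reproducible order in which to scatter secret bits over pixels. This sketch computes a plain knight's tour with Warnsdorff-ordered backtracking; the paper's King-Knight's Tour variant modifies the move set.

```python
def knights_tour(n=8, start=(0, 0)):
    """Knight's tour of an n x n board: a pixel-visiting order usable
    for scattering secret bits over an image block."""
    moves = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def onward(p, visited):
        return [(p[0] + dx, p[1] + dy) for dx, dy in moves
                if 0 <= p[0] + dx < n and 0 <= p[1] + dy < n
                and (p[0] + dx, p[1] + dy) not in visited]

    def solve(pos, visited, tour):
        if len(tour) == n * n:
            return tour
        # try squares with the fewest onward moves first (Warnsdorff's rule)
        for q in sorted(onward(pos, visited),
                        key=lambda p: len(onward(p, visited))):
            visited.add(q)
            tour.append(q)
            if solve(q, visited, tour):
                return tour
            visited.remove(q)
            tour.pop()
        return None

    return solve(start, {start}, [start])

tour = knights_tour()
```

Embedding then walks the tour and writes one secret bit per visited pixel, so adjacent secret bits end up far apart in the image, which is the diffusion property the abstract refers to.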
Digital Watermarking Using Fractal Coding

This paper presents a watermarking method using fractals. In the method discussed here, the host image is encoded by the proposed fractal coding method. To embed the watermark evenly over the whole host image, specific range blocks are selected, and the scrambled watermark is inserted into them. Finally, the watermarked image is obtained by the fractal decoding method. Simulation results demonstrate the imperceptibility of the proposed scheme and its robustness against various attacks.

Rama Seshagiri Rao Channapragada, Munaga V. N. K. Prasad
A Cheque Watermarking System Using Singular Value Decomposition for Copyright Protection of Cheque Images

Recently, banking systems have all adopted online money transfer facilities, either through online transactions or NEFT cheques. Banks pay money through cash, demand drafts, online money transfer or cheques. Previously, physical cheques issued by bank customers flowed from the drawer to the drawee branch; this process consumed considerable time and increased the manual workload of employees. To overcome this problem, the Cheque Truncation System (CTS) was introduced. This system provides faster clearing and transmission of bank cheques issued by customers from the drawer branch account to the drawee branch account. CTS transfers the cheque in image format, and for this reason the cheque image needs copyright protection, which can be provided by digital watermarking techniques. Digital watermarking can be applied using various methods, such as frequency-domain, spatial-domain and computational intelligence techniques. This paper discusses a cheque watermarking system using Singular Value Decomposition for copyright protection of cheque images. The aim is to obtain a secure cheque image and watermark after the watermark extraction process at the drawee branch.

Sudhanshu Suhas Gonge, Ashok Ghatol
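A common SVD watermarking scheme, which may differ in detail from the paper's, adds a scaled watermark to the host's singular values: S' = S + αW. A sketch on a synthetic host, built with well-separated singular values so that the demo recovers the watermark exactly:

```python
import numpy as np

def svd_embed(host, wm, alpha=0.05):
    """Additive SVD watermarking: perturb the singular values by alpha * wm."""
    U, S, Vt = np.linalg.svd(host, full_matrices=False)
    return U @ np.diag(S + alpha * wm) @ Vt, S

def svd_extract(marked, S_orig, alpha=0.05):
    """Non-blind extraction: compare singular values against the originals."""
    Sw = np.linalg.svd(marked, compute_uv=False)
    return (Sw - S_orig) / alpha

rng = np.random.default_rng(4)
# synthetic "cheque image" with well-separated singular values
U, _ = np.linalg.qr(rng.normal(size=(32, 32)))
V, _ = np.linalg.qr(rng.normal(size=(32, 32)))
host = U @ np.diag(np.linspace(10.0, 1.0, 32)) @ V.T

wm = rng.random(32)                   # watermark: 32 values in [0, 1)
marked, S_orig = svd_embed(host, wm)
recovered = svd_extract(marked, S_orig)
```

The small α keeps the pixel-level change bounded by α·max(wm), which is what makes the watermark imperceptible; extraction here needs the original singular values, so this variant is non-blind.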
Biometric Watermarking Technique Based on CS Theory and Fast Discrete Curvelet Transform for Face and Fingerprint Protection

Nowadays, multibiometric systems are employed to overcome the limitations of unimodal biometric systems. The problem associated with multibiometric systems is designing techniques that offer security for biometric data. A biometric watermarking technique using compressive sensing (CS) theory and the Fast Discrete Curvelet Transform is proposed for face and fingerprint protection. Compressive sensing provides computational security to the watermark fingerprint image and is used to generate its sparse measurements. The proposed watermarking technique embeds these sparse measurements into the high-frequency curvelet coefficients of the host face image. A quantitative measure, the structural similarity index, is used for cross-verification between the original watermark fingerprint image and the reconstructed one. The watermarked face image and reconstructed watermark fingerprint image form a face-fingerprint multibiometric system used for two-level verification of individuals. The experimental results demonstrate that the proposed watermarking technique does not affect the verification and authentication performance of the multibiometric system.

Rohit Thanki, Komal Borisagar
Cancelable Fingerprint Cryptosystem Based on Convolution Coding

In this paper, we propose a fingerprint bio-cryptosystem with cancelability using the fuzzy commitment scheme. The minutiae of a fingerprint are transformed using a Delaunay triangulation net constructed from the fingerprint minutiae. These transformed features are then encrypted using convolution coding. During the decoding phase, the Viterbi algorithm is used to recover the codeword. Experimental evaluation on the FVC 2002 databases shows the credibility of the proposed method: the EER obtained is 1.66%, 1.89% and 6.87% for FVC 2002 DB1, DB2 and DB3 respectively.

Mulagala Sandhya, Munaga V. N. K. Prasad
A Bio-cryptosystem for Fingerprints Using Delaunay Neighbor Structures (DNS) and Fuzzy Commitment Scheme

The emergence of biometrics and its widespread use in authentication applications has made template protection an important task in recent years. In this paper, we propose a fingerprint bio-cryptosystem using the fuzzy commitment scheme. We construct Delaunay Neighbor Structures (DNS) from fingerprint minutiae points and develop a bit-string representation of the DNSs. A Bose-Chaudhuri-Hocquenghem (BCH) error correction code is used along with the bit string in the fuzzy commitment scheme to generate a secure template. During the decoding phase, BCH decoding is performed to recover the codeword. The EER obtained for the proposed method on FVC 2002 DB1, DB2 and DB3 is 1.43%, 1.79% and 5.89% respectively, which shows the credibility of the proposed method.

Mulagala Sandhya, Munaga V. N. K. Prasad
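The fuzzy commitment scheme used by both papers above can be sketched compactly: XOR a codeword (derived from a secret key) with the biometric bit string and store only the offset plus a hash of the key; a sufficiently similar query template then releases the key via error-correction decoding. This toy sketch uses a repetition code in place of the papers' BCH/convolution codes.

```python
import hashlib
import secrets

def rep_encode(bits, r=5):
    """Toy repetition code standing in for BCH: repeat each key bit r times."""
    return [b for b in bits for _ in range(r)]

def rep_decode(bits, r=5):
    """Majority vote per group of r bits."""
    return [int(sum(bits[i * r:(i + 1) * r]) > r // 2) for i in range(len(bits) // r)]

def commit(key_bits, bio_bits, r=5):
    """Store the XOR offset and a hash of the key -- never the template itself."""
    code = rep_encode(key_bits, r)
    offset = [c ^ b for c, b in zip(code, bio_bits)]
    tag = hashlib.sha256(bytes(key_bits)).hexdigest()
    return offset, tag

def decommit(offset, bio_bits, tag, r=5):
    """Recover the key if the query template is close enough to enrolment."""
    key = rep_decode([o ^ b for o, b in zip(offset, bio_bits)], r)
    return key if hashlib.sha256(bytes(key)).hexdigest() == tag else None

key = [secrets.randbits(1) for _ in range(16)]
bio = [secrets.randbits(1) for _ in range(80)]      # enrolment template bits
offset, tag = commit(key, bio)

# a fresh query template with a few bit errors still releases the key
query = bio[:]
query[3] ^= 1
query[40] ^= 1
```

In the papers, the bit string comes from Delaunay structures over the minutiae, and BCH or convolution codes tolerate far more realistic error patterns than a repetition code.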

Image/ Video Processing

Frontmatter
Improving the Feature Stability and Classification Performance of Bimodal Brain and Heart Biometrics

Electrical activities of the brain (electroencephalogram, EEG) and heart (electrocardiogram, ECG) have been proposed as biometric modalities, but the combined use of these signals appears not to have been studied thoroughly. The feature stability of these signals has also been a limiting factor for biometric usage. This paper presents results from a pilot study which reveal that the combined use of brain and heart modalities provides improved classification performance and, furthermore, that binaural brain entrainment improves the stability of the features over time. The classification rate increased from 92.4% to 95.1% for the neural network classifier and from 98.6% to 99.8% for LDA. The average standard deviation with binaural brain entrainment, using all the inter-session features from all subjects, was 1.09, compared with 1.26 without entrainment. This suggests improved stability of both the EEG and ECG features over time, resulting in higher classification performance. Overall, the results indicate that combining ECG and EEG gives improved classification performance and that binaural brain entrainment makes both the ECG and EEG features more stable over time.

Ramaswamy Palaniappan, Samraj Andrews, Ian P. Sillitoe, Tarsem Shira, Raveendran Paramesran
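Feature-level fusion of the two modalities, concatenating EEG and ECG feature vectors before classification with LDA, can be sketched on synthetic per-subject features. The dimensions and noise levels here are arbitrary stand-ins, not the study's features.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_subjects = 4
X, y = [], []
for s in range(n_subjects):
    eeg_mean = rng.normal(size=6)          # pretend per-subject EEG signature
    ecg_mean = rng.normal(size=4)          # pretend per-subject ECG signature
    for _ in range(20):                    # repeated recording sessions
        eeg = eeg_mean + 0.3 * rng.normal(size=6)
        ecg = ecg_mean + 0.3 * rng.normal(size=4)
        X.append(np.concatenate([eeg, ecg]))   # feature-level fusion
        y.append(s)
X, y = np.array(X), np.array(y)

lda = LinearDiscriminantAnalysis().fit(X[::2], y[::2])
acc = lda.score(X[1::2], y[1::2])
```

The point of fusion is that identity information missed by one modality can be supplied by the other; the stability question the paper studies is whether these per-subject signatures drift between sessions.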
Denoising Multi-coil Magnetic Resonance Imaging Using Nonlocal Means on Extended LMMSE

Denoising plays a key role in medical imaging: reliable estimation and noise removal are very important for accurate diagnosis, and should be done so that the original resolution is retained while valuable features are preserved. Multi-coil Magnetic Resonance Images (MRI) exhibit nonstationary noise following Rician and noncentral chi (nc-χ) distributions. Modern techniques that use multi-coil MRI, such as GRAPPA, yield nc-χ distributed data. There has been much research on the Rician case but little on the nc-χ distribution. The proposed method uses Nonlocal Means (NLM) on an extended Linear Minimum Mean Square Error (ELMMSE) estimator for denoising multi-coil MRI with nc-χ distributed data. The performance of the nonlocal scheme on multi-coil MRI is evaluated using PSNR, SSIM and MSE, and the results indicate that the proposed scheme is better than existing schemes including Nonlocal Maximum Likelihood (NLML), adaptive NLML and ELMMSE.

V. Soumya, Abraham Varghese, T. Manesh, K. N. Neetha
An Intelligent Blind Semi-fragile Watermarking Scheme for Effective Authentication and Tamper Detection of Digital Images Using Curvelet Transforms

A novel intelligent semi-fragile watermarking scheme for authentication and tamper detection of digital images is proposed in this paper. The scheme involves embedding and extraction of the quantized first-level Discrete Curvelet Transform (DCLT) coarse coefficients. The amount of quantization of the first-level coarse DCLT coefficients of the input image is decided intelligently based on the energy contribution of the coefficients. At the receiver side, the extracted and generated first-level coarse DCLT coefficients of the watermarked image are divided into blocks of uniform size. A feature similarity index value between each block of extracted and generated coefficients is compared, and if the difference exceeds a threshold, the block is marked as tampered. The watermarking scheme is blind and does not require any additional information to establish the authenticity of the watermarked image. Rigorous experiments reveal that the proposed method is more robust than the existing method [1] and achieves better accuracy in localizing tampered regions.

K. R. Chetan, S. Nirmala
Dental Image Retrieval Using Fused Local Binary Pattern & Scale Invariant Feature Transform

In dental biometrics, textural information very often plays a significant role in tissue characterization and the diagnosis of gum diseases, in addition to morphology and intensity. Failure to diagnose gum diseases in their early stages may lead to oral cancer. Dental biometrics has emerged as vital biometric information due to its stability, invariant nature and uniqueness. The objective of this paper is to improve classification accuracy based on fused LBP and SIFT textural features for the development of a computer-assisted screening system. The swift expansion of dental image collections has created the need for an efficient dental image retrieval system that retrieves images visually similar to a query image. This paper implements a dental image retrieval system using fused LBP and SIFT features, which identify gum diseases from the epithelial layer and classify normal dental images with about 91.6% accuracy, higher than that achieved with other features.

R. Suganya, S. Rajaram, S. Vishalini, R. Meena, T. Senthil Kumar
Block Based Variable Step Size LMS Adaptive Algorithm for Reducing Artifacts in the Telecardiology System

In this paper, an efficient Block Based Error Data Nonlinear Variable Step Size Least Mean Square (BBEDNVSSLMS) adaptive algorithm is used to reduce the noise present in the cardiogram signal. Heart attacks are a major health problem worldwide, and the problem is especially serious when patients are far from a medical diagnosis centre; in such cases, a telecardiology (ambulatory) system helps patients receive proper treatment in their own area. When the Electrocardiogram (ECG) signal is measured from the patient, it is corrupted by numerous noise sources. In this paper, we propose efficient BBEDNVSSLMS adaptive algorithms for reducing Power Line Interference (PLI) noise and Electrode Motion (EM) artifact noise. We also derive sign-based algorithms from the BBEDNVSSLMS algorithm, which have lower computational complexity. These algorithms are applied to corrupted ECG signals. Analysis of the simulated signal-to-noise ratio, convergence characteristics and mean square error values shows that these algorithms eliminate noise in the ECG signal better than the LMS algorithm.

Thumbur Gowri, P. Rajesh Kumar
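The adaptive noise cancellation setup underlying all LMS variants can be sketched with a basic normalized-LMS recursion: a reference noise channel is filtered to match the noise in the ECG channel, and the residual is the cleaned signal. This is the textbook baseline, not the paper's BBEDNVSSLMS algorithm, which adds block processing and a nonlinear variable step size on top of this recursion.

```python
import numpy as np

def nlms_cancel(primary, reference, order=8, mu=0.1, eps=1e-6):
    """Normalized LMS noise canceller: adapt an FIR filter so the filtered
    reference matches the noise in the primary (ECG + noise) channel;
    the residual e is the cleaned signal."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order - 1, len(primary)):
        x = reference[n - order + 1:n + 1][::-1]   # ref[n] ... ref[n-order+1]
        e = primary[n] - w @ x
        w += mu * e * x / (x @ x + eps)            # step normalized by input power
        out[n] = e
    return out

rng = np.random.default_rng(6)
t = np.arange(4000)
ecg = np.sin(2 * np.pi * t / 200)                      # crude stand-in for the ECG
ref = rng.normal(size=t.size)                          # reference noise pickup
noise = np.convolve(ref, [0.8, -0.4, 0.2])[:t.size]    # causal noise path to the ECG lead
cleaned = nlms_cancel(ecg + noise, ref)
```

Variable step-size schemes adapt μ over time, large while the error is big, small near convergence, to get both fast convergence and low steady-state misadjustment.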
Braille Tutor
A Gift for the Blind

Education is the basic right of every human being. There are 37 million blind people in the world, 15 million of them in India. Visually impaired people use the Braille language to write and read. Though there are many schools for the blind, the number of teachers who can teach effectively is very small; moreover, blind children cannot read or write without the help of a teacher. Thus, there is a need for technology that helps these people learn efficiently without anybody's help, making them self-dependent. To meet this need, the idea is to develop a product that can teach blind children the basic alphabet and numbers. The product also has data connectivity through a Bluetooth link so that teachers can monitor what the students are learning. The result is a complete working device, the Braille Tutor, with SpeakJet IC output through a speaker and Bluetooth connectivity for monitoring each device.

Anjana Joshi, Ajit Samasgikar
Performance Evaluation of S-Golay and MA Filter on the Basis of White and Flicker Noise

EEG is one of the most effective diagnostic techniques for evaluating the electrical activity of the brain, and a variety of filters are used to reduce the artifacts present in the EEG signal. The main objective of this paper is to compare the performance of two commonly used filters, the moving average and the Savitzky-Golay (S-Golay) filter. This is done by creating a synthetic signal and adding white or flicker (pink) noise to it. The analysis is based on the SNR of the final output, keeping the same number of reference points (frame length), and on the observed distortion ratio. Since the most important information of an EEG signal lies in its peaks, it is absolutely necessary to use a filter that not only removes the noise well but also introduces the least distortion from the original signal. The shape-preserving characteristics of the filters are determined at different noise levels, and peaks are detected.

Shivang Baijal, Shelvi Singh, Asha Rani, Shivangi Agarwal
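The peak-preservation contrast between the two filters is easy to demonstrate: with the same frame length, a moving average flattens a sharp peak, while a Savitzky-Golay fit of polynomial order 3 tracks it much more closely. A minimal sketch on a clean synthetic peak (the paper's comparison additionally adds white or flicker noise):

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(-1, 1, 201)
clean = np.exp(-t ** 2 / 0.02)          # sharp unit-amplitude "EEG" peak

window = 21                              # same frame length for both filters
ma = np.convolve(clean, np.ones(window) / window, mode="same")
sg = savgol_filter(clean, window_length=window, polyorder=3)

peak_err_ma = abs(ma.max() - 1.0)        # how much each filter flattens the peak
peak_err_sg = abs(sg.max() - 1.0)
```

The moving average is the degenerate polyorder=0 case of Savitzky-Golay, which is why raising the polynomial order recovers peak shape at the cost of slightly less noise suppression.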
An Efficient Multi Object Image Retrieval System Using Multiple Features and SVM

Retrieval of images containing multiple objects has been an active research area in recent years. This paper presents an efficient retrieval system for such images using shape, texture and edge histogram features together with a Support Vector Machine (SVM). The process starts with preparing the knowledge database: all images in the dataset are segmented, their shape, texture and edge histogram features are extracted, and a combined feature vector is generated from these multiple features. The features are used to train an SVM, all images in the database are classified into different classes, and this information is stored as the knowledge database. When the user enters a query image, it is segmented, its features are extracted, and the combined feature vector is generated. The SVM matches the query image's feature-vector class with the classes in the knowledge database, and the similarity distance is calculated between the feature vector of the query image and the images in the corresponding class. The images are sorted and displayed in ascending order of similarity distance. Experimental results demonstrate better retrieval efficiency.

Nizampatnam Neelima, E. Sreenivasa Reddy
Blood Cells Counting by Dynamic Area-Averaging Using Morphological Operations to SEM Images of Cancerous Blood Cells

In this paper, we describe a novel method for determining the complete blood cell count using region-based segmentation of highly magnified images obtained from a scanning electron microscope (SEM). It provides an efficient way of measuring blood cell counts and can also be used to diagnose the presence of many life-threatening diseases such as leukemia and allergies. In many cases, mainly due to disease and cell longevity, the actual sizes of micron-order blood cells appear to change. The proposed method therefore uses dynamic averaging of red blood cell (RBC) and white blood cell (WBC) areas, so that no a-priori (fixed) blood cell area is required. This is a significant step forward, since no fixed area cut-offs are used to separate WBCs and RBCs. The SEM images are preprocessed with a global threshold, significant noise is removed using morphological processing, and finally the areas are separated by dynamic averaging and the WBC and RBC counts are computed separately. Experimental data for different real samples from cancer patients show encouraging results.

Kanik Palodhi, Dhrubajyoti Dawn, Amiya Halder
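The dynamic-averaging idea, splitting labeled blobs into size classes by comparing each blob's area with the mean area rather than a fixed cut-off, can be sketched on a synthetic binary image. This simplified split uses the overall mean, whereas the paper averages the RBC and WBC areas separately.

```python
import numpy as np
from scipy import ndimage

def count_cells(binary):
    """Label connected components, then split them into 'small' and 'large'
    classes by comparing each blob's area with the mean area -- no fixed
    area cut-off is needed."""
    labels, n = ndimage.label(binary)
    areas = np.array(ndimage.sum(binary, labels, range(1, n + 1)))
    mean_area = areas.mean()
    small = int((areas <= mean_area).sum())    # e.g. RBC-sized blobs
    large = int((areas > mean_area).sum())     # e.g. WBC-sized blobs
    return small, large

# Synthetic binary image: four small discs ("RBCs") and two big ones ("WBCs")
img = np.zeros((100, 100), bool)
yy, xx = np.mgrid[:100, :100]
for cy, cx, r in [(20, 20, 4), (20, 60, 4), (60, 20, 4), (80, 80, 4),
                  (50, 50, 10), (80, 30, 10)]:
    img |= (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
small, large = count_cells(img)
```

In the full pipeline, this step runs after global thresholding and morphological noise removal, so the labeled blobs correspond to individual cells.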
Texture Guided Active Contour for Object Segmentation in Natural Images

Object segmentation based on the active contour approach uses an energy-minimizing spline to localize the object boundary. The major limitations of parametric and nonparametric active contour approaches are sensitivity to noise, high computation cost and poor convergence towards the boundary. To address these problems, this paper proposes a novel approach that utilizes the textural characteristics of the image to detect the boundary of the salient object. A saliency map is generated from the textural patterns of the image and convolved with a user-defined vector field kernel. The performance of the proposed approach is analyzed on three datasets: the Weizmann single object dataset, the MSRA-500 dataset and Brodatz texture images. The experimental results demonstrate that the proposed method converges effectively towards the boundary of the salient object.

Glaxy George, M. Sreeraj
Development of Tomographic Imaging Algorithms for Sonar and Radar

The tomographic principle is now widely known from medical imaging with CT scanners. Whereas CT scanning uses X-ray absorption tomography, new application areas such as sonar and radar imaging have recently been explored for reflection tomography. In this paper, we show that the same backprojection principle can be used for imaging underwater objects with sonar and ground objects with inverse synthetic aperture radar (ISAR). Analysis of image reconstruction results from experimentally collected data is presented in the paper.

Ashish Roy, Supriya Chakraborty, Chinmoy Bhattacharya

Signal and Speech Processing

Frontmatter
Design and Development of an Innovative Biomedical Engineering Application Toolkit (B.E.A.T. ®) for m-Health Applications

Obtaining on-the-go, medical-grade, reliable data in noisy environments is a challenge faced by developers across the globe. While transporting patients, mobile phones connected wirelessly to sensors have tremendous lifesaving applications. Recent trends in research have shown promise in the use of continuous biomedical signals for biometric applications. However, practical applications of such systems will be possible only if they can generate real-time medical-grade data in environments with severe noise. This paper details the design and development of B.E.A.T. ® - an innovative system which captures medical-grade physiological signals using two leads, with or without the use of pre-gelled electrodes, over Android OS.

Abhinav, Avval Gupta, Shona Saseendran, Abhijith Bailur, Rajnish Juneja, Balram Bhargava
A VMD Based Approach for Speech Enhancement

This paper proposes a Variational Mode Decomposition (VMD) based approach for the enhancement of speech signals distorted by white Gaussian noise. VMD is a data-adaptive method which decomposes the signal into intrinsic mode functions (IMFs) using the Alternating Direction Method of Multipliers (ADMM). Each IMF, or mode, contains a center frequency and its harmonics. This paper explores VMD as a speech enhancement technique. In the proposed method, the noisy speech signal is decomposed into IMFs using VMD. The noisy IMFs are enhanced using two methods: VMD-based wavelet shrinkage (VMD-WS) and VMD-based MMSE log STSA (VMD-MMSE). Speech signals distorted with different noise levels are enhanced using the VMD-based methods. The level of noise reduction and the speech signal quality are measured using objective quality measures.

B. Ganga Gowri, S. Sachin Kumar, Neethu Mohan, K. P. Soman
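The shrinkage step applied to each noisy mode in VMD-WS can be illustrated with one common rule, Donoho's universal soft threshold. This is a generic sketch; the paper's exact shrinkage settings are not specified here, and the noise level `sigma` is assumed known:

```python
import math

def soft_threshold(coeffs, sigma):
    """Universal-threshold soft shrinkage: coefficients below the
    threshold are zeroed, larger ones are shrunk towards zero."""
    n = len(coeffs)
    t = sigma * math.sqrt(2.0 * math.log(n))     # universal threshold
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]
```

Applying this per mode and summing the shrunk modes reconstructs the enhanced signal.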
Opportunistic Routing with Virtual Coordinates to Handle Communication Voids in Mobile Ad hoc Networks

Communication voids, or the unreachability problem, have been one of the major issues in mobile ad hoc networks. With highly mobile and unreliable nodes, they can occur very often and may lead to loss of data packets in the network. Most of the existing methods available to handle this issue try to find a route around the communication void. As a result, data packets may have to travel more hops to reach the destination, causing more delay in data delivery in the network. Such techniques do not allow us to fully utilize the advantages brought by geographic routing and opportunistic forwarding. In this paper we propose a method, Opportunistic Routing with Virtual Coordinates (ORVC), that improves the potential forwarding area for a packet by sending packets to multiple locations near the destination. The analysis and simulations show that with this method a data packet takes fewer hops to reach the destination compared to existing methods.

Varun G. Menon, P. M. Joe Prathap
Acoustic Echo Cancellation Technique for VoIP

An Acoustic Echo Canceller (AEC) is a crucial element for eliminating acoustic echo in Voice over Internet Protocol (VoIP) communication. In the past, AECs have been implemented on Digital Signal Processing (DSP) platforms. This work uses software to perform the very demanding real-time signal processing, and stacks up well against traditional implementations of AEC. Several issues are taken into consideration in the design of the AEC software. First, the AEC software is able to handle a mismatch in sampling rate between A/D and D/A conversion. Second, the playback sampling rate when playing CD-quality music alongside voice chat is considered. Third, the AEC has to overcome the major challenge of separating the reference signal in the presence of noise. Finally, the computational complexity and convergence problems have to be minimal. This work presents an AEC method that enables optimal audio quality in VoIP.

Balasubramanian Nathan, Yung-Wey Chong, Sabri M. Hansi
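The core of most software AECs is an adaptive filter that estimates the echo path from the far-end (reference) signal and subtracts the estimate from the microphone signal. Below is a minimal normalized LMS (NLMS) sketch of that generic principle, not the paper's specific design; the filter length, step size and test signals are illustrative:

```python
def nlms_echo_canceller(far_end, mic, taps=4, mu=0.5, eps=1e-8):
    """NLMS adaptive filter: model the echo path from the far-end signal
    and output the echo-cancelled (error) signal."""
    w = [0.0] * taps                       # adaptive filter weights
    out = []
    for n in range(len(mic)):
        # most recent `taps` far-end samples (zero-padded at the start)
        x = [far_end[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))   # echo estimate
        e = mic[n] - y                             # echo-cancelled output
        norm = sum(xi * xi for xi in x) + eps      # input power normalization
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
        out.append(e)
    return out, w
```

With a stationary echo path the residual error decays towards zero as the weights converge.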
MRAC for A Launch Vehicle Actuation System

Actuators are used to provide Thrust Vector Control (TVC) for lifting the launch vehicle and its payload to reach its space orbit. The TVC comprises actuation and power components for controlling the engine's direction and thrust to steer the vehicle. Here the actuator considered is a linear Electromechanical Actuator (EMA) system. The design is based on the Model Reference Adaptive Controller (MRAC) technique. It calculates an error value as the difference between the actual system and the desired model, and attempts to minimize the error by adjusting the controller parameters. Adaptive control laws are established with the gradient method. The experimental results show that the MRAC has better tracking performance than the traditional compensated system when both controllers are subjected to the same parameter variations. The simulation is done using MATLAB/SIMULINK software.

Elizabath Rajan, Baby Sebastian, M. M. Shinu
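The gradient (MIT-rule) adaptation mentioned above can be illustrated on a toy static-gain plant rather than the actual EMA dynamics. This is a textbook sketch under assumed gains, not the paper's controller: the plant is y = k_p·θ·r, the reference model is y_m = k_m·r, and the MIT rule dθ/dt = -γ·e·y_m drives the tracking error e = y - y_m to zero:

```python
def mrac_mit_rule(ref, plant_gain=2.0, model_gain=1.0, gamma=0.5, dt=0.1):
    """MIT-rule gradient adaptation for a static-gain plant (illustrative)."""
    theta = 0.0                           # adjustable controller parameter
    errors = []
    for r in ref:
        y_m = model_gain * r              # reference model output
        y = plant_gain * theta * r        # actual plant output
        e = y - y_m                       # tracking error
        theta -= gamma * e * y_m * dt     # MIT adaptation law (Euler step)
        errors.append(e)
    return theta, errors
```

For the gains above, θ converges to k_m/k_p = 0.5, at which point the plant tracks the model exactly.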
Analysis of Inspiratory Muscle of Respiration in COPD Patients

Chronic Obstructive Pulmonary Disease (COPD) is a disease of the lungs in which the airways become narrow. To perform the excess work of respiration, necessary and accessory muscles such as the sternomastoid (SMM) have to work. In this paper, the correlation between the obstruction level of the airways and the activity of the SMM is investigated using time- and frequency-domain features of electromyography (EMG). Spirometric data and the EMG signal of the SMM were collected for ten COPD patients. The features used for the analysis are root mean square (RMS), peak-to-peak voltage, integration and number of peaks in the time and frequency domains. The EMG features and the spirometric data are correlated. A significant correlation was found for peak-to-peak amplitude and number of peaks in the time and frequency domains.

Archana B. Kanwade, Vinayak Bairagi
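The time-domain features listed in the abstract are all simple statistics of the EMG samples. A minimal sketch (the peak-counting rule here is one plausible definition, assumed for illustration):

```python
import math

def emg_features(signal):
    """Time-domain EMG features: RMS, peak-to-peak amplitude,
    integrated EMG, and a simple local-maxima peak count."""
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    peak_to_peak = max(signal) - min(signal)
    iemg = sum(abs(s) for s in signal)           # integrated EMG
    peaks = sum(1 for i in range(1, len(signal) - 1)
                if signal[i - 1] < signal[i] > signal[i + 1])
    return {"rms": rms, "p2p": peak_to_peak, "iemg": iemg, "peaks": peaks}
```

Each feature vector can then be correlated against the spirometric obstruction measures.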
Interconversion of Emotions in Speech Using TD-PSOLA

Emotional speech plays a major role in identifying the mental state of the speaker. The presented work demonstrates the significance of incorporating dynamic variations in prosodic parameters in speech for the effective conversion of one emotion to another. Most existing methods focus on the incorporation of emotions in synthetically generated neutral speech. The proposed model discusses the usage of TD-PSOLA as an important tool in converting one emotion to another for speech samples in English language. It analyses the significance of various prosodic parameters in distinct emotions and explores how prosodic modifications to one emotion can result in different perceived emotions. The paper explores a technique for interconversion of emotions in synthesised speech and demonstrates a methodology for prosodic modifications on an English database.

B. Akanksh, Susmitha Vekkot, Shikha Tripathi

Pattern Recognition/Machine Learning/Knowledge-Based Systems

Frontmatter
A Technique of Analog Circuits Testing and Diagnosis Based on Neuromorphic Classifier

A technique for functional testing of analog integrated circuits based on a neuromorphic classifier (NC) is proposed. The structure of the NC, which provides detection of both catastrophic and parametric faults while taking into account the tolerances on the parameters of internal components, is described. The NC ensures associative fault detection, reducing diagnosis time in comparison with parametric tables. An approach to the selection of the essential characteristics used for NC training is presented. The wavelet transform of transient responses, the Monte Carlo method and statistical processing are used to select essential characteristics with maximum distance between faulty and fault-free conditions. Experimental results for an active filter demonstrating high fault coverage and a low likelihood of alpha and beta errors at diagnosis are shown.

Sergey Mosin
A Genetic PSO Algorithm with QoS-Aware Cluster Cloud Service Composition

QoS-aware cloud service composition is a crucial concern in dynamic cloud environments. Services of diverse nature are clustered together and integrated across multiple domains over the Internet. With the increasing number of private and public cloud sources, most cloud providers offer similar services; however, these differ in their functionality depending on QoS constraints. This adds complexity to choosing a clustered cloud service with optimal QoS performance, and an enhanced Genetic Particle Swarm Optimization (GPSO) algorithm is proposed to solve this problem. To construct the QoS-aware cloud composition algorithm, all the parameters, such as price, position, response time and reputation, are redefined. An Adaptive Non-Uniform Mutation (ANUM) approach is proposed to attain the globally best particle and to increase population diversity, with the aim of overcoming the premature convergence of the GPSO algorithm. The strategy is also compared with other similar techniques to assess convergence intensity. The efficiency of the proposed algorithm for QoS-aware cloud service composition is illustrated and evaluated against a Modified Genetic Algorithm (MGA), GN_S_Net and PSOA; the outcomes of the experimental assessment signify that our model significantly outperforms the existing approaches in execution time, with improved QoS performance parameters.

Mohammed Nisar Faruk, G. Lakshmi Vara Prasad, Govind Divya
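For context, the plain PSO velocity/position update that GPSO extends with genetic operators and adaptive non-uniform mutation looks as follows. This is a generic one-dimensional baseline with assumed parameter values, not the paper's GPSO:

```python
import random

def pso_minimize(f, bounds, n_particles=10, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Standard PSO: each particle is pulled towards its personal best
    and the swarm's global best; GPSO adds genetic operators on top."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                                  # personal bests
    gbest = min(xs, key=f)                         # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)  # clamp to bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest
```

In a QoS setting, `f` would aggregate the redefined parameters (price, response time, reputation) into a single fitness value.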
Anomalous Crowd Event Analysis Using Isometric Mapping

Anomalous event detection is one of the important applications in crowd monitoring. The detection of anomalous crowd events requires a feature matrix to capture the spatio-temporal information needed to localize the events and detect the outliers. However, feature matrices often become computationally expensive with a large number of features, which becomes critical for large-scale and real-time video analytics. In this work, we present a fast approach to detect anomalous crowd events and frames. First, to detect anomalous crowd events, the motion features are captured using optical flow, and a feature matrix of motion information is constructed and then subjected to nonlinear dimensionality reduction (NDR) using Isometric Mapping (ISOMAP). Next, to detect anomalous crowd frames, the method uses four statistical features, computed by dividing the frames into blocks and then calculating the statistical features for the blocks where objects were present. The main focus of this study is to understand the effect of large feature matrix size on detecting the anomalies with respect to computational time. Experiments were conducted on two datasets: (1) Performance Evaluation of Tracking and Surveillance (PETS) 2009 and (2) Melbourne Cricket Ground (MCG) 2011. The experimental results suggest that ISOMAP NDR reduces the computation time significantly, by more than ten times, when detecting anomalous crowd events and frames. In addition, the experiments revealed that ISOMAP provides an upper bound on the computational time.

Aravinda S. Rao, Jayavardhana Gubbi, Marimuthu Palaniswami
Access Control System Which Uses Human Behavioral Profiling for Authentication

“An access control device has been used for securing valuable properties and lives from sinister people” [1]. Recently, these security device technologies have been improved tremendously in providing security for people through various methods. “Nevertheless, these methods are not individually perfect to provide optimal security. These, most of the time, are not convenient for a wide range of users (i.e., the innocent users who do not pose any threat) due to access time delay and several layers of authentication. The proposed security system should exhibit capabilities that support adaptive security procedures for a diverse range of users, so most innocent users require a minimum layer of identity authentication and verification while suspicious users may require passing through some additional layers of security authentication and verification” [1]. This paper proposes “a novel smart access control (SAC) system, which can identify and categorize suspicious users from the analysis of one’s activities and bio information. The SAC system observes and records users’ daily behavioral activities. From the analysis of the collected data, it selectively chooses certain users for additional layers of authentication procedure and quickly isolates those individuals who might pass through scrutiny by security personnel. Due to this adaptive feature, the SAC system not only minimizes delays and provides more convenience to the users but also enhances the security measure” [1]. Moreover, a novel idea of DPIN is proposed: a concept that uses the memory and analytical ability of a user to dynamically generate and update one's security key.

Lohit Penubaku, Jong-Hoon Kim, Sitharama S. Iyengar, Kadbur A. Shilpa
Mathematical Morphology and Region Clustering Based Text Information Extraction from Malayalam News Videos

Innovations in technologies like improved internet data transfer, advanced digital data compression algorithms, enhancements in web technology, etc. have enabled the exponential growth of digital multimedia data. Among the massive multimedia data, news videos are of higher priority due to their rich, up-to-date information and historical evidence. This data is growing rapidly and in an unpredictable fashion, which requires an efficient and powerful method to index and retrieve such massive data. Even though manual indexing is the most effective, it is also the slowest and most expensive. Hence automatic video indexing is considered an important research problem to be addressed uniquely. In this work, we propose a Mathematical Morphology and Region Clustering based Text Information Extraction (TIE) from Malayalam news videos for Content Based Video Indexing and Retrieval (CBVIR). The morphological gradient acts as an edge detector, enhancing the intensity variations to detect the text regions. Further, agglomerative clustering is performed to select the significant text regions. The precision, recall and F1-measure obtained for the proposed approach are 87.45%, 94.85% and 0.91 respectively.

K. Anoop, Manjary P. Gangan, V. L. Lajish
The Impacts of ICT Support on Information Distribution, Task Assignment for Gaining Teams’ Situational Awareness in Search and Rescue Operations

Information and Communication Technology (ICT) has changed the way we communicate and work. To study the effects of ICT on Information Distribution (ID) and Task Assignment (TA) for gaining Teams’ Situational Awareness (TSA) across and within rescue teams, an indoor fire game was played with students. We used two settings (smartphone-enabled support vs. traditional walkie-talkies) to analyze the impact of technology on ID and TA for gaining TSA in a simulated Search and Rescue operation. The results presented in this paper combine observations and quantitative data from a survey conducted after the game. The results indicate that the use of ICT was more effective in the second scenario than in the first for ID and TA for gaining TSA. This may be explained by the technology being preferable and more effective for information sharing, for gaining TSA and for clear task assignment.

Vimala Nunavath, Jaziar Radianti, Tina Comes, Andreas Prinz
A Hybrid Approach to Rainfall Classification and Prediction for Crop Sustainability

Indian agriculture is primarily dependent on the distribution of rainfall throughout the year. There have been several instances where crops have failed due to inadequate rainfall. This study aims at predicting rainfall by considering those factors that are correlated with precipitation across various crop-growing regions in India, using regression analysis on historical rainfall data. Additionally, we have used season-wise rainfall data to classify different states by their suitability for growing major crops. We have divided the rainfall into four seasons: winter, pre-monsoon, monsoon, and post-monsoon. Finally, a bipartite cover is used to determine the optimal set of states required to produce all the major crops in India, by selecting a specific set of crops to be grown in every state while selecting the fewest states needed to achieve this. The data used in this paper are taken from the Indian Meteorological Department (IMD) and the Open Government Data (OGD) Platform India published by the Government of India.

Prajwal Rao, Ritvik Sachdev, Tribikram Pradhan
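The bipartite-cover step (fewest states covering all major crops) is an instance of set cover, commonly approximated greedily. A minimal sketch with hypothetical state/crop data, not the paper's actual IMD-derived suitability sets:

```python
def greedy_state_cover(state_crops, all_crops):
    """Greedy set-cover approximation: repeatedly pick the state whose
    suitable crops cover the most still-uncovered crops."""
    uncovered = set(all_crops)
    chosen = []
    while uncovered:
        state = max(state_crops, key=lambda s: len(state_crops[s] & uncovered))
        gain = state_crops[state] & uncovered
        if not gain:
            break                      # remaining crops cannot be covered
        chosen.append(state)
        uncovered -= gain
    return chosen
```

The greedy rule gives a ln(n)-factor approximation to the minimum number of states.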
Development and Evaluation of Automated Algorithm for Estimation of Winds from Wind Profiler Spectra

The National Atmospheric Research Laboratory (NARL) hosts several atmospheric instruments, of which the MST Radar is treated as a workhorse; it records wind observations up to an altitude of 100 km. It is a great tool for studying atmospheric dynamics. These radars are capable of measuring air motions over a wide range of heights with good spatial and temporal resolution. The derivation of wind components from spectral data involves noise-level estimation followed by moment estimation. In the present context, the Hildebrand and Sekhon method is utilized for noise-level estimation in each range bin, employing the physical properties of white noise. Interference due to stationary targets appears as dc components, which can be removed using conventional methods such as replacing the spectral power at zero frequency with the average of the power at adjacent spectral points; this is very important for distinguishing actual signals from interference echoes. To automate the data analysis for obtaining the wind components, it is mandatory to develop algorithms for the characterization of signal and interference echoes. In this work, an automated algorithm is developed for the estimation of moments and is applied to real-time radar data from the MST region near Gadanki, Tirupati.

E. Ramyakrishna, T. Narayana Rao, N. Padmaja
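The Hildebrand–Sekhon noise-level estimate mentioned above exploits the fact that, for white noise, the ratio of squared mean to variance of the spectral powers is bounded. A minimal sketch of one common formulation (signal points are discarded from the top until the white-noise criterion holds; `n_avg` is the number of incoherent spectral averages, assumed 1 here):

```python
def hildebrand_sekhon(spectrum, n_avg=1):
    """Objective noise-level estimate: keep the largest subset of spectral
    points statistically consistent with white noise (mean^2/var >= n_avg)."""
    s = sorted(spectrum, reverse=True)
    while len(s) > 1:
        mean = sum(s) / len(s)
        var = sum((x - mean) ** 2 for x in s) / len(s)
        if var > 0 and mean * mean / var < n_avg:
            s.pop(0)          # strongest point looks like signal; discard it
        else:
            break             # white-noise criterion satisfied
    return sum(s) / len(s)
```

The returned level then serves as the floor above which Doppler moments are computed for each range bin.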
A Real Time Patient Monitoring System for Heart Disease Prediction Using Random Forest Algorithm

The proposed work suggests the design of a health care system that provides various services to monitor patients using wireless technology. It is an intelligent remote patient monitoring system which integrates patient monitoring of various sensitive parameters with wireless devices and integrated mobile and IT solutions. The system mainly provides a solution for heart diseases by monitoring heart rate and blood pressure. It also acts as a decision-making system which reduces the time before treatment. Apart from the decision-making techniques, it generates and forwards alarm messages to the relevant caretakers by means of various wireless technologies. The proposed system suggests a framework for measuring the heart rate, temperature and blood pressure of the patient using a wearable gadget, and the measured parameters are transmitted to a Bluetooth-enabled Android smartphone. The various parameters are analyzed and processed by an Android application on the client side. The processed output is transferred to the server side at periodic intervals. Whenever emergency care is required, an alert message is forwarded to the various care providers by the client-side application. The use of various wireless technologies like GPS, GPRS and Bluetooth allows us to monitor the patient remotely. The system is said to be intelligent because of its diagnosis capability, timely alerts for medication, etc. Current statistics show that heart disease is the leading cause of death, which underlines the importance of technology in providing a solution for reducing the cardiac arrest rate. Apart from that, the proposed work compares different algorithms and proposes the use of the Random Forest algorithm for heart disease prediction.

S. Sreejith, S. Rahul, R. C. Jisha
Phoneme Selection Rules for Marathi Text to Speech Synthesis with Anuswar Places

This paper presents phoneme selection rules for Marathi articulation of Anuswar places in speech synthesis. In Marathi literature, written text contains Anuswar at different places on Marathi alphabets. In this paper, rules are defined for the appropriate selection of phonemes for Anuswar alphabets in the context of developing Marathi text-to-speech systems. This increases the naturalness of the output speech of the TTS system. The paper proposes a procedure for phoneme selection based on the place of the Anuswar on the alphabet. A decision tree algorithm is proposed to define simple rules for articulation in speech synthesis with high intelligibility. A confusion matrix is given to show the performance of decision tree training and testing.

Manjare Chandraprabha Anil, S. D. Shirbahadurkar
Solving Multi Label Problems with Clustering and Nearest Neighbor by Consideration of Labels

In any multi-label classification problem, each instance is associated with multiple class labels. In this paper, we aim to accurately predict the class labels of the test data using an improved multi-label classification approach. The method is based on a framework comprising an initial clustering phase followed by rule extraction in label space using the FP-Growth algorithm. To predict the labels of a new test data instance, the technique searches for the nearest cluster and then locates the k-nearest neighbors within the corresponding cluster. The labels for the test instance are estimated by the prior probabilities of the already predicted labels. By doing so, this scheme utilizes the advantages of a hybrid approach combining clustering and association rule mining. The proposed algorithm was tested on standard multi-label datasets such as yeast and scene. It achieved an overall accuracy of 81% on the scene dataset and 68% on the yeast dataset.

C. P. Prathibhamol, Asha Ashok
Multi-view Robotic Time Series Data Clustering and Analysis Using Data Mining Techniques

In the present world, robots are used in various spheres of life. In all these areas, knowledge of the environment is required to perform appropriate actions. Information about the environment is collected with the help of onboard sensors and an image capturing device mounted on the mobile robot. As the information collected is of huge volume, data mining offers the possibility of discovering hidden knowledge from this large amount of data. Clustering, an important aspect of data mining, is explored in detail for grouping the scenario from multiple views.

M. Reshma, Priyanka C. Nair, Radhakrishnan Gopalapillai, Deepa Gupta, TSB Sudarshan
Twofold Detection of Multilingual Documents Using Local Features

Twofold detection of document images plays an important role in document image analysis. This paper presents a novel approach to detect twofold document images by extracting local features such as moments, texture and foreground pixel density. The performance of the proposed system is evaluated based on the criteria of data schemes, features and various distance metrics. Experimental results on different datasets demonstrate that the proposed method is flexible enough to handle multilingual documents and provides better performance on historical, printed and handwritten documents. The performance of the proposed approach is analyzed with the local features alone, and better performance is observed when combined features are taken into account. Regarding the distance metric criterion, the earth mover's distance for similarity measurement outperforms the other distance measures.

Glaxy George, M. Sreeraj

Workshop on Advances in Image Processing, Computer Vision, and Pattern Recognition (IWICP-2015)

Frontmatter
On Paper Digital Signature (OPDS)

The requirements for authenticating the identity of the sender and ensuring content integrity in printed paper documents have been growing. Paper documents are still used in various secure transactions. Manual verification of a large number of documents to authenticate the sender and the content is difficult. Digital signatures can be implemented on each printed paper document to fulfill these requirements. The existing technologies make use of image processing techniques for creating and verifying digital signatures and are vulnerable to the physical condition of the document. Another currently available method takes the whole content of the document to achieve the requirement and requires more complex implementation procedures. This paper presents a novel scheme for securing the authenticity and originality of certificates issued by an authority by implementing unique digital signatures created from selected contents (words or characters) of the documents. The method assures content integrity and sender identity authentication.

Sajan Ambadiyil, V. B. Vibhath, V. P. Mahadevan Pillai
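The idea of signing only selected contents can be sketched as: canonicalize the chosen words, hash them, then authenticate the digest with the issuing authority's key. For a self-contained illustration an HMAC stands in for the paper's public-key digital signature (a deliberate simplification); the words and key below are hypothetical:

```python
import hashlib
import hmac

def sign_selected_content(words, secret_key):
    """Compact signature over selected document words: canonicalize,
    hash, then authenticate the digest with a keyed MAC."""
    canonical = "\x1f".join(w.strip().lower() for w in words)
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return hmac.new(secret_key, digest, hashlib.sha256).hexdigest()

def verify_selected_content(words, secret_key, signature):
    """Recompute the signature from the printed words and compare
    in constant time; any tampered word breaks verification."""
    return hmac.compare_digest(
        sign_selected_content(words, secret_key), signature)
```

The hex signature could be printed on the certificate (e.g. as a barcode) and re-verified from the same selected words at inspection time.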
Scale Invariant Detection of Copy-Move Forgery Using Fractal Dimension and Singular Values

Digital image forgery is a nightmare in the current scenario. The authenticity of images circulating through media and public outlets is therefore critical, with a caution: "Do not believe everything you see". Copy-move forgery is the most frequently created image forgery, concealing a particular feature of the scene by replacing it with another feature from the same image. In this paper, a hybrid approach based on local fractal dimension (LFD) and singular value decomposition (SVD) to efficiently detect and localize the copy-move forged region is proposed. In order to reduce the computational complexity of the classification procedure, we propose to arrange image blocks in a B+ tree structure ordered by their LFD values. Pairs of blocks within each segment are compared using singular values to find regions that exhibit maximum resemblance. This restricts the comparisons to the most suspect image portions alone. The experimental results show how effectively the method identifies the duplicated region; they also demonstrate the robustness of the method in detecting and localizing forgery even in the presence of after-copying manipulations such as rotation, blurring and noise addition. Furthermore, it also detects multiple copy-move forgeries within the image.

Rani Susan Oommen, M. Jayamohan, S. Sruthy
Adaptive Nonlocal Filtering for Brain MRI Restoration

Brain magnetic resonance images (MRI) play a crucial role in neuroscience and medical diagnosis. Denoising brain MRI images is an important pre-processing step required in many automatic computer-aided diagnosis systems in neuroscience. Recently, nonlocal means (NLM) filters and their variants, which are widely used for Gaussian noise removal in digital image processing, have been adapted to handle the Rician noise which occurs in MRI. One of the crucial ingredients for successful image filtering with NLM is the patch similarity. In this work we consider the use of a fuzzy Gaussian mixture model (FGMM) for determining the patch similarity in NLM, instead of the usual Euclidean distance. Experimental results at different noise levels on synthetic and brain MRI images are given to highlight the advantage of the proposed approach. Compared with other image filtering methods, our scheme obtains better results in terms of peak signal to noise ratio and structure preservation.

V. B. Surya Prasath, P. Kalavathi
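The NLM estimate to which the patch-similarity change applies can be sketched for a single pixel as a similarity-weighted average of patch centres. The standard Euclidean patch distance is shown here for illustration; the paper replaces exactly this distance with an FGMM-based similarity. Patches are given as flat lists of odd length:

```python
import math

def nlm_denoise_pixel(patches, center_idx, h):
    """Nonlocal-means estimate for one pixel: average the centre values
    of all candidate patches, weighted by exp(-d^2 / h^2) patch similarity."""
    ref = patches[center_idx]
    num = den = 0.0
    for p in patches:
        # mean squared Euclidean distance between patches (the part
        # the paper swaps for an FGMM-based similarity)
        d2 = sum((a - b) ** 2 for a, b in zip(ref, p)) / len(ref)
        w = math.exp(-d2 / (h * h))
        num += w * p[len(p) // 2]       # patch centre value
        den += w
    return num / den
```

The smoothing parameter `h` controls how quickly dissimilar patches are down-weighted.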
Automatic Leaf Vein Feature Extraction for First Degree Veins

The leaf vein is one of the most important and complex features of the leaf used in automatic plant identification systems for the classification and identification of plant species. Leaves of different species have characteristic features which aid the classification of specific species, and these features help botanists identify key plant species more accurately from leaf images. In this paper we propose a new feature extraction model to extract vein features from leaf images. The proposed system uses Hough lines to extract vein features from the leaf images by plotting lines over the first-degree veins. The angle of the lines from the primary vein to the secondary veins is considered as the input parameter for processing the extracted vein features, with the centroid vein angle as the primary feature. The vein features were given as input to a neural network for efficient classification, and the results were tested with 15 species of plants taken from the "leafilia" dataset.

S. Sibi Chakkaravarthy, G. Sajeevan, E. Kamalanaban, K. A. Varun Kumar
Singular Value Decomposition Based Image Steganography Using Integer Wavelet Transform

Transform-domain steganography techniques embed the secret message in significant areas of the cover image. These techniques are generally more robust against common image processing operations. In this paper, we propose an image steganography method using singular value decomposition (SVD) and the integer wavelet transform (IWT). SVD and IWT strengthen the performance of image steganography and improve the perceptual quality of the stego images. Results have been obtained on standard image datasets and compared with discrete cosine transform (DCT) and redundant discrete wavelet transform (RDWT) based image steganography methods using peak signal to noise ratio (PSNR) and correlation coefficient (CC) metrics. Experimental results show that the proposed SVD- and IWT-based method provides more robustness against image processing and geometric attacks such as JPEG compression, low-pass filtering, median filtering, noise addition, scaling, rotation and histogram equalization.

Siddharth Singh, Rajiv Singh, Tanveer J. Siddiqui
Unsupervised Learning Based Video Surveillance System Established with Networked Cameras

The paper presents an autonomous video surveillance system for tracking moving objects in a networked camera environment. The system is validated by identifying an authenticated user's vehicle, which is provided with a unique sticker as well as a vehicle registration number. Multi-camera tracking is implemented on the basis of a decentralized hand-over procedure between adjacent cameras. The object of interest in the source image is learnt as a single tracking instance and eventually shared autonomously among the other cameras in the network. Thus, moving objects are continuously tracked without central supervision, and the system can be scaled up for monitoring vehicle traffic and other remote surveillance applications.

R. Venkatesan, P. Dinesh Anton Raja, A. Balaji Ganesh
Automatic Left Ventricle Segmentation in Cardiac MRI Images Using a Membership Clustering and Heuristic Region-Based Pixel Classification Approach

This study demonstrates an automated fast technique for left ventricle boundary detection in cardiac MRI for the assessment of cardiac disease in heart patients. In this work, fully automatic left ventricle extraction is achieved with a membership clustering and heuristic region-based pixel classification approach. This novel heuristic region-based and membership-clustering technique obviates the human intervention necessary in active contour and level set deformable techniques. Automatic extraction of the left ventricle is performed on every frame of the MRI data in the end-diastole to end-systole to end-diastole cardiac cycle, in an average of 0.7 seconds per slice. Manual tracing of the left ventricle wall in the multi-slice MRI images by radiologists was employed for validation. Ejection Fraction (EF), End Diastolic Volume (EDV) and End Systolic Volume (ESV) were the clinical parameters estimated using the left ventricle area and volume measured from the automatic and manual segmentations.

Vinayak Ray, Ayush Goyal
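The clinical indices named in the abstract follow directly from the segmented LV areas: volumes by the slice-summation (Simpson's) rule and EF = (EDV − ESV) / EDV. A minimal sketch with hypothetical slice areas; units are whatever the area and thickness are measured in:

```python
def cardiac_indices(areas_ed, areas_es, slice_thickness):
    """EDV/ESV by slice summation (area x thickness per slice),
    then ejection fraction as a percentage."""
    edv = sum(areas_ed) * slice_thickness   # end-diastolic volume
    esv = sum(areas_es) * slice_thickness   # end-systolic volume
    ef = (edv - esv) / edv * 100.0          # ejection fraction, %
    return edv, esv, ef
```

The same formula applies to both the automatic and the manually traced segmentations, making the two directly comparable.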
Mixed Noise Removal Using Hybrid Fourth Order Mean Curvature Motion

Image restoration is one of the fundamental problems in digital image processing. Although there exists a wide variety of methods for removing additive Gaussian noise, relatively few works tackle the problem of removing mixed noise from images. In this work we utilize a new hybrid partial differential equation (PDE) model for mixed-noise-corrupted images. By using a combination of mean curvature motion (MMC) and a fourth order diffusion (FOD) PDE, we study a hybrid method to deal with mixtures of Gaussian and impulse noise. The MMC-FOD hybrid model is implemented using an efficient essentially non-dissipative (ENoD) scheme for the MMC first, to eliminate the impulse noise with no dissipation. The FOD component is implemented using an explicit finite differences scheme. Experimental results indicate that our scheme obtains optimal denoising in mixed noise scenarios and also outperforms related schemes in terms of signal to noise ratio improvement and structural similarity.

V. B. Surya Prasath, P. Kalavathi
An Implicit Segmentation Approach for Telugu Text Recognition Based on Hidden Markov Models

Telugu text is composed of aksharas (characters). The presence of split and connected aksharas in Telugu document images causes segmentation difficulties and degrades the performance of Telugu OCR systems. Our novel approach to this problem uses implicit segmentation for recognizing words: words need not be segmented into aksharas before they are recognized. Hidden Markov models (HMMs) have been applied successfully to phoneme recognition in automatic speech recognition without prior segmentation of the speech into phonemes, and we adopt the same strategy here. In this paper, we report on the use of continuous-density HMMs to represent the shapes of aksharas in a Telugu text recognition system. The sliding-window method is used to compute simple statistical features, and 450 akshara HMMs are trained. We use a word bigram language model as contextual information. Word recognition relies on the akshara models and the contextual information of words, and involves finding the maximum-likelihood sequence of akshara models that matches the feature vector sequence. Our system recognizes words with split and connected aksharas, and its performance is encouraging.

D. Koteswara Rao, Atul Negi
Detection of Copy-Move Forgery in Images Using Segmentation and SURF

In this era of multimedia and information explosion, the cheap availability of software and hardware lets everyone capture, edit, and publish images without much difficulty. Image editing done with malicious intent, known as image tampering, may affect individuals, society, the economy, and so on. Copy-move forgery is one of the most common and easiest image tampering methods: a patch of an image is copied and pasted within the same image, either to conceal objects in the image or to conceal the artifacts of image editing. In this paper, we propose a new method to detect copy-move tampering in images, without prior image information, using over-complete segmentation and keypoint detection. Experimental results on standard datasets show that the proposed method is tolerant to post-processing operations such as blurring, JPEG compression, and noise addition. Our method is also effective in detecting copy-move forgery where the copied portions are subjected to geometric transformations such as translation, rotation, and scaling.
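The core idea of copy-move detection can be illustrated with a naive block-matching sketch: identical blocks at different positions flag candidate cloned regions. This is a simplified stand-in for the paper's segmentation-plus-SURF pipeline and, unlike it, is not robust to rotation, scaling, or post-processing:

```python
# Naive block-matching sketch for copy-move detection on a tiny 2-D image.
# Exact matching only; illustrative stand-in for segmentation + SURF.
from collections import defaultdict

def find_cloned_blocks(image, block=2):
    """image: 2-D list of pixel values. Returns groups of origins of identical blocks."""
    h, w = len(image), len(image[0])
    seen = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                        for dy in range(block))
            seen[key].append((y, x))
    return [locs for locs in seen.values() if len(locs) > 1]

img = [[1, 2, 0, 1, 2],   # the 2x2 patch at (0,0) is cloned at (0,3)
       [3, 4, 0, 3, 4],
       [5, 6, 7, 8, 9]]
matches = find_cloned_blocks(img)
```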

V. T. Manu, B. M. Mehtre
Automatic Pattern Recognition for Detection of Disease from Blood Drop Stain Obtained with Microfluidic Device

This paper investigates automatic detection of disease from the pattern of dried micro-drop blood stains from a patient’s blood sample. This method has the advantage of being substantially cost-effective, painless, and less invasive, and is quite effective for disease detection in newborns and the aged. Disease affects the physical properties of blood, which in turn affect the deposition patterns of dried blood micro-droplets. For example, a low platelet count results in thinning of the blood, i.e. a change in viscosity, one of the physical properties of blood. Hence, blood micro-drop stain patterns can be used for diagnosing disease. This paper presents automatic analysis of dried micro-drop blood stain patterns using computer vision and pattern recognition algorithms. The patterns of micro-drop blood stains of normal non-diseased individuals are clearly distinguishable from those of diseased individuals. As a case study, the micro-drop blood stains of patients infected with tuberculosis have been compared to those of normal non-diseased individuals. The paper delves into the underlying physics of how the deposition pattern of the dried micro-drop blood stain is formed. We observe a thick ring-like pattern in the dried micro-drop blood stains of non-diseased individuals and thin contour-like lines in those of patients with tuberculosis infection. The ring-like pattern is due to capillary flow, an outward flow carrying suspended particles to the edge. Concentric rings (caused by inward Marangoni flow) and central deposits are among the other patterns observed in the dried micro-drop blood stains of normal non-diseased individuals.
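One simple way to capture the thick-edge-ring versus central-deposit distinction is a radial-profile feature. The function below is a hypothetical illustration, not the paper's feature set: it measures what fraction of dark (deposit) pixels fall in the outer radius band of the stain.

```python
# Hypothetical radial feature: fraction of deposit (dark) pixels in the
# outer 20% radius band. A high value suggests an edge ring (capillary
# flow); a low value suggests a central deposit. Threshold is illustrative.
import math

def ring_fraction(image, threshold=128):
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rmax = min(cy, cx)
    outer = total = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] < threshold:            # dark pixel = deposit
                total += 1
                if math.hypot(y - cy, x - cx) >= 0.8 * rmax:
                    outer += 1
    return outer / total if total else 0.0

# Toy 5x5 stains: a dark border ring vs. a single central deposit.
ring = [[0 if y in (0, 4) or x in (0, 4) else 255 for x in range(5)] for y in range(5)]
center = [[0 if (y, x) == (2, 2) else 255 for x in range(5)] for y in range(5)]
```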

Basant S. Sikarwar, Mukesh Roy, Priya Ranjan, Ayush Goyal

Workshop on Signal Processing for Wireless and Multimedia Communications (SPWMC’15)

Frontmatter
STAMBA: Security Testing for Android Mobile Banking Apps

Mobile banking plays a major role in M-Commerce (Mobile Commerce) applications in our daily life. With the increasing usage of mobile phones, vulnerabilities targeting these devices have risen sharply, and the privacy and security of confidential financial data is one of the major issues on mobile devices. Android is the most popular operating system among users, companies, vendors, and developers alike; for the same reason, it has also become quite popular with malicious adversaries, and mobile security and risk assessment specialists and security engineers are in high demand. In this paper, we propose STAMBA (Security Testing for Android Mobile Banking Apps) and demonstrate supporting tools at three levels: the mobile application code level, the communication or network level, and the device level. We give a detailed discussion of vulnerabilities that informs further app development, together with detailed automated security testing of mobile banking applications.

Sriramulu Bojjagani, V. N. Sastry
An Efficient and Secure RSA Based Certificateless Signature Scheme for Wireless Sensor Networks

Certificateless signature cryptography is an efficient and widely studied approach because it eliminates the need for a certificate authority in the public key infrastructure. There are a number of security algorithms based on public key cryptography (PKC) and identity-based certificateless schemes using bilinear-map assumptions, but the implementation costs of these schemes are very high. RSA, on the other hand, is widely applied in real-life scenarios. In this paper we therefore propose an RSA-based certificateless signature scheme designed especially for wireless sensor networks. We show that the scheme is secure in the random oracle model, a framework for proving the security of cryptographic algorithms that use hash functions and exponentiation. The security of the scheme is closely related to the discrete logarithm problem. Our scheme provides security against Type I and Type II attacks, and also ensures integrity, non-repudiation, and authentication.
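For readers unfamiliar with RSA signatures, here is a textbook hash-then-sign sketch with a tiny demonstration key. This illustrates only the basic RSA primitive, not the paper's certificateless construction; real deployments need large keys and padding (e.g. PSS), and the modulus and message below are illustrative only:

```python
# Textbook RSA hash-then-sign with a tiny demo key (p = 61, q = 53).
# For illustration only: insecure key size, no padding.
import hashlib

n, e, d = 3233, 17, 2753     # n = 61 * 53; e*d = 1 mod phi(n) = 3120

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)                      # signature = H(m)^d mod n

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h         # check H(m) == sig^e mod n

sig = sign(b"sensor reading 42")
ok = verify(b"sensor reading 42", sig)       # True
bad = verify(b"tampered reading", sig)       # almost certainly False
```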

Jitendra Singh, Vimal Kumar, Rakesh Kumar
SeaMoX: A Seamless Mobility Management Scheme for Real-Time Multimedia Traffic Over Cellular Networks

Real-time multimedia applications are often deployed to provide critical information in situations such as news coverage of an event or an incident, or an ambulance rushing to provide emergency care. Mobile cellular coverage and the performance of a provider network vary both spatially and temporally, challenging the network’s ability to support uninterrupted access and application continuity. We propose that inter-provider handoffs can alleviate this problem, given the recent proliferation of Dual SIM Dual Active (DSDA) devices and the possibility of Multi SIM Multi Active devices obtained by configuring laptops with multiple wireless broadband connections. We propose an application-QoS-aware mobility management approach. The software implementation, termed SeaMoX, is based on SeaMo+, an earlier implementation. We use live video streaming as an example application and demonstrate the impact of network selection and handoff by examining playback at the receiver, which uses an adaptive jitter buffer algorithm.
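The adaptive playout rule behind a jitter buffer can be sketched as follows: track a smoothed one-way delay and a smoothed delay variation (in the style of RFC 3550's jitter estimate), and schedule playback at mean delay plus a multiple of the jitter. The constants and delay samples below are typical textbook values, not taken from the SeaMoX implementation:

```python
# Sketch of an adaptive playout-delay rule for a receiver jitter buffer.
# Smoothing constants and delay samples are illustrative.

class AdaptiveJitterBuffer:
    def __init__(self, alpha=0.998, beta=0.9375):
        self.mean_delay = None   # EWMA of one-way delay (ms)
        self.jitter = 0.0        # EWMA of |delay deviation| (ms)
        self.alpha, self.beta = alpha, beta

    def on_packet(self, delay_ms):
        if self.mean_delay is None:
            self.mean_delay = float(delay_ms)
            return
        self.jitter = self.beta * self.jitter + (1 - self.beta) * abs(delay_ms - self.mean_delay)
        self.mean_delay = self.alpha * self.mean_delay + (1 - self.alpha) * delay_ms

    def playout_delay(self):
        # Play back behind arrival by the mean delay plus a safety margin.
        return self.mean_delay + 4.0 * self.jitter

buf = AdaptiveJitterBuffer()
for d in [40, 42, 38, 60, 41]:   # observed one-way delays in ms
    buf.on_packet(d)
target = buf.playout_delay()     # grows when delay variation spikes
```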

D. Kumaresh, N. Suhas, S. Garge Gopi Krishna, S. V. R. Anand, Malati Hegde
Backmatter
Metadata
Title
Advances in Signal Processing and Intelligent Recognition Systems
Edited by
Sabu M. Thampi
Sanghamitra Bandyopadhyay
Sri Krishnan
Kuan-Ching Li
Sergey Mosin
Maode Ma
Copyright year
2016
Electronic ISBN
978-3-319-28658-7
Print ISBN
978-3-319-28656-3
DOI
https://doi.org/10.1007/978-3-319-28658-7