
About this Book

This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications.

The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing.

The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection.

The technical contributions in the areas of classification and retrieval are categorized as related to high-level operations. They discuss some state-of-the-art approaches – extreme learning machines, and target, gesture and action recognition. A non-regularized state preserving extreme learning machine is presented for natural scene classification. An algorithm for human action recognition through dynamic frame warping based on depth cues is given. Target recognition in night vision through convolutional neural network is also presented. Use of convolutional neural network in detecting static hand gesture is also discussed.

Finally, the technical contributions in the areas of surveillance, coding and data security, and biometrics and document processing are considered as applications of computer vision and image processing. They discuss some contemporary applications. A few of them are a system for tackling blind curves, a quick reaction target acquisition and tracking system, an algorithm to detect copy-move forgery based on circle block, a novel visual secret sharing scheme using affine cipher and image interleaving, a finger knuckle print recognition system based on wavelet and Gabor filtering, and a palmprint recognition method based on minutiae quadruplets.

Table of Contents

Frontmatter

Background Modeling Using Temporal-Local Sample Density Outlier Detection

Although researchers have proposed different kinds of techniques for background subtraction, we still need more efficient algorithms in terms of adaptability to multimodal environments. We present a new background modeling algorithm based on temporal-local sample density outlier detection. We use the temporal-local densities of pixel samples as the decision measurement for background classification, with which we can deal with dynamic backgrounds more efficiently and accurately. Experimental results show the outstanding performance of our proposed algorithm in multimodal environments.

Wei Zeng, Mingqiang Yang, Feng Wang, Zhenxing Cui

Analysis of Framelets for the Microcalcification

Mammography is commonly used to detect breast cancer at an early stage. Early detection helps avoid breast removal in women and even death caused by breast cancer. Many computer-aided software tools are designed to detect breast cancer, but still only the biopsy method is effective in predicting the exact scenario, and biopsy is a painful technique. To avoid this, a novel classification approach for classifying microcalcification clusters based on the framelet transform is proposed. Real-time mammography images were collected from Sri Ramachandra Medical Centre, Chennai, India in order to evaluate the performance of the proposed system. The GLCM features (contrast, energy and homogeneity) are extracted from the framelet-decomposed mammograms at different resolution levels, and a support vector machine classifier is used to classify unknown mammograms as normal or abnormal initially, and then further as benign or malignant if detected as abnormal. The results show that framelet transform-based classification provides considerable classification accuracy.

K. S. Thivya, P. Sakthivel
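The following is a minimal illustrative sketch (not the authors' code) of the GLCM-plus-SVM stage described above, assuming 8-bit grayscale patches and toy binary labels in place of the framelet-decomposed mammograms.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.svm import SVC

def glcm_features(patch):
    """Contrast, energy and homogeneity averaged over four GLCM directions."""
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "energy", "homogeneity")])

# toy data: stand-ins for framelet-decomposed mammogram patches and real labels
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)          # 0 = normal, 1 = abnormal (toy)

X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```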

Reconfigurable Architecture-Based Implementation of Non-uniformity Correction for Long Wave IR Sensors

Infrared (IR) imaging systems have various applications in military and civilian sectors. Most modern imaging systems are based on Infrared Focal Plane Arrays (IRFPAs), which consist of an array of detector elements placed at the focal plane of the optics module. The performance of IRFPAs operating in the Medium Wave Infrared (MWIR) and Long Wave Infrared (LWIR) spectral bands is strongly affected by spatial and temporal Non-Uniformity (NU). Due to differences in the photo response of detector elements within the array, Fixed-Pattern Noise (FPN) becomes severe. To exploit the potential of current generation infrared focal plane arrays, it is crucial to correct IRFPAs for fixed-pattern noise. Different Non-Uniformity Correction (NUC) techniques are discussed, and the real-time performance of two-point non-uniformity correction for the IR band is presented in this paper. The proposed scheme corrects both gain and offset non-uniformities. The techniques have been implemented in reconfigurable hardware (FPGA) and exploit BlockRAM memories to store the gain and offset coefficients in order to achieve real-time performance. NUC results for a long-range LWIR imaging system are also presented.

Sudhir Khare, Brajesh Kumar Kaushik, Manvendra Singh, Manoj Purohit, Himanshu Singh
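A minimal numerical sketch of generic two-point gain/offset non-uniformity correction (the classical technique named above, not the authors' FPGA implementation), assuming calibration frames captured against uniform cold and hot blackbody references:

```python
import numpy as np

def two_point_nuc_coefficients(cold_frame, hot_frame):
    """Per-pixel gain/offset mapping raw counts onto the array-mean response."""
    cold = cold_frame.astype(np.float64)
    hot = hot_frame.astype(np.float64)
    gain = (hot.mean() - cold.mean()) / (hot - cold)   # per-pixel gain
    offset = cold.mean() - gain * cold                  # per-pixel offset
    return gain, offset

def apply_nuc(raw_frame, gain, offset):
    # in hardware, gain/offset tables would live in BlockRAM; here plain arrays
    return gain * raw_frame.astype(np.float64) + offset

# toy example with synthetic fixed-pattern noise
rng = np.random.default_rng(1)
true_gain = 1 + 0.1 * rng.standard_normal((4, 4))
true_off = 50 * rng.standard_normal((4, 4))
cold = true_gain * 1000 + true_off
hot = true_gain * 4000 + true_off
g, o = two_point_nuc_coefficients(cold, hot)
corrected = apply_nuc(true_gain * 2500 + true_off, g, o)
print(corrected.std())   # close to zero: fixed-pattern noise removed
```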

Finger Knuckle Print Recognition Based on Wavelet and Gabor Filtering

Biometric systems are used for identification- and verification-based applications such as e-commerce, physical access control, banking, and forensics. Among several kinds of biometric identifiers, the finger knuckle print (FKP) is a promising biometric trait in the present scenario because of its textural features. In this paper, the wavelet transform (WT) and Gabor filters are used to extract FKP features. The WT approach decomposes the FKP into different frequency subbands, whereas Gabor filters are used to capture orientation and frequency information from the FKP. The information of the horizontal subbands and the content information of the Gabor representations are both utilized to build the FKP template, which is stored for verification. The experimental results show that wavelet families along with Gabor filtering give a best FKP recognition rate of 96.60 %.

Gaurav Verma, Aloka Sinha
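An illustrative feature-extraction sketch combining wavelet horizontal subbands and Gabor responses for an FKP image; the wavelet family, filter parameters and pooling below are assumptions, not the authors' settings.

```python
import cv2
import numpy as np
import pywt

def fkp_features(gray):
    # Wavelet part: mean absolute energy of the horizontal-detail subbands
    coeffs = pywt.wavedec2(gray.astype(np.float64), "db4", level=2)
    horiz = [detail[0] for detail in coeffs[1:]]          # (cH2, cH1)
    wave_feat = np.array([np.mean(np.abs(h)) for h in horiz])

    # Gabor part: mean magnitude response over four orientations
    gabor_feat = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
        gabor_feat.append(np.mean(np.abs(resp)))
    return np.hstack([wave_feat, gabor_feat])

img = np.random.randint(0, 256, (110, 220), dtype=np.uint8)  # stand-in FKP ROI
print(fkp_features(img))   # template vector that would be stored for matching
```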

Design of Advanced Correlation Filters for Finger Knuckle Print Authentication Systems

Biometric recognition systems use automated processes in which stored templates are matched against live biometric traits. Correlation filtering is a promising technique for such matching, in which the correlation output peaks at the location of the target in the correlation plane. Among all available biometrics, the finger knuckle print (FKP) has significant usable features that can be utilized for person identification and verification. Motivated by the utility of the FKP, this paper presents a new scheme for person authentication using FKP based on advanced correlation filters (ACF). Numerical experiments have been carried out on the PolyU FKP database to evaluate the performance of the designed filters. The obtained results demonstrate the effectiveness of the proposed scheme and show better discrimination between the genuine and imposter populations.

Gaurav Verma, Aloka Sinha

A Nonlinear Modified CONVEF-AD Based Approach for Low-Dose Sinogram Restoration

Preprocessing the noisy sinogram before reconstruction is an effective and efficient way to solve the low-dose X-ray computed tomography (CT) problem. The objective of this paper is to develop a low-dose CT image reconstruction method based on a statistical sinogram smoothing approach. The proposed method is cast into a variational framework and its solution is based on the minimization of an energy functional consisting of two terms, viz., a data fidelity term and a regularization term. The data fidelity term is obtained by minimizing the negative log likelihood of the signal-dependent Gaussian probability distribution which describes the noise in low-dose X-ray CT. The regularization term is a nonlinear CONvolutional Virtual Electric Field Anisotropic Diffusion (CONVEF-AD) based filter, which is an extension of the Perona–Malik (P–M) anisotropic diffusion filter. The main task of the regularization function is to address the ill-posedness of the solution of the first term. The proposed method is capable of dealing with both signal-dependent and signal-independent Gaussian noise, i.e., mixed noise. For experimentation, two different sinograms generated from test phantom images are used. The performance of the proposed method is compared with that of existing methods. The obtained results show that the proposed method outperforms many recent approaches and is capable of removing the mixed noise in low-dose X-ray CT.

Shailendra Tiwari, Rajeev Srivastava, K. V. Arya
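For context, a minimal sketch of plain Perona–Malik anisotropic diffusion, the building block that the CONVEF-AD regularizer above extends; this is generic P–M smoothing, not the authors' filter or their variational solver.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (np.roll wraps at the borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # exponential diffusivity g(|grad u|) = exp(-(|grad u| / kappa)^2)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

noisy = np.random.normal(100, 20, (64, 64))
print(perona_malik(noisy).std() < noisy.std())   # diffusion reduces variation
```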

System Design for Tackling Blind Curves

Driving through blind curves, especially in mountainous regions or on roads whose visibility is blocked by natural vegetation, buildings, or other structures, is a challenge because of the limited visibility. This paper addresses the problem by using a surveillance mechanism: stationary cameras installed on the road capture images from the blind side of the curve and provide the necessary information to drivers approaching from the other side, thus cautioning them about a possible collision. The paper proposes a method to tackle blind curves and studies it for various cases.

Sowndarya Lakshmi Sadasivam, J. Amudha

A Novel Visual Word Assignment Model for Content-Based Image Retrieval

The visual bag-of-words model has been applied in the recent past for content-based image retrieval. In this paper, we propose a novel assignment model of visual words for representing an image patch. In particular, a vector is used to represent an image patch, with its elements denoting the affinities of the patch to a set of closest/most influential visual words. We also introduce a dissimilarity measure, consisting of two terms, for comparing a pair of image patches. The first term captures the difference in affinities of the patches to the common set of influential visual words. The second term checks the number of visual words which influence only one of the two patches and penalizes the measure accordingly. Experimental results on the publicly available COIL-100 image database clearly demonstrate the superior performance of the proposed content-based image retrieval (CBIR) method over some similar existing approaches.

Anindita Mukherjee, Soman Chakraborty, Jaya Sil, Ananda S. Chowdhury

Online Support Vector Machine Based on Minimum Euclidean Distance

The present study develops an online support vector machine (SVM) based on minimum Euclidean distance (MED). We propose a MED support vector algorithm where the SVM model is initialized with a small amount of training data and test data are merged into the SVM model only for incorrect predictions. This method provides a simpler and more computationally efficient implementation, as it assigns previously computed support vector coefficients. To merge a test sample into the SVM model, we find the Euclidean distance between the test sample and the support vectors of the target class, and the coefficient of the MED support vector of the training class is assigned to the test sample. The proposed technique has been implemented on the benchmark MNIST dataset, where the SVM model is initialized with 20 K images and tested on 40 K images. The proposed online SVM technique gives an overall error rate of 1.69 %, compared with 7.70 % without online SVM. The overall performance of the developed system is stable and produces a smaller error rate.

Kalpana Dahiya, Vinod Kumar Chauhan, Anuj Sharma
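A conceptual sketch of the MED merge step as described: when a test sample is misclassified, find the closest support vector of its true class and reuse that vector's coefficient for the new sample. This mimics the stated idea on a small digits dataset; it is not the authors' implementation or the MNIST setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
X_train, y_train = X[:500], y[:500]          # small initial model
X_test, y_test = X[500:600], y[500:600]

model = SVC(kernel="linear").fit(X_train, y_train)
pred = model.predict(X_test)

merged = []
for x, true_label, p in zip(X_test, y_test, pred):
    if p == true_label:
        continue                              # correct prediction: nothing to merge
    # support vectors belonging to the true class
    sv_mask = y_train[model.support_] == true_label
    sv = model.support_vectors_[sv_mask]
    dists = np.linalg.norm(sv - x, axis=1)
    med_idx = int(np.argmin(dists))           # minimum-Euclidean-distance support vector
    coef = model.dual_coef_[:, sv_mask][:, med_idx]   # coefficients to reuse
    merged.append((x, true_label, coef))

print(f"{len(merged)} test samples would be merged into the model")
```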

Design and Development of 3-D Urban Geographical Information Retrieval Application Employing Only Open Source Instruments

Numerous 3-D GIS are being developed today that are both commercially and freely available. A multicity web-based 3-D GIS (named GLC3d) has been developed exclusively using open source and freely available software/packages. This paper presents the architecture, design, and overview of GLC3d. The open source tools QGIS, Cesium, and MySQL are employed to develop this application. QGIS is utilized for data preparation such as raster, vector, and subsidiary data. MySQL is utilized for database storage and data population. GLC3d is based on Cesium (a JavaScript library for creating 3-D globes and 2-D maps in a web browser) and WebGL (a JavaScript API for rendering interactive 3-D and 2-D computer graphics) to display a 3-D globe in the web browser. 3-D visualization of the urban/city geodata is presented on an interactive 3-D globe. Various kinds of city information are generated and incorporated into the 3-D WebGIS for data representation and decision making.

Ajaze Parvez Khan, Sudhir Porwal, Sangeeta Khare

A Textural Characterization of Coal SEM Images Using Functional Link Artificial Neural Network

In an absolute characterization trial, there is no substitute for the final subtyping of coal quality independent of chemical analysis. Petrology is a specialty that deals with understanding the essential characteristics of coal through appropriate chemical, morphological, or porosity analysis. Conventional analysis of coal by petrologists suffers from various shortcomings, such as inter-observer variation during screen analysis, differences between machines, long analysis times, the need for highly skilled operators, and fatigue. In chemical analysis, the use of conventional analyzers is expensive for the characterization process. Thus, image analysis serves as an effective automated characterization procedure for subtyping coal according to textural, morphological, color, and other features. Coal characterization is necessary for the proper utilization of coal in power generation, steel, and several manufacturing industries. In this paper, attempts are made to devise a methodology for the automated characterization and subclassification of different grades of coal samples using image processing and computational intelligence techniques.

Alpana, Subrajeet Mohapatra

Template-Based Automatic High-Speed Relighting of Faces

This paper presents a framework which estimates the surface normals and albedo of a human face using an average 3D face template and subsequently replaces the original lighting on the face with novel lighting in real time. The system uses a facial feature tracking algorithm, which locates the face and estimates its orientation. The 2D facial landmarks thus obtained are used to morph the template model to resemble the input face. The lighting conditions are approximated as a linear combination of spherical harmonic bases. A photometric refinement is applied to accurately estimate the surface normals and thus the surface albedo. A novel skin-mask construction algorithm is also used to restrict processing to the facial region of the input image. The face is relighted with novel illumination parameters. A novel, cost-effective, seamless blending operation is performed to achieve efficacious and realistic outputs. The process is fully automatic and executes in real time.

Ankit Jalan, Mynepalli Siva Chaitanya, Arko Sabui, Abhijeet Singh, Viswanath Veera, Shankar M. Venkatesan

An Improved Contextual Information Based Approach for Anomaly Detection via Adaptive Inference for Surveillance Application

Anomalous event detection is the foremost objective of a visual surveillance system. Using contextual information and probabilistic inference mechanisms is a recent trend in this direction. The proposed method is an improved version of the Spatio-Temporal Compositions (STC) concept introduced earlier. Specific modifications are applied to the STC method to reduce time complexity and improve performance. The non-overlapping volume and ensemble formation employed reduce the iterations in the codebook construction and probabilistic modeling steps. A simpler procedure for codebook construction has been proposed. A non-parametric probabilistic model and adaptive inference mechanisms that avoid the use of a single experimental threshold value are the other contributions. An additional feature, event-driven high-resolution localization of unusual events, is incorporated to aid surveillance applications. The proposed method produced promising results compared to STC and other state-of-the-art approaches in experiments on seven standard datasets with simple/complex actions, in non-crowded/crowded environments.

T. J. Narendra Rao, G. N. Girish, Jeny Rajan

A Novel Approach of an (n, n) Multi-Secret Image Sharing Scheme Using Additive Modulo

A secret sharing scheme (SSS) is an efficient method of transmitting one or more secret images securely. A conventional SSS shares one secret image among n participants. With the advancement of time, there arises a need for sharing multiple secret images. An (n, n) Multi-Secret Image Sharing (MSIS) scheme is used to encrypt n secret images into n meaningless shared images stored on different database servers. For recovery of the secrets, all n shared images are required. In earlier work, n secret images are shared among n or n+1 shared images, which has the problem that one can recover fractional information from fewer than n shared images. Therefore, a more efficient and secure (n, n)-MSIS scheme is needed so that fewer than n shared images do not reveal fractional information. In this paper, we propose an (n, n)-MSIS scheme using additive modulo and reverse bit operations for binary, grayscale, and color images. The experimental results report that the proposed scheme requires minimal computation time for encryption and decryption. For quantitative analysis, Peak Signal to Noise Ratio (PSNR), correlation, and Root Mean Square Error (RMSE) are used. The proposed (n, n)-MSIS scheme outperforms the existing state-of-the-art techniques.

Maroti Deshmukh, Neeta Nain, Mushtaq Ahmed
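A toy sketch of (n, n) additive-modulo sharing of a single 8-bit image, the basic primitive that such MSIS schemes build on; the paper's exact multi-secret construction (including the reverse-bit operation) is not reproduced here.

```python
import numpy as np

def make_shares(secret, n, rng):
    """n-1 random shares plus one share that completes the modulo-256 sum."""
    shares = [rng.integers(0, 256, secret.shape, dtype=np.uint16) for _ in range(n - 1)]
    last = (secret.astype(np.uint16) - sum(shares)) % 256
    shares.append(last)
    return [s.astype(np.uint8) for s in shares]

def reconstruct(shares):
    return (sum(s.astype(np.uint16) for s in shares) % 256).astype(np.uint8)

rng = np.random.default_rng(42)
secret = rng.integers(0, 256, (4, 4), dtype=np.uint8)
shares = make_shares(secret, n=4, rng=rng)
assert np.array_equal(reconstruct(shares), secret)   # all n shares recover the secret
print(reconstruct(shares[:3]))                        # any n-1 shares look random
```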

Scheimpflug Camera Calibration Using Lens Distortion Model

The Scheimpflug principle requires that the image sensor, lens, and object planes intersect in a single line known as the “Scheimpflug line.” This principle has been employed in several applications to increase the depth of field needed for accurate dimensional measurements. In order to provide a metric 3D reconstruction, we need to perform an accurate camera calibration. However, the pinhole model assumptions used by classical camera calibration techniques are no longer valid for a Scheimpflug setup. In this paper, we present a new intrinsic calibration technique using bundle adjustment. We elaborate the Scheimpflug image formation model and show how we can deal with the introduced Scheimpflug distortions without the need to estimate their angles. The proposed method is based on the use of optical lens distortion models. An experimental comparison of the proposed approach with the Scheimpflug model has been made on real industrial data sets in the presence of large distortions. We show a slight improvement brought by the proposed method.

Peter Fasogbon, Luc Duvieubourg, Ludovic Macaire

Microscopic Image Classification Using DCT for the Detection of Acute Lymphoblastic Leukemia (ALL)

The development of a computer-aided diagnosis (CAD) system for early detection of leukemia is essential for medical practice. In recent years, a variety of CAD systems have been proposed for the detection of leukemia. Acute leukemia is a malignant neoplastic disorder that affects a large fraction of the world population. In modern medical science, there are newly formulated methodologies for the early detection of leukemia; such advanced technologies include medical image processing methods for detecting the syndrome. This paper shows that a highly appropriate feature extraction technique is required for the classification of the disease. In the field of image processing and machine learning, the Discrete Cosine Transform (DCT) is a well-known technique. Nucleus features are extracted from the RGB image. The proposed method provides an opportunity to fine-tune the accuracy of disease detection. Experimental results using a publicly available dataset, ALL-IDB, show the superiority of the proposed method with an SVM classifier compared with some other standard classifiers.

Sonali Mishra, Lokesh Sharma, Bansidhar Majhi, Pankaj Kumar Sa
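A hedged sketch of DCT-based feature extraction feeding an SVM; patch size, the number of retained coefficients and the labels below are illustrative assumptions, not the authors' pipeline or the ALL-IDB data.

```python
import numpy as np
from scipy.fft import dctn            # scipy >= 1.4
from sklearn.svm import SVC

def dct_features(nucleus_patch, k=8):
    """2-D DCT of a grayscale nucleus patch, keeping the k x k low-frequency block."""
    c = dctn(nucleus_patch.astype(np.float64), norm="ortho")
    return c[:k, :k].ravel()

rng = np.random.default_rng(7)
patches = rng.integers(0, 256, size=(60, 32, 32))   # stand-ins for segmented nuclei
labels = rng.integers(0, 2, size=60)                # 0 = healthy, 1 = ALL (toy labels)

X = np.array([dct_features(p) for p in patches])
clf = SVC(kernel="rbf", C=10).fit(X, labels)
print(clf.score(X, labels))
```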

Robust Image Hashing Technique for Content Authentication based on DWT

This paper presents an image hashing technique for content verification using Discrete Wavelet Transform (DWT) approximation features. The proposed technique converts resized RGB color images to L*a*b* color images. Further, the images are regularized using a Gaussian low pass filter. A level-2 2D DWT is applied on the L* component of the L*a*b* color image, and the LL2 approximation sub-band image is chosen for feature extraction. The features are extracted by utilizing a sequence of circles on the approximation sub-band image. Finally, the robust binary hash is generated from the extracted features. The experimental results indicate that the hash of the presented technique is invariant to standard content-preserving manipulations and sensitive to malicious content-altering operations. The Receiver Operating Characteristic (ROC) plots indicate that the presented technique shows strong discriminative and robustness capabilities. Besides, the hash of the proposed technique is shorter in length and key dependent.

Lokanadham Naidu Vadlamudi, Rama Prasad V. Vaddella, Vasumathi Devara
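A sketch of the hashing pipeline outlined above (resize, L*a*b* conversion, Gaussian smoothing, level-2 DWT on L*, binarization of the LL2 subband); the circle-based feature sampling of the paper is simplified here to a median threshold, so this is illustrative only.

```python
import cv2
import numpy as np
import pywt

def dwt_hash(bgr_img, size=256):
    img = cv2.resize(bgr_img, (size, size))
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    L = cv2.GaussianBlur(lab[:, :, 0].astype(np.float32), (5, 5), 1.0)
    LL2, *_ = pywt.wavedec2(L, "haar", level=2)      # level-2 approximation subband
    bits = (LL2 > np.median(LL2)).astype(np.uint8)   # simple binary hash
    return bits.ravel()

img = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)
h1 = dwt_hash(img)
h2 = dwt_hash(cv2.GaussianBlur(img, (3, 3), 0.8))    # content-preserving change
print("bit agreement:", np.mean(h1 == h2))
```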

Robust Parametric Twin Support Vector Machine and Its Application in Human Activity Recognition

This paper proposes a novel and Robust Parametric Twin Support Vector Machine (RPTWSVM) classifier to deal with the heteroscedastic noise present in the human activity recognition framework. Unlike Par-ν-SVM, RPTWSVM solves two optimization problems, each of which deals with the structural information of the corresponding class in order to control the effect of heteroscedastic noise on the generalization ability of the classifier. Further, the hyperplanes so obtained adjust themselves in order to maximize the parametric insensitive margin. The efficacy of the proposed framework has been evaluated on standard UCI benchmark datasets. Moreover, we investigate the performance of RPTWSVM on the human activity recognition problem. The effectiveness and practicability of the proposed algorithm are supported by the experimental results.

Reshma Khemchandani, Sweta Sharma

Separating Indic Scripts with ‘matra’—A Precursor to Script Identification in Multi-script Documents

Here, we present a new technique for separating Indic scripts based on the matra (or shirorekha), where an optimized fractal geometry analysis (FGA) is used as the sole pertinent feature. Separating scripts that have a matra from those that do not can serve as a precursor to ease the subsequent script identification process. In our work, we consider two matra-based scripts, namely Bangla and Devanagari, as positive samples, and the counter samples are obtained from two other scripts, namely Roman and Urdu. Altogether, we took 1204 document images with a distribution of 525 matra-based (325 Bangla and 200 Devanagari) and 679 non-matra-based (370 Roman and 309 Urdu) scripts. For experimentation, we have used three different classifiers, multilayer perceptron (MLP), random forest (RF), and BayesNet (BN), with the target of selecting the best performer. From a series of tests, we achieved an average accuracy of 96.44 % with the MLP classifier.

Sk.Md. Obaidullah, Chitrita Goswami, K. C. Santosh, Chayan Halder, Nibaran Das, Kaushik Roy

Efficient Multimodal Biometric Feature Fusion Using Block Sum and Minutiae Techniques

Biometrics are widely used for identifying a person in different areas such as security zones, border crossings, airports, automatic teller machines, passports, and criminal verification. Currently, most deployed biometric systems use a single biometric trait for recognition. However, unimodal biometric systems have several limitations, such as noise in sensed data, non-universality, higher error rates, and lower recognition rates. These issues can be handled by designing a multimodal biometric system. This paper proposes a novel feature-level fusion technique based on a distance metric to improve both recognition rate and response time. The algorithm is based on textural features extracted from the iris using the block sum method and from the fingerprint using the minutiae method. The performance of the proposed algorithms has been validated and compared with other algorithms using the CASIA Version 3 iris database and the YCCE fingerprint database.

Ujwalla Gawande, Kamal Hajari, Yogesh Golhar

Video Synopsis for IR Imagery Considering Video as a 3D Data Cuboid

Video synopsis is a way to transform a recorded video into a temporally compact representation. Surveillance videos generally contain a huge amount of recorded data with many inherent spatio-temporal redundancies in the form of segments having no activity; browsing and retrieving such huge data has always been an inconvenient job. We present an approach to video synopsis for IR imagery in which the considered video is mapped into a temporally compact and chronologically analogous form by significantly removing these inherent spatio-temporal redundancies. A group of frames of the video sequence is taken to form a 3D data cuboid with X, Y and T axes, and this cuboid is re-represented as a stack of contiguous X–T slices. With the help of Canny edge detection and Hough transform-based line detection, the contents of these slices are analysed and segments having spatio-temporal redundancy are eliminated. Hence, the recorded video is dynamically summarized on the basis of its content.

Nikhil Kumar, Ashish Kumar, Neeta Kandpal
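A sketch of the X–T slice idea on a toy cuboid: stack one image row across all frames, then run Canny edge detection and a probabilistic Hough line search on the slice. The thresholds and decision rule below are assumptions, not the authors' parameters.

```python
import cv2
import numpy as np

def xt_slice(frames, row):
    """frames: (T, H, W) uint8 cuboid; returns the (T, W) slice at a fixed row."""
    return frames[:, row, :]

def slice_has_activity(slice_img, min_line_len=15):
    edges = cv2.Canny(slice_img, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=20,
                            minLineLength=min_line_len, maxLineGap=3)
    return lines is not None          # slanted lines indicate moving objects

cuboid = np.random.randint(0, 256, (120, 240, 320), dtype=np.uint8)  # toy IR video
s = xt_slice(cuboid, row=120)
print("activity in slice:", slice_has_activity(s))
```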

Performance Analysis of Texture Image Retrieval in Curvelet, Contourlet, and Local Ternary Pattern Using DNN and ELM Classifiers for MRI Brain Tumor Images

The problem of searching for a digital image in a very large database is called content-based image retrieval (CBIR). Texture represents spatial or statistical repetition in pixel intensity and orientation. A brain tumor forms when abnormal cells grow within the brain. In this paper, we have developed texture feature extraction for MRI brain tumor image retrieval. There are two parts, namely feature extraction and classification. First, the texture features are extracted using techniques such as the curvelet transform, the contourlet transform, and the Local Ternary Pattern (LTP). Second, supervised learning algorithms, namely the Deep Neural Network (DNN) and the Extreme Learning Machine (ELM), are used to classify the brain tumor images. The experiment is performed on a collection of 1000 brain tumor images with different modalities and orientations. Experimental results reveal that the contourlet transform provides better retrieval performance than the curvelet transform and the local ternary pattern.

A. Anbarasa Pandian, R. Balasubramanian

ROI Segmentation from Brain MR Images with a Fast Multilevel Thresholding

A novel region of interest (ROI) segmentation for detection of Glioblastoma multiforme (GBM) tumor in magnetic resonance (MR) images of the brain is proposed using a two-stage thresholding method. We have defined multiple intervals for multilevel thresholding using a novel meta-heuristic optimization technique called Discrete Curve Evolution. In each of these intervals, a threshold is selected by bi-level Otsu’s method. Then the ROI is extracted from only a single seed initialization, on the ROI, by the user. The proposed segmentation technique is more accurate as compared to the existing methods. Also the time complexity of our method is very low. The experimental evaluation is provided on contrast-enhanced T1-weighted MRI slices of three patients, having the corresponding ground truth of the tumor regions. The performance measure, based on Jaccard and Dice indices, of the segmented ROI demonstrated higher accuracy than existing methods.

Subhashis Banerjee, Sushmita Mitra, B. Uma Shankar

Surveillance Scene Segmentation Based on Trajectory Classification Using Supervised Learning

Scene understanding plays a vital role in the field of visual surveillance and security, where we aim to classify surveillance scenes based on two important cues, namely the scene's layout and the activities or motions within the scene. In this paper, we propose a supervised learning-based novel algorithm to segment surveillance scenes with the help of high-level features extracted from object trajectories. High-level features are computed using a recently proposed nonoverlapping block-based representation of the surveillance scene. We have trained a Hidden Markov Model (HMM) to learn parameters describing the dynamics of a given surveillance scene. Experiments have been carried out using publicly available datasets, and the outcomes suggest that the proposed methodology can deliver encouraging results for correctly segmenting surveillance scenes with the help of motion trajectories. We have compared the method with state-of-the-art techniques. It has been observed that our proposed method outperforms baseline algorithms in various contexts such as localization of frequently accessed paths, marking abandoned or inaccessible locations, etc.

Rajkumar Saini, Arif Ahmed, Debi Prosad Dogra, Partha Pratim Roy

Classification of Object Trajectories Represented by High-Level Features Using Unsupervised Learning

Object motion trajectory classification is an important task, often used to detect abnormal movement patterns so that appropriate actions can be taken to prevent unwanted events. Given a set of trajectories recorded over a period of time, they can be clustered to understand the usual flow of movement or to detect unusual flow. Automatic traffic management, visual surveillance, behavioral understanding, and sports or scientific video analysis are some typical applications that benefit from clustering object trajectories. In this paper, we propose an unsupervised way of clustering object trajectories to filter out movements that deviate largely from the usual patterns. A scene is divided into nonoverlapping rectangular blocks and the importance of each block is estimated. Two statistical parameters that closely describe the dynamics of each block are estimated. Next, these high-level features are used to cluster the set of trajectories using the k-means clustering technique. Experimental results using public datasets reveal that our proposed method can categorize object trajectories with higher accuracy than clustering on raw trajectory data or grouping with a complex method such as spectral clustering.

Rajkumar Saini, Arif Ahmed, Debi Prosad Dogra, Partha Pratim Roy

A Hybrid Method for Image Categorization Using Shape Descriptors and Histogram of Oriented Gradients

Image categorization is the process of classifying all pixels of an image into one of several classes. In this paper, we propose a novel vision-based method for image categorization that is invariant to affine transformations and robust to cluttered backgrounds. The proposed methodology consists of three phases: segmentation, feature extraction, and classification. In segmentation, an object of interest is segmented from the image. Features representing the image are extracted in the feature extraction phase. Finally, the image is classified using a multi-class support vector machine. The main advantage of this method is that it is simple and computationally efficient. We have tested the performance of the proposed system on the Caltech 101 object category dataset and report a recognition accuracy of 76.14 %.

Subhash Chand Agrawal, Anand Singh Jalal, Rajesh Kumar Tripathi
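A hedged sketch of the feature-extraction and classification phases: HOG descriptors fed to a multi-class SVM. The segmentation phase is omitted and the data below are random placeholders, not Caltech 101 images.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(gray):
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

rng = np.random.default_rng(3)
images = rng.integers(0, 256, size=(30, 64, 64)).astype(np.uint8)
labels = rng.integers(0, 3, size=30)                  # three toy categories

X = np.array([hog_descriptor(im) for im in images])
clf = SVC(kernel="linear", decision_function_shape="ovr").fit(X, labels)
print(clf.predict(X[:5]))
```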

Local Binary Pattern and Its Variants for Target Recognition in Infrared Imagery

In this research work, a local binary pattern (LBP)-based automatic target recognition system is proposed for classification of various categories of moving civilian targets using their infrared image signatures. Target recognition in infrared images is demanding owing to large variations in target signature and limited target-to-background contrast. This demands robust features/descriptors which can represent possible variations of the target category with minimal intra-class variance. LBP, a simple yet efficient texture operator initially proposed for texture recognition, has of late gained popularity in face and object recognition applications. In this work, the suitability of LBP and two of its variants, the local ternary pattern (LTP) and the complete local binary pattern (CLBP), for recognition in infrared images has been evaluated. The performance of the method is validated with target clips obtained from the ‘CSIR-CSIO moving object thermal infrared imagery dataset’. The number of classes is four: three target classes (Ambassador, Auto and Pedestrian) and one class representing the background. Classification accuracies of 89.48 %, 100 % and 100 % were obtained for LBP, LTP and CLBP, respectively. The results indicate the suitability of the LBP operator for target recognition in infrared images.

Aparna Akula, Ripul Ghosh, Satish Kumar, H. K. Sardana
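A minimal LBP-histogram feature sketch for a target chip, using the uniform LBP from scikit-image; the LTP and CLBP variants evaluated in the paper are not shown, and the chip below is a random stand-in.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                   # uniform patterns + "other"
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

chip = np.random.randint(0, 256, (40, 40)).astype(np.uint8)  # toy IR target chip
print(lbp_histogram(chip))   # descriptor that a classifier would consume
```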

Applicability of Self-Organizing Maps in Content-Based Image Classification

Image databases are getting larger and more diverse with the advent of new imaging devices and advancements in technology. Content-based image classification (CBIC) is a method to classify images from large databases into different categories on the basis of image content. An efficient image representation is an important component of a CBIC system. In this paper, we demonstrate that Self-Organizing Map (SOM)-based clustering can be used to form an efficient representation of an image for a CBIC system. The proposed method first extracts Scale-Invariant Feature Transform (SIFT) features from images. Then it uses SOM for clustering the descriptors and forming a Bag of Features (BOF) or Vector of Locally Aggregated Descriptors (VLAD) representation of the image. The performance of the proposed method has been compared with systems using k-means clustering for forming VLAD or BOF representations of an image. The classification performance of the proposed method is found to be better in terms of F-measure (FM) value and execution time.

Kumar Rohit, R. K. Sai Subrahmanyam Gorthi, Deepak Mishra

Road Surface Classification Using Texture Synthesis Based on Gray-Level Co-occurrence Matrix

Advanced Driver Assistance Systems (ADAS) have been a growing area of interest in the automotive research community, where scene understanding and modeling is one of the principal focus areas. Texture synthesis using the gray-level co-occurrence matrix (GLCM) of a rigid body is a well-established task in image processing. Here the method is additionally used for texture characterization and applied to road surface classification, which is the primary focus of this paper. We also introduce GLCM-based road surface analysis in a line-scan manner that can be used as a module for ADAS applications.

Somnath Mukherjee, Saurabh Pandey

Electroencephalography-Based Emotion Recognition Using Gray-Level Co-occurrence Matrix Features

Emotions are essential for our day-to-day activities such as communication, decision-making and learning. Electroencephalography (EEG) is a non-invasive method to record the electrical activity of the brain. To make Human–Machine Interaction (HMI) more natural, human emotion recognition is important. Over the past decade, various signal processing methods have been used for analysing EEG-based emotion recognition (ER). This paper proposes a novel technique for ER using Gray-Level Co-occurrence Matrix (GLCM)-based features. The features are validated on the benchmark DEAP database for up to four emotions and classified using a K-nearest neighbor (K-NN) classifier.

Narendra Jadhav, Ramchandra Manthalkar, Yashwant Joshi

Quick Reaction Target Acquisition and Tracking System

The most relevant application of visual tracking is in the field of surveillance and defense, where precision tracking of a target with minimal reaction time is of prime importance. This paper examines an approach for reducing the reaction time for target acquisition. An algorithm for automatic detection of potential targets under dynamic backgrounds has been proposed. Also presented are the design considerations for the visual tracking and control system configuration to achieve very fast response with proper transient behavior for a cued target position, which ultimately leads to an integrated quick-response visual tracking system.

Zahir Ahmed Ansari, M. J. Nigam, Avnish Kumar

Low-Complexity Nonrigid Image Registration Using Feature-Based Diffeomorphic Log-Demons

Traditional hybrid-type nonrigid registration algorithms use affine transformations or class-specific distortion parameters for global matching, assuming linear-type deformations in images. In order to handle generalized and nonlinear-type deformations, this paper presents a feature-based global matching algorithm applied prior to local matching. In particular, control points in the images are identified globally by well-known robust features such as SIFT, SURF, or ASIFT, and interpolation is carried out by a low-complexity orthogonal polynomial transformation. The local matching is performed using the diffeomorphic Demons, a well-established intensity-based registration method. Experiments are carried out on synthetic distortions such as spherical, barrel, and pin-cushion in commonly referred images as well as on real-life distortions in medical images. Results reveal that the proposed introduction of feature-based global matching significantly improves registration performance in terms of residual errors, computational complexity, and visual quality compared to existing methods including the log-Demons itself.

Md. Azim Ullah, S. M. Mahbubur Rahman

Spotting of Keyword Directly in Run-Length Compressed Documents

With the rapid growth of digital libraries, e-governance and Internet applications, huge volumes of documents are being generated, communicated and archived in compressed form to provide better storage and transfer efficiency. In such a large repository of compressed documents, frequently used operations like keyword searching and document retrieval have to be carried out after decompression and subsequently with the help of an OCR. Therefore, developing keyword spotting techniques that work directly on compressed documents is a potential and challenging research issue. Against this backdrop, the paper presents a novel approach for searching keywords directly in run-length compressed documents without going through the stages of decompression and OCRing. The proposed method extracts simple and straightforward font-size-invariant features, such as the number of run transitions and the correlation of runs over selected regions of test words, and matches them with those of the user-queried word. In the subsequent step, based on the matching score, the keywords are spotted in the compressed document. The idea of decompression-less and OCR-less word spotting directly in compressed documents is the major contribution of this paper. The method is experimented on a data set of compressed documents and the preliminary results obtained validate the proposed idea.

Mohammed Javed, P. Nagabhushan, Bidyut Baran Chaudhuri

Design and Implementation of a Real-Time Autofocus Algorithm for Thermal Imagers

Good image quality is the most important requirement of a thermal imager or any other imaging system in almost all applications. The degree of focus in an image plays a very important role in determining image quality, thus a focusing mechanism is a very important requirement in thermal imagers. A real-time and reliable passive autofocus algorithm has been developed and implemented in FPGA-based hardware. This autofocus module has been integrated with the video processing pipeline of thermal imagers. Prior to the hardware implementation, different algorithms for image sharpness evaluation were implemented in MATLAB, and simulations were done with test video sequences acquired by a thermal imager with motorized focus control to analyze the algorithms' efficiency. A cumulative gradient algorithm has been developed for image sharpness evaluation. The algorithm has been tested on images taken from a thermal imager under varying contrast and background conditions, and it shows high precision and good discriminating power. The images are prefiltered by a median rank-order filter using a 3 × 3 window to make the method more robust to noisy images. The complete autofocus design, comprising a frame acquisition module for acquiring a user-selectable central region of the incoming thermal imager video, a cumulative-gradient-based image sharpness evaluation module, a fixed-step-size focal plane search module, and a motor pulse generation module for generating motor drives, has been implemented on the Xilinx FPGA device XC4VLX100 using the Xilinx ISE EDA tool.

Anurag Kumar Srivastava, Neeta Kandpal
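A sketch of a gradient-based sharpness score with a 3 × 3 median prefilter and a fixed-step focus search, in the spirit of the cumulative-gradient measure described above; the toy lens model and all parameters are illustrative, not the FPGA implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def sharpness(gray):
    g = median_filter(gray.astype(np.float64), size=3)   # 3x3 rank-order prefilter
    gy, gx = np.gradient(g)
    return np.sum(np.abs(gx) + np.abs(gy))                # cumulative gradient

def autofocus(capture_at, positions):
    """capture_at(pos) -> central ROI frame; returns the position with maximum sharpness."""
    scores = [sharpness(capture_at(p)) for p in positions]
    return positions[int(np.argmax(scores))]

# toy lens model: frames get blurrier away from the true focus position 12
scene = np.random.randint(0, 256, (128, 128)).astype(float)

def capture(p):
    return gaussian_filter(scene, sigma=0.3 * abs(p - 12) + 1e-2)

print(autofocus(capture, positions=list(range(0, 25, 2))))
```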

Parameter Free Clustering Approach for Event Summarization in Videos

Digital videos are nowadays becoming more common in various fields like education, entertainment, etc. due to increased computational power and electronic storage capacity. With the increasing size of video collections, a technology is needed to effectively and efficiently browse a video without losing its contents. The user may not always have sufficient time to watch the entire video, or the entire content of the video may not be of interest to the user. In such cases, the user may just want to go through a summary of the video instead of watching it entirely. In this paper we propose an approach for event summarization in videos based on a clustering method. Our proposed method provides a set of key frames as a summary of a video. The key frames which are closest to the cluster heads of the optimal clustering are combined to form the summarized video. The evaluation of the proposed model is done on a publicly available dataset and compared with ten state-of-the-art models in terms of precision, recall and F-measure. The experimental results demonstrate that the proposed model outperforms the rest in terms of F-measure.

Deepak Kumar Mishra, Navjot Singh

Connected Operators for Non-text Object Segmentation in Grayscale Document Images

This paper presents an unconventional method of segmenting nontext objects directly from a grayscale document image by making use of connected operators, combined with the Otsu thresholding method, where the connected operators are realized as a maxtree. The maxtree structure is used for a simplification of the image and at a later stage, it is used as a structure from which to extract the desired objects. The proposed solution is aimed at segmenting halftone images, tables, line drawings, and graphs. The solution has been evaluated and some of its shortcomings are highlighted. To the best of our knowledge, this is the first attempt at using connected operators for page segmentation.

Sheshera Mysore, Manish Kumar Gupta, Swapnil Belhe

Non-regularized State Preserving Extreme Learning Machine for Natural Scene Classification

Scene classification remains a challenging task in computer vision applications due to a wide range of intraclass and interclass variations. A robust feature extraction technique and an effective classifier are required to achieve satisfactory recognition performance. Herein, we propose a nonregularized state preserving extreme learning machine (NSPELM) to perform scene classification tasks. We employ a Bag-of-Words (BoW) model for feature extraction prior to performing the classification task. The BoW feature is obtained based on a regular grid method for point selection and Speeded Up Robust Features (SURF) technique for feature extraction on the selected points. The performance of NSPELM is tested and evaluated on three standard scene category classification datasets. The recognition accuracy is compared with the standard extreme learning machine classifier and it shows that the proposed NSPELM algorithm yields better accuracy.

Paheding Sidike, Md. Zahangir Alom, Vijayan K. Asari, Tarek M. Taha
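For reference, a minimal standard extreme learning machine of the kind the paper compares against (random hidden weights, least-squares output weights); NSPELM's state-preserving modification is not reproduced, and the data below are random stand-ins for BoW feature vectors.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                            # one-hot targets
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))    # sigmoid hidden layer
        self.beta = np.linalg.pinv(H) @ T                   # least-squares output weights
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return np.argmax(H @ self.beta, axis=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((150, 500))          # stand-ins for BoW-SURF histograms
y = rng.integers(0, 3, size=150)             # three toy scene categories
print((ELM().fit(X, y).predict(X) == y).mean())
```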

A Local Correlation and Directive Contrast Based Image Fusion

In this paper, a local correlation and directive contrast-based multi-focus image fusion technique is proposed. The proposed fusion method consists of two parts. In the first part, the Discrete Wavelet Packet Transform (DWPT) is performed over the source images, by which low and high frequency coefficients are obtained. In the second part, these transformed coefficients are fused using a local correlation and directive contrast-based approach. The performance of the proposed method is tested on several pairs of multi-focus images and compared with a few existing methods. The experimental results demonstrate that the proposed method provides better results than other existing methods.

Sonam, Manoj Kumar

Multi-exposure Image Fusion Using Propagated Image Filtering

Image fusion is the process of combining multiple images of the same scene into a single high-quality image which has more information than any of the input images. In this paper, we propose a new fusion approach in the spatial domain using the propagated image filter. The proposed approach calculates the weight map of every input image using the propagated image filter and gradient-domain postprocessing. The propagated image filter exploits a cumulative weight construction approach for the filtering operation. We show that the proposed approach achieves state-of-the-art results for multi-exposure fusion of various types of indoor and outdoor natural static scenes with varying amounts of dynamic range.

Diptiben Patel, Bhoomika Sonane, Shanmuganathan Raman

Tone Mapping HDR Images Using Local Texture and Brightness Measures

The process of adapting the dynamic range of a real-world scene or a photograph in a controlled manner to suit the lower dynamic range of display devices is called tone mapping. In this paper, we present a novel local tone mapping technique for high-dynamic range (HDR) images taking texture and brightness as cues. We make use of bilateral filtering to obtain the base and detail layers of the luminance component. In our proposed approach, we weight the base layer using a local-to-global brightness ratio and a texture estimator, and then combine it with the detail layer to get the tone mapped image. To see the difference in contrast between the original HDR image and the tone mapped image using our model, we make use of an online dynamic range (in)dependent metric. We present our results, compare them with other tone mapping algorithms, and demonstrate that our model is better suited to compress the dynamic range of HDR images while preserving visibility and information with minimal artifacts.

Akshay Gadi Patil, Shanmuganathan Raman
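A sketch of the base/detail tone-mapping skeleton described above: bilateral filtering of log luminance, base-layer compression, detail restored. The texture/brightness weighting of the paper is reduced here to a single global scale factor, and the HDR input is a random stand-in.

```python
import cv2
import numpy as np

def simple_tonemap(hdr_rgb, compression=0.4):
    lum = np.maximum(hdr_rgb.mean(axis=2), 1e-6)
    log_lum = np.log10(lum).astype(np.float32)
    base = cv2.bilateralFilter(log_lum, d=9, sigmaColor=0.4, sigmaSpace=8)
    detail = log_lum - base
    new_log_lum = compression * base + detail          # compress base, keep detail
    new_lum = 10.0 ** new_log_lum
    out = hdr_rgb * (new_lum / lum)[..., None]         # rescale colour channels
    return np.clip(out / out.max(), 0, 1)

hdr = np.random.rand(120, 160, 3).astype(np.float32) * 1000.0   # toy radiance map
ldr = simple_tonemap(hdr)
print(ldr.min(), ldr.max())
```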

Pre- and Post-fingerprint Skeleton Enhancement for Minutiae Extraction

Automatic personal identification system by extracting minutiae points from the thinned fingerprint image is one of the popular methods in a biometric system based on fingerprint. Due to various structural deformations, extracted minutiae points from a skeletonized fingerprint image may contain a large number of false minutiae points. This largely affects the overall matching performance of the system. The solution is to validate the minutiae points extracted and to select only true minutiae points for the subsequent matching process. This paper proposes several pre- and post-processing techniques which are used to enhance the fingerprint skeleton image by detecting and canceling the false minutiae points in the fingerprint image. The proposed method is tested on FVC2002 standard dataset and the experimental results show that the proposed techniques can remove false minutiae points.

Geevar C. Zacharias, Madhu S. Nair, P. Sojan Lal

Content Aware Image Size Reduction Using Low Energy Maps for Reduced Distortion

Images are often viewed on devices with different resolutions, which requires image resizing. Resizing often affects the quality of the images. To better resize images to different resolutions, content-aware image resizing should be done so that important features are preserved. Seam carving is one such content-aware image resizing technique. In this work seam carving is used to downsize an image. This is achieved by carving out an optimal seam (either vertical or horizontal) which contains little information. Each seam removes a pixel from every row (or column) to reduce the height (or width) of the image. To prevent distortion resulting from uniform seam carving, we propose an algorithm that uses a new energy gradient function. In this method, the minimum of three neighboring pixels is calculated in both the energy map and the cumulative map, and these values are added to find the value of the pixel in the new cost matrix.

Pooja Solanki, Charul Bhatnagar, Anand Singh Jalal, Manoj Kumar
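A sketch of the standard cumulative-cost construction used in seam carving: each pixel adds the minimum of its three upper neighbours, and a vertical seam is traced back along the minima. The modified energy function proposed in the paper is replaced here by a plain gradient-magnitude energy.

```python
import numpy as np

def cumulative_map(energy):
    M = energy.astype(np.float64).copy()
    h, w = M.shape
    for i in range(1, h):
        left = np.roll(M[i - 1], 1);  left[0] = np.inf
        right = np.roll(M[i - 1], -1); right[-1] = np.inf
        M[i] += np.minimum(np.minimum(left, M[i - 1]), right)
    return M

def find_vertical_seam(energy):
    M = cumulative_map(energy)
    seam = [int(np.argmin(M[-1]))]
    for i in range(M.shape[0] - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, M.shape[1])
        seam.append(lo + int(np.argmin(M[i, lo:hi])))
    return seam[::-1]                 # one column index per row, top to bottom

img = np.random.rand(60, 80)
gy, gx = np.gradient(img)
energy = np.abs(gx) + np.abs(gy)
print(len(find_vertical_seam(energy)))   # equals the number of rows
```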

Artificial Immune Hybrid Photo Album Classifier

Personal photo collections are becoming significant in our day-to-day lives. The challenge is to precisely intuit a user's complex and transient interests and to accordingly develop an adaptive, automated, personalized photo management system which efficiently manages and organizes personal photos. This is increasingly gaining importance, as it will be required to browse, search and retrieve relevant information efficiently from personal collections which may span many years. Significance and relevance for the user may also undergo temporal and crucial shifts which need to be continually logged to generate patterns. The cloud paradigm makes available the basic platform, but a system needs to be built wherein a personalized service with the ability to capture diversity is guaranteed even when the training data size is small. An Artificial Immune Hybrid Photo Album Classifier (AIHPAC) is proposed using the nonlinear biological properties of human immune systems. The system performs event-based clustering for an individual with embedded feature selection. The model is self-learning and self-evolving. The efficacy of the proposed method is demonstrated by the experimental results.

Vandna Bhalla, Santanu Chaudhury

Crowd Disaster Avoidance System (CDAS) by Deep Learning Using eXtended Center Symmetric Local Binary Pattern (XCS-LBP) Texture Features

In order to avoid crowd disasters in public gatherings, this paper aims to develop an efficient algorithm that works well in both indoor and outdoor scenes to give an early warning message automatically. It also deals with high-density crowds and suddenly changing illumination. To address this problem, first, XCS-LBP (eXtended Center Symmetric Local Binary Pattern) features, which work well under sudden illumination changes, are extracted. Subsequently, these features are trained using a deep Convolutional Neural Network (CNN) for crowd counting. Finally, a warning message is displayed to the authority if the people count exceeds a certain limit, in order to avoid a crowd disaster in advance. Benchmark datasets such as PETS2009, UCSD and UCF_CC_50 have been used for experimentation. The performance measures MSE (Mean Square Error), MESA (Maximum Excess over Sub Arrays) and MAE (Mean Absolute Error) have been calculated, and the proposed approach provides high accuracy.

C. Nagananthini, B. Yogameena

A Novel Visualization and Tracking Framework for Analyzing the Inter/Intra Cloud Pattern Formation to Study Their Impact on Climate

Cloud analysis plays an important role in understanding climate change, which is helpful in framing necessary mitigation policies. This work mainly aims to provide a novel framework for tracking as well as extracting characteristics of multiple cloud clusters by combining dense and sparse motion estimation techniques. The dense optical flow (Classic-Nonlocal, or Classic-NL) method estimates intra-cloud motion accurately from low-contrast images in the presence of large motion. The sparse or feature (region overlap)-based estimation technique utilizes the computed dense motion field to robustly estimate inter-cloud motion from consecutive images in the presence of cloud crossing, splitting, and merging scenarios. The proposed framework is also robust in handling illumination effects as well as poor signal quality due to atmospheric noise. A quantitative evaluation of the proposed framework on a synthetic fluid image sequence shows better performance than other existing methods, which reveals the applicability of the Classic-NL technique to cloud images. Experiments on half-hourly infrared image sequences from the Kalpana-1 geostationary satellite have been performed, and the results show the closest match to the actual track data.

Bibin Johnson, J. Sheeba Rani, Gorthi R. K. S. S. Manyam
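An illustrative dense-flow step using OpenCV's Farnebäck algorithm as a stand-in for the Classic-NL flow named above (Classic-NL itself is not available in OpenCV); the resulting displacement field is the kind of input the overlap-based cluster tracker would consume, and the frames below are synthetic.

```python
import cv2
import numpy as np

def dense_cloud_motion(prev_ir, next_ir):
    flow = cv2.calcOpticalFlowFarneback(prev_ir, next_ir, None,
                                        pyr_scale=0.5, levels=4, winsize=21,
                                        iterations=5, poly_n=7, poly_sigma=1.5,
                                        flags=0)
    return flow                                  # (H, W, 2) per-pixel displacement field

prev_frame = np.random.randint(0, 256, (200, 200), dtype=np.uint8)  # toy IR frames
next_frame = np.roll(prev_frame, shift=3, axis=1)                   # simple eastward drift
flow = dense_cloud_motion(prev_frame, next_frame)
print("mean horizontal displacement:", flow[..., 0].mean())
```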

Cancelable Biometric Template Security Using Segment-Based Visual Cryptography

The cancelable biometric system is susceptible to a variety of attacks aimed at deteriorating the integrity of the authentication procedure. These attacks are intended to either ruin the security afforded by the system or deter the normal functioning of the authentication system. This paper describes various threats that can be encountered by a cancelable biometric system. It specifically focuses on preventing the attacks designed to extract information about the transformed biometric data of an individual, from the template database. Furthermore, we provide experimental results pertaining to a system combining the cancelable biometrics with segment-based visual cryptography, which converts traditional biocode into novel structures called shares.

P. Punithavathi, S. Geetha

PCB Defect Classification Using Logical Combination of Segmented Copper and Non-copper Part

In this paper, a new model for defect classification of PCBs is proposed, inspired by the bottom-up processing model of perception. The proposed model follows a non-referential approach because aligning test and reference images may be difficult. In order to minimize learning complexity at each level, the defect image is segmented into copper and non-copper parts, which are analyzed separately. The final defect class is predicted by combining the copper and non-copper defect classes. Edges provide unique information about distortion in the copper disc, so in this model circularity measures are computed from the edges of the copper disc of the copper part. For the non-copper part, color information is unique for every defect type. A 3D color histogram can capture the global color distribution; the proposed model computes this histogram using nonuniform bins. Varying the intensity ranges along each dimension of the bins effectively reduces irrelevant computation. The bin dimensions are decided based on the amount of correlation among defect types. Discoloration-type defects are analyzed independently from the copper part, because discoloration is a color defect. The final defect class is predicted by a logical combination of the defect classes of the copper and non-copper parts. The effectiveness of this model is evaluated on real data from the PCB manufacturing industry, and its accuracy is compared with previously proposed non-referential approaches.

Shashi Kumar, Yuji Iwahori, M. K. Bhuyan

Gait Recognition-Based Human Identification and Gender Classification

The main objective of this work is to identify persons and to classify their gender with the help of their walking styles from gait sequences with arbitrary walking directions. The human silhouettes are extracted from the given gait sequences using a background subtraction technique, with a median-value approach used for background subtraction. After the extraction of the silhouettes, affinity propagation clustering is performed to group silhouettes with similar views and poses into one cluster. The cluster-based averaged gait image is taken as a feature for each cluster. To learn the distance metric, sparse reconstruction-based metric learning has been used, which minimizes the intra-class sparse reconstruction errors and maximizes the inter-class reconstruction errors simultaneously. The above-mentioned steps constitute the training phase. With the help of the metric learned in training and the feature extracted from the test video sequence, sparse reconstruction-based classification is performed for identifying the person and classifying that person's gender. The accuracy achieved for human identification and gender classification is promising.

S. Arivazhagan, P. Induja
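
A minimal sketch of the silhouette-extraction stage described above, assuming grayscale frames stacked in a NumPy array; the threshold is illustrative, and the cluster-averaged gait image is shown only as a simple mean of aligned silhouettes.

    import numpy as np

    def median_background(frames):
        """Background model: per-pixel temporal median of grayscale frames (T, H, W)."""
        return np.median(frames, axis=0)

    def extract_silhouette(frame, background, thresh=30.0):
        """Binary silhouette: pixels deviating from the background by more than a threshold."""
        return (np.abs(frame.astype(np.float64) - background) > thresh).astype(np.uint8)

    def averaged_gait_image(silhouettes):
        """Cluster feature: mean of the aligned silhouettes grouped into one cluster."""
        return silhouettes.astype(np.float64).mean(axis=0)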

Corner Detection Using Random Forests

We present a fast algorithm for corner detection that exploits local features (i.e., the intensities of neighbourhood pixels) around a pixel. The proposed method is simple to implement yet efficient enough to give results comparable to those of state-of-the-art corner detectors. The algorithm detects corners in a given image using a learning-based framework: it takes the differences between the intensities of a candidate pixel and the pixels in its neighbourhood, and processes them further so that similar pixels appear even more similar and distinct pixels even more distinct. This is achieved by training a random forest to classify whether the candidate pixel is a corner or not. We compare the results with several state-of-the-art corner detection techniques and show the effectiveness of the proposed method.

Shubham Pachori, Kshitij Singh, Shanmuganathan Raman
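
The sketch below illustrates the kind of feature vector and classifier the abstract describes: intensity differences between a candidate pixel and a surrounding circle of neighbours, fed to a random forest (Python with scikit-learn; the circle offsets and forest size are assumptions, and the training data are placeholders).

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # 16-pixel Bresenham circle of radius 3 around the candidate (FAST-style offsets)
    CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
              (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

    def pixel_features(gray, y, x):
        """Differences between the candidate pixel and the pixels on the circle."""
        centre = float(gray[y, x])
        return np.array([float(gray[y + dy, x + dx]) - centre for dx, dy in CIRCLE])

    # Training on labelled candidates (placeholders; rows of X are feature vectors,
    # labels are 1 for corner, 0 otherwise):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    # X_train = np.stack([pixel_features(img, r, c) for (r, c) in candidates])
    # clf.fit(X_train, labels)
    # is_corner = clf.predict(pixel_features(img, r, c).reshape(1, -1))[0]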

Symbolic Representation and Classification of Logos

In this paper, a model for the classification of logos based on a symbolic representation of features is presented. The proposed model makes use of global features of logo images, such as color, texture, and shape, for classification. The logo images are broadly classified into three classes: images containing only text, images containing only a symbol, and images containing both text and a symbol. Within each class, similar-looking logo images are clustered using the K-means algorithm, and the intra-cluster variations in each cluster are preserved using symbolic interval data. The reference logo images are thus represented in the form of interval data, and a sample logo image is classified using a suitable symbolic classifier. For experimentation, a relatively large dataset of 5044 color logo images was created. The classification results are validated in terms of accuracy, precision, recall, F-measure, and time. To check the efficacy of the proposed model, comparative analyses against other models are given; the results show that the proposed model outperforms them with respect to time and F-measure.

D. S. Guru, N. Vinay Kumar
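
A compact sketch of the symbolic interval idea: each cluster is summarized by per-feature [min, max] intervals, and a test logo is assigned to the cluster whose intervals contain most of its features. The similarity rule here is illustrative and not necessarily the symbolic classifier used in the paper.

    import numpy as np

    def cluster_intervals(features):
        """Symbolic interval representation of one cluster: per-feature [min, max]."""
        return features.min(axis=0), features.max(axis=0)

    def interval_similarity(sample, lo, hi):
        """Fraction of the sample's features that fall inside the cluster's intervals."""
        return float(np.mean((sample >= lo) & (sample <= hi)))

    def classify(sample, reference_intervals):
        """Assign the sample to the reference cluster with the highest similarity."""
        scores = {label: interval_similarity(sample, lo, hi)
                  for label, (lo, hi) in reference_intervals.items()}
        return max(scores, key=scores.get)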

A Hybrid Method Based CT Image Denoising Using Nonsubsampled Contourlet and Curvelet Transforms

Computed tomography (CT) is one of the most widespread radiological tools for diagnosis. Achieving good-quality CT images at low radiation dose has drawn considerable attention from researchers; hence, post-processing of CT images has become a major concern in medical image processing. This paper presents a novel edge-preserving image denoising scheme in which noisy CT images are denoised using the nonsubsampled contourlet transform (NSCT) and the curvelet transform separately. By estimating the variance difference between the two denoised images, the final denoised CT image is obtained through a variation-based weighted aggregation. The proposed scheme is compared with existing methods, and its performance is observed to be superior in terms of visual quality, image quality index (IQI), and peak signal-to-noise ratio (PSNR).

Manoj Diwakar, Manoj Kumar
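
One plausible reading of the variation-based weighted aggregation step is sketched below: the two separately denoised images are averaged pixel-wise with weights inversely proportional to their local variance (Python with SciPy; the window size and the weighting rule are assumptions, not the paper's exact formulation).

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_variance(img, size=7):
        """Local variance via moving averages: E[x^2] - (E[x])^2."""
        img = img.astype(np.float64)
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img * img, size)
        return np.maximum(mean_sq - mean * mean, 0.0)

    def aggregate(denoised_a, denoised_b, eps=1e-6):
        """Pixel-wise weighted average of the NSCT- and curvelet-denoised images;
        each estimate is weighted inversely to its local variance, so the estimate
        with less residual variation dominates in that region."""
        wa = 1.0 / (local_variance(denoised_a) + eps)
        wb = 1.0 / (local_variance(denoised_b) + eps)
        return (wa * denoised_a + wb * denoised_b) / (wa + wb)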

Using Musical Beats to Segment Videos of Bharatanatyam Adavus

We present an algorithm for audio-guided segmentation of Kinect videos of Adavus in Bharatanatyam dance. Adavus are the basic choreographic units of a dance sequence in Bharatanatyam. An Adavu is accompanied by percussion instruments (Tatta Palahai (wooden stick) – Tatta Kozhi (wooden block), Mridangam, Nagaswaram, Flute, Violin, or Veena) and vocal music. It is a combination of events that are either postures or small movements synchronized with a rhythmic pattern of beats, or Taals. We segment the videos of Adavus according to the percussion beats in order to determine the events for later recognition of Adavus. We use blind source separation to isolate the instrumental sound from the vocals, and track beats by onset detection to determine the instants in the video where the dancer assumes key postures. We also build a visualizer for testing. From over 13000 input frames of 15 Adavus, 74 of the 131 key frames actually present are detected, and every detected key frame is correct. Hence, the system has 100 % precision, but only about 56 % recall.

Tanwi Mallick, Akash Anuj, Partha Pratim Das, Arun Kumar Majumdar
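
A minimal sketch of the beat-to-frame mapping, assuming the instrumental track has already been isolated by blind source separation; it uses librosa's onset/beat utilities as stand-ins for the paper's onset detector, and the video frame rate is a placeholder.

    import numpy as np
    import librosa

    def beat_frames_in_video(audio_path, video_fps=30.0):
        """Detect percussion beats in the separated instrumental track and map the
        beat times to frame indices of the synchronized Kinect video."""
        y, sr = librosa.load(audio_path, sr=None)
        onset_env = librosa.onset.onset_strength(y=y, sr=sr)
        _, beats = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
        beat_times = librosa.frames_to_time(beats, sr=sr)       # seconds
        return np.round(beat_times * video_fps).astype(int)     # video frame indices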

Parallel Implementation of RSA 2D-DCT Steganography and Chaotic 2D-DCT Steganography

Information security is one of the major concerns in the field of communication today. Steganography is one way to achieve secure communication, since observers cannot perceive the existence of the secret information. The need to parallelize such algorithms grows because even a good algorithm becomes impractical if its computation time is large. In this paper, two parallel algorithms are proposed: a parallel RSA (Rivest-Shamir-Adleman) cryptosystem with 2D-DCT (Discrete Cosine Transform) steganography, and parallel chaotic 2D-DCT steganography. The performance of both algorithms on larger images is determined, and chaotic steganography is shown to be the more efficient algorithm for larger messages. The parallelized versions also show reduced processing time compared with the serial versions, with speed-up ratios of 1.6 and 3.18, respectively.

G. Savithri, Vinupriya, Sayali Mane, J. Saira Banu
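
The block-wise 2D-DCT embedding that makes such schemes easy to parallelize can be sketched as below: each 8x8 block hides one cipher bit in a mid-frequency coefficient, and since blocks are independent they can be distributed across workers (Python with SciPy; the chosen coefficient, strength, and sign-based rule are illustrative, not the paper's exact embedding).

    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_bit(block, bit, coeff=(4, 3), strength=8.0):
        """Embed one cipher bit (e.g. from the RSA- or chaos-encrypted message) in a
        mid-frequency DCT coefficient of an 8x8 block. Blocks are independent, so
        this step parallelizes trivially across processes."""
        d = dctn(block.astype(np.float64), norm='ortho')
        d[coeff] = strength if bit else -strength   # coefficient sign encodes the bit
        return idctn(d, norm='ortho')

    def extract_bit(block, coeff=(4, 3)):
        d = dctn(block.astype(np.float64), norm='ortho')
        return int(d[coeff] > 0)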

Thermal Face Recognition Using Face Localized Scale-Invariant Feature Transform

Biometric face recognition is an established means of preventing fraud in financial transactions and addressing security issues; in particular, face verification has been used extensively to endorse financial transactions. Thermal face recognition is an emerging approach in this field. This work proposes a robust thermal face recognition system based on a face localized scale-invariant feature transform (FLSIFT), which tackles the problem of thermal face recognition with complex backgrounds. Experimental results of the proposed FLSIFT thermal face recognition system are compared with the existing Blood Vessel Pattern method. To test the performance of the proposed and existing methods, a new thermal face database consisting of Indian subjects with variations in background was developed; thermal facial images of 113 subjects were captured for this purpose. The test results show that the recognition accuracies of the Blood Vessel Pattern technique and FLSIFT on face images with simple backgrounds are 79.28 % and 100 %, respectively, while on complex backgrounds they are 5.55 % and 98.14 %, respectively. FLSIFT is thus capable of handling background changes more efficiently, and its performance is found to be robust.

Shruti R. Uke, Abhijeet V. Nandedkar
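
A hedged approximation of the face-localized idea: restrict SIFT keypoint extraction to a detected face region via a mask, then score probe-gallery pairs by ratio-test matches (Python with OpenCV's SIFT; this is not the authors' FLSIFT implementation, and the face box, ratio, and scoring rule are assumptions).

    import numpy as np
    import cv2

    def face_localized_sift(gray, face_box):
        """Compute SIFT keypoints/descriptors only inside the localized face region,
        so a cluttered background contributes no features."""
        x, y, w, h = face_box
        mask = np.zeros(gray.shape, dtype=np.uint8)
        mask[y:y + h, x:x + w] = 255
        sift = cv2.SIFT_create()
        return sift.detectAndCompute(gray, mask)

    def match_score(desc_probe, desc_gallery, ratio=0.75):
        """Number of Lowe ratio-test matches between probe and gallery descriptors."""
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        pairs = matcher.knnMatch(desc_probe, desc_gallery, k=2)
        return sum(1 for p in pairs
                   if len(p) == 2 and p[0].distance < ratio * p[1].distance)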

Integrating Geometric and Textural Features for Facial Emotion Classification Using SVM Frameworks

In this paper, we present a fast facial emotion classification system that relies on the concatenation of geometric and texture-based features. For classification, we propose to extend the binary classification capability of a support vector machine to a hierarchical graph-based architecture that allows multi-class classification. We evaluate our results by calculating the emotion-wise classification accuracies and the execution time of the hierarchical SVM classifier. A comparison of the overall accuracies obtained with geometric, texture-based, and concatenated features clearly indicates the performance enhancement achieved with concatenated features. Our experiments also demonstrate the effectiveness of our approach for developing efficient and robust real-time facial expression recognition frameworks.

Samyak Datta, Debashis Sen, R. Balasubramanian
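
A small sketch of the two ingredients described above: concatenating geometric and texture descriptors, and one node of a hierarchical multi-class scheme built from binary SVMs (Python with scikit-learn; the emotion groups and kernel settings are hypothetical).

    import numpy as np
    from sklearn.svm import SVC

    def concat_features(geometric, texture):
        """Concatenate geometric (landmark-based) and texture descriptors per sample."""
        return np.concatenate([geometric, texture], axis=-1)

    def train_node_svm(X, y, left_group):
        """One node of the hierarchical classifier: a binary SVM deciding whether a
        sample belongs to the 'left' group of emotions or to the remaining ones."""
        side = np.isin(y, list(left_group)).astype(int)
        return SVC(kernel='rbf', C=1.0).fit(X, side)

    # Hypothetical first split: positive- vs negative-valence emotions; further
    # node SVMs would split each group until single emotion labels remain.
    # node1 = train_node_svm(X_train, y_train, left_group={'happy', 'surprise'})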

Fast Non-blind Image Deblurring with Sparse Priors

Capturing clear images in dim light remains a critical problem in digital photography. A long exposure time inevitably leads to motion blur due to camera shake, whereas a short exposure time with high gain yields sharp but noisy images. Exploiting information from both the blurry and noisy images, however, can produce superior results in image reconstruction. In this paper, we employ such image pairs to carry out non-blind deconvolution and compare the performance of three deconvolution methods: the Richardson-Lucy algorithm, algebraic deconvolution, and Basis Pursuit deconvolution. We show that the Basis Pursuit approach produces the best results in most cases.

Rajshekhar Das, Anurag Bajpai, Shankar M. Venkatesan
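
For reference, a minimal Richardson-Lucy non-blind deconvolution loop for a known point spread function is sketched below (NumPy/SciPy; the iteration count, initialization, and intensity normalization are assumptions, and the Basis Pursuit variant compared in the paper is not shown).

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, iterations=30):
        """Richardson-Lucy non-blind deconvolution with a known PSF.
        The blurred image is assumed normalized to [0, 1]."""
        estimate = np.full_like(blurred, 0.5, dtype=np.float64)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            reblurred = fftconvolve(estimate, psf, mode='same')
            ratio = blurred / (reblurred + 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode='same')
        return np.clip(estimate, 0.0, 1.0)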

Backmatter
