
About this book

This book constitutes the refereed proceedings of the First International Conference on Recent Trends in Image Processing and Pattern Recognition, RTIP2R 2016, held in Bidar, Karnataka, India, in December 2016.

The 39 revised full papers presented were carefully reviewed and selected from 99 submissions. The papers are organized in topical sections on document analysis; pattern analysis and machine learning; image analysis; biomedical image analysis; and biometrics.



Document Analysis


Complex and Composite Graphical Symbol Recognition and Retrieval: A Quick Review

One of the key difficulties in the graphics recognition domain is complex and composite symbol recognition, retrieval and spotting. This paper provides a quick review of complex and composite symbol recognition, inspired by a real-world industrial problem. Treating it as a pattern recognition problem, three different approaches are taken into account: statistical, structural and syntactic. The review covers fundamental concepts and techniques as well as research standpoints and directions derived from a real-world application.

K. C. Santosh

Word-Level Thirteen Official Indic Languages Database for Script Identification in Multi-script Documents

Without a publicly available database, we cannot advance research, nor can we make fair comparisons with state-of-the-art methods. To bridge this gap, we present a database of eleven Indic scripts from thirteen official languages for the purpose of script identification in multi-script document images. Our database is composed of 39K words that are equally distributed (i.e., 3K words per language). At the same time, we also study three different pertinent features: spatial energy (SE), wavelet energy (WE) and the Radon transform (RT), including their possible combinations, using three different classifiers: multilayer perceptron (MLP), fuzzy unordered rule induction algorithm (FURIA) and random forest (RF). In our tests, using all features, MLP is found to be the best performer, showing bi-script accuracies of 99.24% (keeping Roman common) and 98.38% (keeping Devanagari common), and a tri-script accuracy of 98.19% (keeping both Devanagari and Roman common).

Sk Md Obaidullah, K. C. Santosh, Chayan Halder, Nibaran Das, Kaushik Roy

Skew Detection and Correction of Devanagari Script Using Interval Halving Method

We propose a method for skew detection and correction of handwritten and printed Devanagari script based on the pixels of an axes-parallel rectangle and the interval halving (bisection) method. The proposed approach works for skewed words as well as for lines and paragraphs with a uniform skew angle. The method uses the tangential pixels of the axes-parallel rectangle: from these pixels, the area of the axes-parallel rectangle is calculated. Using the numerical bisection method, the pixels of this rectangle are rotated, the area of the rectangle enclosing the rotated pixels is recalculated, and the new area is compared with the previously calculated area. This process continues until the least area of the axes-parallel rectangle is reached; the angle yielding the least rectangle area is the skew angle. The technique compares favorably with other techniques in the literature, achieving an accuracy rate of 95%. A manually prepared dataset is used for testing to calculate the accuracy.
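The interval-halving idea described in this abstract can be sketched as follows. This is a minimal illustration only: the function names and the ternary-style interval update are assumptions, since the abstract does not give the exact update rule.

```python
import math

def bbox_area(points):
    """Area of the axes-parallel rectangle enclosing the points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def rotate(points, angle_deg):
    """Rotate points about the origin by angle_deg degrees."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

def estimate_skew(points, lo=-45.0, hi=45.0, tol=0.01):
    """Shrink the angle interval until the rotation giving the least
    bounding-rectangle area is bracketed; that angle is the skew."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if bbox_area(rotate(points, m1)) < bbox_area(rotate(points, m2)):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0
```

For a thin, word-like point set, the angle that minimizes the bounding-rectangle area is the one that aligns the text with the axes; negating it gives the skew to correct.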

Trupti A. Jundale, Ravindra S. Hegadi

Eigen Value Based Features for Offline Handwritten Signature Verification Using Neural Network Approach

The handwritten signature is a primary means of authentication and identification. In this paper, we extract features based on eigenvalue techniques. Eigenvalues are computed from the upper and lower envelopes, which represent the contours of the signature. The proposed work uses the GPDS Synthetic Signature Corpus database. Significant features are extracted from the signatures, consisting of the large and small eigenvalues computed from the upper envelope, the lower envelope and their union. Both envelopes are fused by a union operation and their covariance is computed. The differences and ratios of the high and low points of both envelopes are computed, and lastly the average values of both envelopes are obtained. This feature set, coupled with a neural network pattern recognition classifier, leads to an accuracy of 98.1% with an FAR of 1.9%.

Amruta B. Jagtap, Ravindra S. Hegadi

An Approach for Logo Detection and Retrieval in Documents

Detection and retrieval of logos in document images has become a fundamental task in Document Image Analysis and Recognition (DIAR). In this work, we propose a system to identify logos in a given document. The approach first eliminates the text; the logos are then extracted from the remaining content by the proposed logo detection algorithm using central moments. For the detected logos, scale-invariant feature transform (SIFT) features are extracted and reduced using principal component analysis (PCA). For effective retrieval of logos, a k-d tree indexing mechanism is used. To substantiate the efficacy of the proposed model, experimentation is conducted on a dataset of over 500 samples such as conference certificates, degree certificates, attendance certificates, etc. Further, to study the efficiency of the proposed method, we compared the obtained results with those provided by five human experts, and the results are encouraging.

Y. H. Sharath Kumar, K. C. Ranjith

Spotting Symbol over Graphical Documents Via Sparsity in Visual Vocabulary

This paper proposes a new approach to localizing symbols in graphical documents using sparse representations of local descriptors over a learned dictionary. More specifically, a training database of local descriptors extracted from documents is used to build the learned dictionary. Then, candidate regions in the documents are defined following the similarity between sparse representations of local descriptors. A vector model for the candidate regions and for a query symbol is constructed based on sparsity in a visual vocabulary whose visual words are the columns of the learned dictionary. Matching is performed by comparing the similarity between vector models. A first evaluation on the SESYD database demonstrates that the proposed method is promising.

Do Thanh Ha, Salvatore Tabbone, Oriol Ramos Terrades

Word Retrieval from Kannada Document Images Using HOG and Morphological Features

This paper presents a method to retrieve words from Kannada documents based on Histograms of Oriented Gradients (HOG) and morphological filters. A large dataset of 50,000 words is created from 250 document pages belonging to different categories. A preprocessed document image is segmented using simple morphological filters. The histogram channels are designed over four-sided cells (i.e., R-HOG) to compute the gradients of a word image. In parallel, morphological erosion, opening, top-hat and bottom-hat transformations are applied to each word, and the densities of the resulting images are estimated. The HOG and morphological features are then fused, and the cosine distance is used to measure the similarity between a query word and a candidate word; based on this distance, the relevance of each word is estimated by generating distance ranks, and correctly matched words are selected at a threshold of 98%. The experimental results confirm the efficiency of the proposed method, with an average precision rate of 91.23%, an average recall rate of 84.78% and an average F-measure of 89.47%.
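The cosine-based ranking step described above can be illustrated with a small sketch. The function names and the similarity threshold are hypothetical, and the actual HOG and morphological feature extraction is omitted.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_words(query_vec, candidates, threshold=0.98):
    """Rank candidate word images by cosine similarity to the query
    vector and keep only those above the similarity threshold."""
    scored = sorted(((cosine_similarity(query_vec, vec), name)
                     for name, vec in candidates), reverse=True)
    return [(name, s) for s, name in scored if s >= threshold]
```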

Mallikarjun Hangarge, C. Veershetty, P. Rajmohan, Gururaj Mukarambi

Hand-Drawn Symbol Recognition in Immersive Virtual Reality Using Deep Extreme Learning Machines

Typical character and symbol recognition systems are based on images drawn on paper or on a tablet, with actual physical contact between the pen and the surface. In this study, we investigate the recognition of symbols written while the user is immersed in a room-scale virtual reality experience using a consumer-grade head-mounted display and related peripherals. A novel educational simulation was developed, consisting of a virtual classroom with a whiteboard on which users can draw symbols. A database of 30 classes of hand-drawn symbols created by test subjects in this environment is presented. The performance of the symbol recognition system was evaluated with deep extreme learning machine classifiers, with accuracy rates of 94.88% for a single classifier and 95.95% for a multiple-classifier approach. Further analysis of the results supports the conclusion that there are a number of challenges and difficulties related to drawing in this type of environment, given the unique constraints and limitations imposed by virtual reality, in particular the lack of physical contact and haptic feedback between the controller and the virtual space. Addressing the issues raised by these types of interfaces opens new challenges for both human-computer interaction and symbol recognition. Finally, the approach proposed in this paper creates a new avenue of research in the field of document analysis and recognition by exploring how texts and symbols can be analyzed and automatically recognized in virtual scenes of this type.

Hubert Cecotti, Cyrus Boumedine, Michael Callaghan

Comparative Study of Handwritten Marathi Characters Recognition Based on KNN and SVM Classifier

Robust handwritten Marathi character recognition is essential to the document analysis field. Much OCR research has dealt with the complex challenges of high variation in character shape and structure and of document noise. In the proposed system, noise is removed using morphological and thresholding operations. Skew in scanned pages and segmented characters is corrected using the Hough transform, and characters are segmented from scanned pages using bounding-box techniques. The size of each handwritten Marathi character is normalized to 40 × 40 pixels. We propose feature extraction from handwritten Marathi characters using connected-pixel-based features such as area, perimeter, eccentricity, orientation and Euler number. Modified k-nearest neighbor (KNN) and SVM algorithms with five-fold validation are used for evaluation, and the comparative accuracies of the proposed methods are recorded. In this experiment the modified SVM obtained higher accuracy than the KNN classifier.

Parshuram M. Kamble, Ravindra S. Hegadi

Recognition of Handwritten Devanagari Numerals by Graph Representation and Lipschitz Embedding

In this paper, the task of recognizing handwritten Devanagari numerals using a graph representation is introduced. Lipschitz embedding is explored to extract style- and size-invariant features from numeral graphs. Graph-based features adequately model cursiveness and are invariant to shape transformations. Recognition is carried out by an SVM with a radial basis function kernel. Extensive experiments have been carried out on the standard CVPR ISI Kolkata dataset, and a comparative study of our results against previously reported results on the dataset is presented. From this study, the graph representation appears robust and resilient.
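A Lipschitz embedding, as used here for numeral graphs, maps an object to the vector of its minimum distances to a collection of reference sets. A generic sketch follows; the numeric distance below is a stand-in for whatever graph distance the paper actually uses.

```python
def lipschitz_embedding(obj, reference_sets, dist):
    """Embed an object as the vector of its minimum distances to each
    reference set (the classic Lipschitz embedding construction)."""
    return tuple(min(dist(obj, ref) for ref in refs)
                 for refs in reference_sets)
```

For instance, with numbers and absolute difference as the distance, an object is summarized by how far it lies from each reference group.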

Mohammad Idrees Bhat, B. Sharada

Unlocking Textual Content from Historical Maps - Potentials and Applications, Trends, and Outlooks

Digital map processing has been an interest of the image processing and pattern recognition community since the early 1980s. With the exponential growth of available map scans in archives and on the internet, a variety of disciplines in the natural and social sciences have grown interested in using historical maps as a primary source of geographical and political information in their studies. Today, many organizations, such as the United States Geological Survey, the David Rumsey Map Collection, and the National Library of Scotland, store numerous historical maps in either paper or scanned format. Only a small portion of these historical maps is georeferenced, and even fewer have machine-readable content or comprehensive metadata. The lack of searchable textual content, including spatial and temporal information, prevents researchers from efficiently finding relevant maps for their research and using the map content in their studies. These challenges present a tremendous collaboration opportunity for the image processing and pattern recognition community to build advanced map processing technologies for transforming the natural and social science studies that use historical maps. This paper presents the potential of using historical maps in scientific research, describes current trends and challenges in extracting and recognizing text content from historical maps, and discusses the future outlook.

Yao-Yi Chiang

Document Image Analysis for a Major Indic Script Bangla - Advancement and Scope

Bangla is one of the most popular scripts of the eastern part of India, with a demographic distribution of 212 million people, and is used in languages such as Bengali, Assamese and Manipuri. Several works in the literature consider Bangla script in areas such as Bangla OCR, postal automation, writer verification, online document analysis and script identification. These topics need more attention from researchers, as almost all of them are still far from maturity. In this paper, a study of the advancement of these techniques, with emphasis on Bangla script, is presented, and some future directions in this field are discussed.

Kaushik Roy

Pattern Analysis and Machine Learning


Combination of Feature Selection Methods for the Effective Classification of Microarray Gene Expression Data

Gene selection from microarray gene expression data is very difficult due to the large dimensionality of the data: the number of samples in a microarray data set is very small compared to the number of genes used as features. To reduce dimensionality, the selection of significant genes is necessary. An effective gene feature selection method helps in dimensionality reduction and improves the performance of sample classification. In this work, we examine whether combinations of feature selection methods can improve the performance of classification algorithms, and we propose two methods of combining feature selection techniques. Experimental results suggest that an appropriate combination of filter gene selection methods is more effective than the individual techniques for microarray data classification. We compared our combination methods using different learning algorithms.
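One simple way to combine filter rankings, shown here purely as an illustration (the abstract does not specify the paper's actual combination rules), is to average the ranks each method assigns to a gene:

```python
def combine_rankings(rankings, top_n):
    """Combine several filter-method rankings (each an ordered list of
    gene names, best first) by average rank; a gene missing from a
    ranking is penalised with the worst possible rank."""
    genes = set().union(*rankings)
    worst = max(len(r) for r in rankings)

    def avg_rank(g):
        return sum(r.index(g) if g in r else worst
                   for r in rankings) / len(rankings)

    # Tie-break alphabetically so the result is deterministic.
    return sorted(genes, key=lambda g: (avg_rank(g), g))[:top_n]
```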

T. Sheela, Lalitha Rangarajan

Pattern Recognition Based on Hierarchical Description of Decision Rules Using Choquet Integral

A hierarchical approach to automatically extracting subsets of soft-output classifiers, assumed to be decision rules, is presented in this paper. The outputs of the classifiers are aggregated into a decision scheme using the Choquet integral. To handle this, two selection schemes are defined, aiming to discard weak or redundant decision rules so that the most relevant subsets are retained. For validation, we used two different datasets: shapes (Sharvit) and handwritten graphical symbols (CVC, Barcelona). Our experimental study attests to the interest of the proposed methods.

K. C. Santosh, Laurent Wendling

Classification of Summarized Sensor Data Using Sampling and Clustering: A Performance Analysis

As humans and machines generate a tremendous amount of digital data in their daily life, we are in the era of Big Data, which poses unique challenges in storing, processing and analyzing this voluminous data. Sensors that continuously generate data are one important source of Big Data and have innumerable applications in real-life scenarios. As storing the entire data becomes expensive, summarization is the need of the hour: data summarization is a compact representation of the entire data that can reduce storage and processing requirements. In this work, we summarize sensor data using simple but effective techniques, namely sampling and clustering, and analyze the performance of the summarized data in comparison to the complete dataset. Popular classification techniques such as KNN, SVM and Naive Bayes are used to evaluate the efficiency of the summarization techniques, by training the classifiers on the summarized data and testing on the test data set. The experimental results show that the summarized data set performs almost as well as the complete data set.
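One standard single-pass sampling scheme suited to sensor streams is reservoir sampling, sketched below as an illustrative example; the abstract does not specify which sampling technique the paper uses.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of
    unknown length, using a single pass and O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Item i survives with probability k / (i + 1).
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

A classifier can then be trained on the reservoir instead of the full stream, trading a small amount of accuracy for large savings in storage and training time.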

Lavanya P.G., Suresha Mallappa

Feature Reduced Weighted Fuzzy Binarization for Histogram Comparison of Promoter Sequences

Effective biological sequence analysis methods are in great demand due to the increasing amount of sequence data generated by improved sequencing techniques. In this study, we select statistically significant features/motifs from the Position Specific Motif Matrices of promoters and then reconstruct these matrices using the chosen motifs. The reconstructed matrices are binarized using triangular fuzzy membership values, and the binarized matrix is assigned weights to obtain texture features. A histogram is plotted to visualize the distribution of texture values for each promoter, and the histogram difference is computed across pairs of promoters; this difference is a measure of the underlying dissimilarity of the promoters being compared. A dissimilarity matrix is constructed from the histogram difference values of all promoter pairs. The experiments indicate that the combination of feature reduction and fuzzy binarization is useful for promoter differentiation.

K. Kouser, Lalitha Rangarajan

A Fast k-Nearest Neighbor Classifier Using Unsupervised Clustering

In this paper we propose a fast method for classifying patterns with a k-nearest neighbor (kNN) classifier. The kNN classifier is one of the most popular supervised classification strategies: it is easy to implement and easy to use. However, for large training data sets the process can be time consuming, due to the distance calculation from each test sample to all training samples. Our goal is to provide a generic method that uses the same classification strategy but considerably speeds up the distance calculation. First, the training data is clustered in an unsupervised manner to find the cluster setup that minimizes the intra-class dispersion, using the so-called “jump” method. Once the clusters are defined, an iterative method is applied to select some percentage of the data closest to and furthest from the cluster centers, respectively. Besides some interesting properties discovered by altering the selection criteria, we demonstrated the efficiency of the method by reducing the classification time by up to 71% while keeping the classification performance in the same range.
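The core speed-up idea, restricting the kNN vote to the training samples near the cluster whose center is closest to the query, can be sketched as follows. The function names and data layout are assumptions; the paper's "jump" method for choosing the number of clusters is omitted.

```python
import math
from collections import Counter

def knn_predict(query, samples, labels, k=3):
    """Plain k-nearest-neighbour majority vote."""
    nearest = sorted(range(len(samples)),
                     key=lambda i: math.dist(query, samples[i]))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def fast_knn_predict(query, clusters, k=3):
    """Restrict the kNN vote to the cluster whose centroid is closest
    to the query. `clusters` maps a centroid tuple to a list of
    (sample, label) pairs belonging to that cluster."""
    best = min(clusters, key=lambda c: math.dist(query, c))
    samples, labels = zip(*clusters[best])
    return knn_predict(query, samples, labels, k)
```

Only one distance per centroid plus one per member of the chosen cluster is computed, instead of one per training sample.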

Szilárd Vajda, K. C. Santosh

DSS for Prognostication of Academic Intervention by Applying DM Technique

Decision making in any field related to human behavior is an integral part of a complex environment, and education has become a major concern of the development process in every field. Competent decision making to plan, execute and evaluate policies in this field has become a necessity, and it may be achieved by applying data mining techniques to the educational environment. Educational data mining (EDM) uses techniques such as decision trees, neural networks, k-nearest neighbor, naive Bayes, support vector machines and many others. The principal purpose of this study is to identify how each of the traditional processes can be improved through data mining techniques, and to analyze student behavior by applying data mining and using this information for decision support in the prognostication of academic intervention in higher education. The study suggests models and algorithms for every educational sphere found while analyzing the education database. Data modeling processes are our main outcome, and the enhanced processes achieved through data mining are presented as research outcomes, together with the techniques appropriate for finding behavioral aspects and effectively implementing organizational policies.

Pawan Wasnik, S. D. Khamitkar, P. U. Bhalchandra, S. N. Lokhande, Preetam Tamsekar, Govind Kulkarni, Kailas Hambarde, Shaikh Husen

Cluster Based Symbolic Representation for Skewed Text Categorization

In this work, a problem associated with imbalanced text corpora is addressed: a method of converting an imbalanced text corpus into a balanced one is presented, employing a clustering algorithm for the conversion. Initially, to avoid the curse of dimensionality, an effective representation scheme based on a term-class relevancy measure is adopted, which drastically reduces the dimension to the number of classes in the corpus. Subsequently, the samples of the larger classes are grouped into a number of smaller subclasses to balance the entire corpus. Each subclass is then given a single symbolic vector representation using interval-valued features. This symbolic representation, in addition to being compact, helps in reducing both the space requirement and the classification time. The proposed model has been empirically demonstrated to be superior on benchmark datasets, viz., Reuters-21578 and TDT2. Further, it has been compared against several other existing contemporary models, including a model based on support vector machines; the comparative analysis indicates that the proposed model outperforms the existing models.

Lavanya Narayana Raju, Mahamad Suhil, D. S. Guru, Harsha S. Gowda

Semi-supervised Text Categorization Using Recursive K-means Clustering

In this paper, we present a semi-supervised learning algorithm for the classification of text documents, including a method for labeling unlabeled text documents. The method is based on the divide-and-conquer principle: it uses a recursive K-means algorithm for partitioning both labeled and unlabeled data. The K-means algorithm is applied recursively on each partition until a desired level of partitioning is achieved, such that each partition contains labeled documents of a single class. Once the desired clusters are obtained, the respective cluster centroids are taken as representatives of the clusters, and the nearest neighbor rule is used to classify an unknown text document. A series of experiments on the 20 Newsgroups dataset brings out the superiority of the proposed model over other recent state-of-the-art models.
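The final classification step, the nearest neighbor rule applied to cluster centroids, can be sketched as follows (a minimal illustration with hypothetical function names and toy document vectors; the recursive partitioning itself is omitted):

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n
                 for i in range(len(vectors[0])))

def nearest_centroid_label(doc_vec, clusters):
    """Classify a document by the label of the closest cluster
    centroid. `clusters` maps a class label to its member vectors."""
    reps = {label: centroid(vecs) for label, vecs in clusters.items()}
    return min(reps, key=lambda lab: math.dist(doc_vec, reps[lab]))
```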

Harsha S. Gowda, Mahamad Suhil, D. S. Guru, Lavanya Narayana Raju

Class Specific Feature Selection for Interval Valued Data Through Interval K-Means Clustering

In this paper, a novel feature selection approach for supervised interval-valued features is proposed. The approach selects class-specific features through interval K-means clustering; the kernel of the K-means clustering algorithm is modified to handle interval-valued data. During training, the set of samples belonging to a class is fed into the interval K-means clustering algorithm, which groups the features into K distinct clusters, so that there are K features corresponding to each class; the cluster representatives are then chosen for that class. This procedure is repeated for the samples of all remaining classes. During testing, the feature indices corresponding to each class are used to classify the given dataset with suitable symbolic classifiers. For experimentation, four standard supervised interval datasets are used. The results show the superiority of the proposed model when compared with other existing state-of-the-art feature selection methods.

D. S. Guru, N. Vinay Kumar

Image Analysis


Performance Analysis of Impulse Noise Attenuation Techniques

Digital image processing is currently a prominent area. During image acquisition and transmission, an image may be corrupted by impulse noise. Many practical applications require a high-quality, low-complexity de-noising technique as a pre-processing step. When filtering impulse noise, the need is to preserve edges and image features: only damaged pixels should be filtered, to avoid smoothing the image. This study discusses several impulse noise reduction techniques; their outputs are inspected and their performance is evaluated in MATLAB R2014a. The comparison provides comprehensive knowledge of noise reduction methods and assists researchers in selecting the best impulse noise reduction technique.

M. S. Sonawane, C. A. Dhawale

Neuro-Fuzzy Approach for Speckle Noise Reduction in SAR Images

In recent years, SAR image processing has played a major role in coastal region monitoring through object identification, for the betterment of the livelihood of seashore people. To carry out this task, SAR images need to be analysed to extract features, which can be accomplished only after the removal of speckle noise. In this work, a new Neuro-Fuzzy approach is proposed for the removal of speckle noise. It is built on a fuzzy-logic rule-based system and designed as a 3-input, 1-output first-order Sugeno-type fuzzy inference system (FIS). The experimental analysis shows the effective performance of the proposed approach; its results are compared with those of traditional approaches, and the Neuro-Fuzzy approach gives better results than the traditional methods.

Vijayakumar Singanamalla, Santhi Vaithyanathan

A Hybrid Approach for Hyper Spectral Image Segmentation Using SMLR and PSO Optimization

A hybrid approach for hyperspectral image segmentation is presented in this paper. The contribution of the proposed work is twofold. First, the class posterior probability distributions are learned with quadratic programming, or the joint probability distribution with a sparse multinomial logistic regression (SMLR) model. Second, dependencies are estimated using spatial and edge information by a minimum spanning forest rooted on markers, which uses the information from the first step to segment the hyperspectral image with a Markov random field. Particle swarm optimization (PSO) is performed on the SMLR posterior probabilities to reduce the large training data set. The performance of the proposed approach is illustrated in a number of experimental comparisons with recently introduced hyperspectral image analysis methods, using both simulated and real hyperspectral data sets of Mars.

Rashmi P. Karchi, B. K. Nagesh

An Integrated Framework to Image Retrieval Using L*a*b Color Space and Local Binary Pattern

Information retrieval in the form of documents and images plays a key role in the day-to-day life of humans. Real-world application scenarios change daily, so improvement becomes a necessity. Various approaches have been proposed for retrieval, but all assume input images captured under proper illumination conditions; if the images suffer from changes in illumination angle, color or viewing angle, it is very difficult to retrieve similar images. We propose a system that can deal with illumination-angle and color variations. Experiments were conducted on the ALOI (Amsterdam Library of Object Images) dataset, a collection of one thousand objects, each with a hundred similar images recorded under varying illumination angle, color and viewing angle. Experimental results show that the proposed approach performs well in terms of retrieval efficiency …
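The local binary pattern descriptor mentioned in the title can be sketched in its basic 8-neighbour form. This is an illustration of the standard LBP operator, not the paper's full retrieval pipeline, and the function name is hypothetical.

```python
def lbp_code(img, r, c):
    """Basic 8-neighbour local binary pattern code for pixel (r, c)
    of a 2-D grayscale image given as a list of lists: each neighbour
    at least as bright as the centre contributes one bit."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

A histogram of these codes over an image region is the texture feature that is typically combined with the colour representation.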

N. Neelima, P. Koteswara rao, N. Sai Praneeth, G. N. Mamatha

Texture Classification Using Deep Neural Network Based on Rotation Invariant Weber Local Descriptor

In recent times deep neural networks have been widely used in challenging problem domains such as object detection, character recognition and texture classification. Several texture descriptors exist in the pattern recognition domain for classifying challenging texture datasets. Recently an efficient texture descriptor, the Rotation Invariant Weber Local Descriptor (WLDRI), an improvement over the original Weber Local Descriptor (WLD), was developed to classify skin diseases. In this work, we explore the combination of the WLDRI kernel with a simple deep neural network. Although normal WLD and WLDRI classify textures efficiently, applying the WLDRI kernel in a deep neural network significantly improves performance over both. The present method provides an average improvement of 8.06% over WLDRI on the OUTEX-10 dataset and 5.03% better accuracy than normal WLD on the KTH-TIPS2-a dataset. In addition, on the CMATER skin disease dataset it shows 5.74% better performance than normal WLDRI.

Arnab Banerjee, Nibaran Das, Mita Nasipuri

Industrial Applications of Colour Texture Classification Based on Anisotropic Diffusion

A novel method of colour texture classification based on anisotropic diffusion is proposed and investigated with different colour spaces. The objective is to explore colour spaces for their suitability in the automatic classification, by computer vision, of textures in industrial applications, namely granite tiles and wood textures. The directional subbands of the digital image of a material sample are obtained using the wavelet transform, and anisotropic diffusion is employed to obtain the texture components of the directional subbands. Statistical features are then extracted from the texture components, and linear discriminant analysis (LDA) is employed on the feature space to achieve class separability. The proposed method has been experimented on the RGB, HSV, YCbCr and Lab colour spaces, with a k-NN classifier used for texture classification. For experimentation, image samples from the MondialMarmi database of granite tiles and the Parquet database of hardwood are considered. The experimental results are encouraging, due to reduced time complexity, reduced feature set size and improved classification accuracy compared to state-of-the-art methods.

P. S. Hiremath, Rohini A. Bhusnurmath

Shot-Based Keyframe Extraction Using Bitwise-XOR Dissimilarity Approach

Keyframe extraction is an essential task in many video analysis applications such as video summarization, video classification, and video indexing and retrieval. In this paper, a method of extracting keyframes from shots using bitwise XOR dissimilarity is proposed, addressing both the segmentation of continuous video into shots and the selection of key frames from the segmented shots. This is accomplished through a feature extraction technique based on the bitwise XOR operation between consecutive grayscale frames of the video. A thresholding mechanism is employed to segment the video into shots, and a dissimilarity matrix is constructed to select a representative keyframe from every shot of a video sequence. The proposed shot boundary detection and keyframe extraction approach is implemented and evaluated on a subset of the TRECVID 2001 data set, where it outperforms other contemporary approaches in terms of efficiency and accuracy. The experimental results also demonstrate the efficacy of the proposed keyframe extraction technique in terms of the fidelity measure.
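The bitwise-XOR dissimilarity between consecutive frames can be sketched as follows. This is a minimal illustration: the normalization, threshold value and flattened-frame layout are assumptions, since the abstract does not specify them.

```python
def xor_dissimilarity(frame_a, frame_b):
    """Fraction of bits that differ between two equal-sized 8-bit
    grayscale frames, each given as a flat list of 0-255 pixels."""
    diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(frame_a, frame_b))
    return diff_bits / (8 * len(frame_a))

def detect_shot_boundaries(frames, threshold=0.3):
    """Indices where the consecutive-frame XOR dissimilarity exceeds
    the threshold, marking likely shot boundaries."""
    return [i for i in range(1, len(frames))
            if xor_dissimilarity(frames[i - 1], frames[i]) > threshold]
```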

Rashmi B.S., Nagendraswamy H.S.

Biomedical Image Analysis


Automatic Compound Figure Separation in Scientific Articles: A Study of Edge Map and Its Role for Stitched Panel Boundary Detection

We present a technique that uses an edge map to separate panels from stitched compound figures appearing in biomedical scientific research articles. Since such figures may comprise images from different imaging modalities, separating them is a critical first step for effective biomedical content-based image retrieval (CBIR). We study state-of-the-art edge detection algorithms to detect gray-level pixel changes. Our technique then applies a line vectorization process that connects prominent broken lines along the panel boundaries while eliminating insignificant line segments within the panels. We have validated our fully automatic technique on a subset of stitched multipanel biomedical figures extracted from articles within the Open Access subset of the PubMed Central repository, achieving precision and recall of 74.20% and 71.86%, respectively, in less than 0.272 s per image on average.

A. Aafaque, K. C. Santosh
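A much-simplified sketch of the boundary-candidate step: in a binary edge map, a column covered almost end-to-end by edge pixels is a plausible stitched-panel divider (the paper's full method additionally vectorizes broken boundary lines; the coverage threshold here is illustrative):

```python
def vertical_boundary_candidates(edge_map, min_coverage=0.9):
    """Return columns of a binary edge map that are covered almost
    end-to-end by edge pixels; such near-full columns are candidate
    vertical borders between stitched panels."""
    height = len(edge_map)
    return [x for x in range(len(edge_map[0]))
            if sum(row[x] for row in edge_map) >= min_coverage * height]
```

The same projection applied to rows yields horizontal boundary candidates, and the figure is cut along the surviving lines.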

Automated Chest X-ray Image View Classification using Force Histogram

To advance and/or ease computer-aided diagnosis (CAD) systems, chest X-ray (CXR) image view information is required. In other words, separating frontal and lateral CXR views can be considered a crucial step toward effective subsequent processing, since techniques that work for frontal CXRs may not work equally well for lateral ones. With this motivation, in this paper we present a novel machine learning technique to classify frontal and lateral CXR images, in which we introduce the force histogram to extract features and apply three different state-of-the-art classifiers: support vector machine (SVM), random forest (RF) and multi-layer perceptron (MLP). We validated our fully automatic technique on a set of 8100 images hosted by the National Library of Medicine (NLM), National Institutes of Health (NIH), and achieved an accuracy close to 100%.

K. C. Santosh, Laurent Wendling

An Automatic Computerized Model for Cancerous Lung Nodule Detection from Computed Tomography Images with Reduced False Positives

The objective of this work is to identify malignant lung nodules accurately and early, with fewer false positives. In our work, a block-histogram-based auto-center-seed k-means clustering technique is used to segment all possible nodule candidates. Efficient shape and texture features (2D and 3D) are computed to eliminate false nodule candidates. A two-stage classifier is used to distinguish malignant from benign nodules. The first-stage rule-based classifier yields 100% sensitivity but a high false-positive rate of 13.1 per patient scan. A BPN-based ANN classifier is used as the second stage, which reduces the false-positive rate to 2.26 per patient scan with a good sensitivity of 88.8%. A nodule growth predictive measure is modeled through features such as tissue deficit, tissue excess, isotropic factor and edge gradient. The overlap of these measures across large, average and minimal nodule growth cases is small. Therefore, the developed growth prediction model can assist physicians in deciding on the cancerous nature of lung nodules from an earlier CT scan.

Senthilkumar Krishnamurthy, Ganesh Narasimhan, Umamaheswari Rengasamy
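The two-stage design can be sketched as follows. The first stage applies permissive rules over shape features so that no true nodule is discarded (the feature names and thresholds below are hypothetical illustrations, not the paper's values); only the survivors reach the more expensive ANN stage:

```python
def rule_based_stage(candidates):
    """Stage 1: permissive shape rules tuned for high sensitivity.
    Each candidate is a dict of precomputed shape features; the
    thresholds here are purely illustrative."""
    return [c for c in candidates
            if 3.0 <= c["diameter_mm"] <= 30.0 and c["circularity"] >= 0.5]
```

Because stage 1 only has to avoid false negatives, its thresholds can be loose; the second-stage classifier then absorbs the burden of rejecting the remaining false positives.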

Synthesis of a Neural Network Classifier for Hepatocellular Carcinoma Grading Based on Triphasic CT Images

Computer Aided Decision (CAD) systems based on medical imaging could support radiologists in grading hepatocellular carcinoma (HCC) by means of Computed Tomography (CT) images, sparing patients invasive medical procedures such as biopsies. The individuation and characterization of Regions of Interest (ROIs) containing lesions is an important phase that enables an easier classification between two classes of HCCs. Two steps are needed to individuate lesioned ROIs: liver isolation in each CT slice, and lesion segmentation. All individuated ROIs are then described by morphological features and, finally, a feed-forward supervised Artificial Neural Network (ANN) is used to classify them. Testing showed that the ANN topologies found through an evolutionary strategy generalized well on the mean performance indices regardless of the applied training, validation and test sets, achieving good accuracy and sensitivity and permitting a correct grading of HCC lesions.

Vitoantonio Bevilacqua, Leonarda Carnimeo, Antonio Brunetti, Andrea De Pace, Pietro Galeandro, Gianpaolo Francesco Trotta, Nicholas Caporusso, Francescomaria Marino, Vito Alberotanza, Arnaldo Scardapane

A Comprehensive Method for Assessing the Blepharospasm Cases Severity

Blepharospasm (BSP) is characterized by bilateral, synchronous, and symmetric involuntary orbicularis oculi muscle spasms leading to partial or total eyelid closure. We propose a comprehensive method for assessing the severity of blepharospasm cases, based on expert observation of fixed-length video recordings and natural feature detection algorithms. The developed software detects involuntary spasms and blinks in order to evaluate BSP severity. In this work, we have adopted a new BSP severity scale to realize an objective evaluation of severity.

Vitoantonio Bevilacqua, Antonio Emmanuele Uva, Michele Fiorentino, Gianpaolo Francesco Trotta, Maurizio Dimatteo, Enrico Nasca, Attilio Nicola Nocera, Giacomo Donato Cascarano, Antonio Brunetti, Nicholas Caporusso, Roberta Pellicciari, Giovanni Defazio

Identification and Classification of Fruit Diseases

In this paper, we propose a method for the identification and classification of fruit diseases. Initially, the fruit image is segmented using the K-means and C-means clustering algorithms; the segmentation is then evaluated using parameters such as the measure of under-segmentation (MUS), measure of overlapping (MOL), Dice similarity measure (DSM), measure of over-segmentation (MOS) and error rate (ER). These parameters are computed by comparing manually segmented ground truth with the results produced by the segmentation approaches. After segmentation, features are extracted using the gray-level co-occurrence matrix, and the k-nearest-neighbors classifier is used for classification. The collected database consists of 10 classes with 243 images. The classification accuracy is consistently higher when segmentation is performed with K-means rather than C-means clustering ...

Y. H. Sharath Kumar, G. Suhas
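The gray-level co-occurrence matrix (GLCM) used for feature extraction counts how often pairs of gray levels co-occur at a fixed pixel offset; Haralick statistics such as contrast are then read off it. A minimal sketch (images assumed pre-quantized to a few gray levels; the paper's offsets and feature set may differ):

```python
def glcm(image, levels=4, dx=1, dy=0):
    """Normalised gray-level co-occurrence matrix for one pixel offset."""
    h, w = len(image), len(image[0])
    m = [[0.0] * levels for _ in range(levels)]
    count = 0
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y][x]][image[y + dy][x + dx]] += 1
            count += 1
    return [[v / count for v in row] for row in m]

def contrast(m):
    """Haralick contrast: weights each co-occurrence by its squared
    gray-level gap, so busy textures score high."""
    n = len(m)
    return sum((i - j) ** 2 * m[i][j] for i in range(n) for j in range(n))
```

Several such statistics (contrast, energy, homogeneity, correlation) computed over a few offsets form the feature vector fed to the k-NN classifier.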

Circular Foreign Object Detection in Chest X-ray Images

In automated chest X-ray screening (to detect tuberculosis, for instance), the presence of foreign objects (buttons, medical devices) hinders performance. In this paper, we present a new technique for detecting circular foreign objects, in particular buttons, within chest X-ray (CXR) images. Our technique uses a pre-processing step that enhances the CXRs. From these enhanced images, we compute edge images using four different edge detection algorithms (Sobel, Canny, Prewitt, and Roberts) and then apply morphological operations to select candidates (image segmentation) in the chest region. Finally, we apply the circular Hough transform (CHT) to detect circular foreign objects in those images. In all tests, our algorithm performed well on a variety of CXRs. We also compared our technique's performance with existing techniques in the literature (Viola-Jones and CHT) and found that it excels in terms of both detection accuracy and computational time.

Fatema Tuz Zohora, K. C. Santosh
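The circular Hough transform at the heart of the pipeline can be sketched for a single, fixed radius: every edge pixel votes for all centres that could place it on a circle of that radius, and peaks in the accumulator mark detected circles (a simplified single-radius version; real detectors sweep a radius range):

```python
import math

def circular_hough(edge_points, radius, shape):
    """Vote for circle centres of one fixed radius: each edge pixel
    (y, x) votes along a circle of that radius around itself."""
    h, w = shape
    acc = [[0] * w for _ in range(h)]
    for y, x in edge_points:
        for t in range(0, 360, 5):  # coarse angular sampling
            cy = int(round(y - radius * math.sin(math.radians(t))))
            cx = int(round(x - radius * math.cos(math.radians(t))))
            if 0 <= cy < h and 0 <= cx < w:
                acc[cy][cx] += 1
    return acc
```

For buttons, whose radii fall in a narrow known range, only a few radius values need to be swept, which keeps the computational cost low.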

Biometrics

FaceMatch: Real-World Face Image Retrieval

We researched and developed a practical methodology for face image retrieval (FIR) based on an optimally weighted image descriptor ensemble. We describe a single-image-per-person (SIPP) face image retrieval system for real-world applications that include large photo collection search, person location in disaster scenarios, semi-automatic image data annotation, etc. Our system provides efficient means for face detection, matching and annotation, works with unconstrained digital photos of variable quality, requires no time-consuming training, and yet shows a commercial performance level at its sub-tasks. Our system benefits the public by providing practical FIR technology, annotated image data and web services to a real-world family reunification system.

Eugene Borovikov, Szilárd Vajda

Evaluation of Texture Features for Biometric Verification System Using Handvein and Finger Knuckleprint

In this paper, we evaluate a biometric verification system based on universally acceptable hand-based modalities. We employ the dorsal hand vein and finger knuckle print modalities, which have recently emerged in the biometrics field. Texture features are extracted from the hand vein and finger knuckle print databases using texture description techniques such as LBP, LPQ and the Gabor filter. We compare the two modalities separately and conclude from the results which modality performs better with texture descriptors. The results are presented graphically as ROC curves plotting GAR (genuine acceptance rate) vs. FAR (false acceptance rate) at the benchmark FAR thresholds of 0.01%, 0.1% and 1% to illustrate verification performance. From our experimental results, it is clearly evident that the LPQ texture features are more suitable for both modalities.

H. D. Supreetha Gowda, G. Hemantha Kumar
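The LBP descriptor compared above encodes each pixel by thresholding its 8 neighbours against the centre value and packing the results into a byte; a histogram of these codes describes the texture. A minimal sketch of the basic (non-rotation-invariant) variant:

```python
def lbp_code(image, y, x):
    """8-neighbour local binary pattern code for one interior pixel:
    each neighbour >= centre contributes one bit."""
    c = image[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if image[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(image) - 1):
        for x in range(1, len(image[0]) - 1):
            hist[lbp_code(image, y, x)] += 1
    return hist
```

LPQ follows the same histogram-of-codes recipe but derives the code from local phase information, which is what gives it the blur robustness the results favour.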

Face Sketch Matching Using Speed up Robust Feature Descriptor

In this paper, the face sketch to face image matching problem is presented. Matching a sketch with a face is crucial for law enforcement applications and has received attention from researchers in recent years. A face sketch and a face image are two different modality representations of the same face. A face sketch is drawn from the description given by a witness when no other source of information about the suspect is available. Matching the sketch with a face image is a challenging problem due to the visual difference between the two modalities. To find a potential suspect, the sketch is compared with different face images. The Speeded-Up Robust Features (SURF) descriptor is used to measure the similarity between the sketch and the face image. The results show that the SURF descriptor outperforms the Scale Invariant Feature Transform (SIFT) descriptor for both viewed and forensic sketches.

N. K. Bansode, P. K. Sinha
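Once SURF descriptors are extracted from the sketch and from a candidate face image, similarity is typically established by nearest-neighbour matching with a ratio test. A generic sketch of that step (descriptors as plain float vectors; whether the paper uses this exact acceptance rule is an assumption):

```python
def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test: accept a
    match only when the best distance is clearly smaller than the
    second-best, which filters ambiguous correspondences."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    matches = []
    for i, d in enumerate(desc_a):
        order = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        if (len(order) > 1
                and dist(d, desc_b[order[0]]) < ratio * dist(d, desc_b[order[1]])):
            matches.append((i, order[0]))
    return matches
```

The face image yielding the largest number of accepted matches against the sketch is then ranked as the most likely suspect.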

Enhanced Gray Scale Skeletonization of Fingerprint Ridges Using Parallel Algorithm

Thinning of fingerprint ridges plays a vital role in fingerprint identification systems, as it simplifies subsequent processing steps such as fingerprint classification and feature extraction. In this paper, we analyze several parallel thinning algorithms and propose a methodology for skeletonization of fingerprint ridges directly on gray-scale images, since a significant amount of information and features is lost during binarization. The algorithm conditionally erodes the gray-level ridges iteratively until a one-pixel-thick ridge is obtained. Refinement procedures are also proposed to improve the quality of the ridge skeleton. Experiments conducted on sample fingerprint images collected with an optical fingerprint reader exhibit the desirable features of the proposed approach.

Shoba Dyre, C. P. Sumathi
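For reference, the classic two-subiteration Zhang-Suen scheme illustrates the parallel thinning family the paper analyzes; it is shown here on a binary image, whereas the paper's contribution is adapting such conditional erosion to gray-scale ridges:

```python
def _neighbours(img, y, x):
    # clockwise from north: P2..P9 in the usual Zhang-Suen notation
    return [img[y - 1][x], img[y - 1][x + 1], img[y][x + 1],
            img[y + 1][x + 1], img[y + 1][x], img[y + 1][x - 1],
            img[y][x - 1], img[y - 1][x - 1]]

def zhang_suen(img):
    """Two-subiteration parallel thinning of a 0/1 image (border rows
    and columns are assumed to be background)."""
    img = [row[:] for row in img]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            marks = []
            for y in range(1, len(img) - 1):
                for x in range(1, len(img[0]) - 1):
                    if img[y][x] != 1:
                        continue
                    p = _neighbours(img, y, x)
                    b = sum(p)                       # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))       # 0 -> 1 transitions
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:
                        ok = (p[0] * p[2] * p[4] == 0
                              and p[2] * p[4] * p[6] == 0)
                    else:
                        ok = (p[0] * p[2] * p[6] == 0
                              and p[0] * p[4] * p[6] == 0)
                    if ok:
                        marks.append((y, x))
            for y, x in marks:   # delete in parallel after the scan
                img[y][x] = 0
            changed = changed or bool(marks)
    return img
```

The "collect marks, then delete" structure is what makes the algorithm parallel: all deletions in a subiteration are decided from the same snapshot of the image.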

