
About this Book

The two-volume set CCIS 483 and CCIS 484 constitutes the refereed proceedings of the 6th Chinese Conference on Pattern Recognition, CCPR 2014, held in Changsha, China, in November 2014. The 112 revised full papers presented in two volumes were carefully reviewed and selected from 225 submissions. The papers are organized in topical sections on fundamentals of pattern recognition; feature extraction and classification; computer vision; image processing and analysis; video processing and analysis; biometric and action recognition; biomedical image analysis; document and speech analysis; pattern recognition applications.

Table of Contents

Frontmatter

Section IV: Image Processing and Analysis

Transferring Segmentation from Image to Image via Contextual Sparse Representation

Segmenting objects out of diverse backgrounds remains a fundamental task. To tackle it, we propose a transferring segmentation framework that automatically segments new images when a single segmented example is given. Our approach builds on the observation that regions of foreground and background are often very similar in appearance but rarely share similar contextual information. To this end, we construct a contextual dictionary that incorporates neighboring information as context. The segmentation task is then accomplished as supervised classification via sparse representation with the constructed contextual dictionary. Experimental results on diverse natural images demonstrate that the proposed method achieves favorable results in both visual quality and accuracy.

Shuangshuang Li, Yonghao He, Shiming Xiang, Lingfeng Wang, Chunhong Pan
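The classification step described above can be illustrated with a minimal residual-based classifier in the sparse-representation style; for brevity, plain least squares stands in for a sparse solver, and the dictionaries and sizes below are illustrative, not the paper's:

```python
import numpy as np

def src_classify(y, dictionaries):
    """Assign y to the class whose dictionary reconstructs it with the
    smallest residual (least squares stands in for a sparse solver here)."""
    residuals = []
    for D in dictionaries:
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
D_fg = rng.normal(size=(16, 4))                # "foreground" contextual atoms
D_bg = rng.normal(size=(16, 4))                # "background" atoms
y = D_fg @ np.array([1.0, -0.5, 0.2, 0.3])     # sample in the foreground span
label = src_classify(y, [D_fg, D_bg])          # picks class 0 (foreground)
```

In the paper's setting, each atom would be a contextual feature built from a patch and its neighborhood, and the sparse coefficients would be found with an L1 or greedy solver rather than `lstsq`.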

Fast Augmented Lagrangian Method for Image Smoothing with Hyper-Laplacian Gradient Prior

As a fundamental tool, L0 gradient smoothing has found a flurry of applications. Inspired by progress on hyper-Laplacian priors, we propose a novel image-smoothing model based on the Lp-norm of gradients, which better maintains the overall structure while diminishing insignificant texture and impulse-noise-like highlights. Algorithmically, we use the augmented Lagrangian method (ALM) to solve the optimization problem efficiently. Thanks to the fast convergence rate of ALM, the proposed method is much faster than the L0 gradient method. We apply it to natural image smoothing, cartoon artifact removal, and tongue image segmentation, and the experimental results validate its performance.

Li Chen, Hongzhi Zhang, Dongwei Ren, David Zhang, Wangmeng Zuo
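The splitting idea behind such ALM/ADMM solvers can be sketched on a 1-D signal. The sketch below uses the convex p = 1 case, where the shrinkage step has the familiar soft-threshold closed form; a hyper-Laplacian prior (p < 1) would replace `soft` with a generalized p-shrinkage step. All names and parameters are illustrative:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the closed-form shrinkage for the p = 1 prior."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def smooth_grad_l1(f, lam=0.5, rho=1.0, iters=100):
    """ADMM for min_u 0.5||u - f||^2 + lam*||Du||_1 with 1-D forward
    differences D; z = Du is the split variable, w the scaled multiplier."""
    n = len(f)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]      # forward-difference operator
    A = np.eye(n) + rho * D.T @ D                 # u-subproblem system matrix
    z = np.zeros(n - 1)
    w = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, f + rho * D.T @ (z - w))   # quadratic u-step
        z = soft(D @ u + w, lam / rho)                    # shrinkage z-step
        w += D @ u - z                                    # multiplier update
    return u

rng = np.random.default_rng(1)
f = np.concatenate([np.zeros(32), np.ones(32)]) + 0.1 * rng.normal(size=64)
u = smooth_grad_l1(f)      # noise smoothed away, the step edge kept
```

For images, D becomes the 2-D gradient operator and the u-step is typically solved in the Fourier domain, which is where the speed advantage over iterative L0 solvers comes from.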

Study on Distribution Coefficient in Regulation Services with Energy Storage System

Renewable energies such as wind power and PV fluctuate greatly and carry uncertainty, and their penetration into the power system poses a great challenge to the security and stability of the grid. An energy storage system (ESS) can track power accurately with almost no delay, which makes it capable of providing high-quality regulation services. Taking the state of charge (SOC) of the ESS into account, this paper analyzes the regulation signal and processes it with delay, filter, and clipping blocks; presents a method to dynamically determine the distribution coefficient between the ESS and a conventional generator (CG) based on the available regulation capacity; and uses an aggregate indicator to evaluate the regulation and SOC-maintenance effects. Simulation results show that the proposed method can improve the regulation service and maintain the SOC within a desired range, providing technical support for the optimized operation of the grid.

Shaojie Tan, Xinran Li, Ming Wang, Yawei Huang, Tingting Xu, Xingting Cheng
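The paper's SOC-aware distribution coefficient is not reproduced here, but the underlying idea — routing the slow component of the regulation signal to the CG and the fast residual to the ESS — can be sketched with a first-order low-pass filter. The filter constant and test signal below are illustrative only:

```python
import numpy as np

def split_regulation(signal, alpha=0.1):
    """Split a regulation signal: a first-order low-pass gives the slow
    part for the conventional generator; the residual goes to the ESS."""
    slow = np.zeros_like(signal)
    for i in range(1, len(signal)):
        slow[i] = slow[i - 1] + alpha * (signal[i] - slow[i - 1])
    return slow, signal - slow

t = np.arange(500)
sig = np.sin(2 * np.pi * t / 500) + 0.3 * np.sin(2 * np.pi * t / 10)
cg, ess = split_regulation(sig)   # cg follows the trend, ess the ripple
```

In the paper, the split is further modulated by the available regulation capacity and the measured SOC, so the coefficient varies over time rather than being a fixed filter constant.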

All-Focused Light Field Image Rendering

The coupling between aperture size and depth of field (DOF) in traditional imaging remains one of the fundamental limits on photographic freedom. The emergence of plenoptic camera imaging removes this limit. Based on the focused plenoptic camera implemented by Georgiev, we propose a DOF-dependent all-focused image rendering algorithm. Using the raw image captured by a prototype of the camera, we successfully reconstruct an all-focused image of the scene and calculate a higher-resolution depth map. Finally, we conclude that a large depth of field can still be achieved even with a large aperture.

Zhang Rumin, Ruan Yu, Liu Dijun, Zhang Youguang

Hyperspectral Image Unmixing Based on Sparse and Minimum Volume Constrained Nonnegative Matrix Factorization

Hyperspectral unmixing (HU) aims at recovering the endmember signatures and their corresponding abundance maps from a highly mixed hyperspectral image. Nonnegative matrix factorization (NMF) has recently become a widely used method for HU. Traditional NMF takes only a sparse constraint or only a minimum-volume constraint into consideration, leading to unmixing results that are not accurate enough. In this paper, we propose a new NMF-based method that combines the volume constraint with the sparse constraint. Following convex geometry, we impose a minimum-volume constraint on the endmember matrix; because sparsity is a natural property of abundances, we add a sparse constraint on the abundance matrix. Experiments on both synthetic and real-scene images show the effectiveness of the proposed method.

Denggang Li, Shutao Li, Huali Li
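As context for the constrained model, here is the unconstrained baseline the paper builds on: plain NMF with Lee–Seung multiplicative updates, which keep both factors nonnegative. The paper adds minimum-volume and sparsity penalties on top of such a factorization; sizes and iteration counts below are illustrative:

```python
import numpy as np

def nmf(X, r, iters=500, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF: X ~ W @ H with W, H >= 0.
    Each update multiplies by a nonnegative ratio, preserving nonnegativity."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
X = rng.random((20, 3)) @ rng.random((3, 30))   # exactly rank-3 nonneg data
W, H = nmf(X, 3)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In the unmixing setting, the columns of W play the role of endmember spectra and the rows of H the abundance maps; the added volume and sparsity terms change these update rules accordingly.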

An Adaptive Harris Corner Detection Algorithm for Image Mosaic

Image stitching refers to the technology of fusing two or more images with overlapping regions into a single large-field-of-view image. Image mosaicking consists of image preprocessing, image registration, and image fusion. To solve the problems of severe clustering and too few corner points in textured regions caused by the traditional Harris corner detection algorithm, this paper proposes an improved adaptive threshold-setting algorithm that calculates the second-order value of the corner response function, avoiding the effect of the choice of scale factor k and threshold T on corner detection. To overcome the visible seams at the joints produced by the traditional weighted-average method for image fusion, this paper enhances the weighted-average method with trigonometric functions. Experimental results show that our proposed algorithms can effectively eliminate the gaps generated by image mosaicking, with better speed and precision.

Haixia Pan, Yanxiang Zhang, Chunlong Li, Huafeng Wang
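The classical Harris response that the adaptive threshold is applied to can be computed as follows; this is the textbook detector (the paper's second-order adaptive thresholding is not reproduced), with a simple 3×3 box window and an illustrative test image:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel,
    where M is the structure tensor summed over a 3x3 box window."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):  # 3x3 box filter via shifted sums of a zero-padded copy
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy ** 2
    tr = Sxx + Syy
    return det - k * tr ** 2

img = np.zeros((16, 16))
img[8:, 8:] = 1.0                          # one bright square => one corner
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)   # lands at the corner (8, 8)
```

Along a pure edge, det(M) is near zero and R goes negative, which is why thresholding R (fixed k and T in the classical version, adaptively in the paper) isolates true corners.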

A Study of Ancient Ceramics Verification Based on Vision Methods

Ceramics appraisal is a hot topic in the field of cultural relic collection. Traditionally, there are two main types of appraisal methods: experience-based and technology-based. In practice, both are costly and time-consuming. In this paper, a novel vision-based method, mainly inspired by biometric recognition techniques, is proposed to efficiently verify the identity of a ceramic piece. In this method, microscopic images of a ceramic piece captured by a digital microscope camera are used as the characteristics for verification. In technical detail, SURF (Speeded Up Robust Features) is first employed to align the probe image to the gallery images. LBP (Local Binary Patterns) features are then extracted from the two aligned images. Finally, the chi-square distance is calculated to measure the similarity between probe and gallery. Experiments on the dataset constructed for this paper demonstrate the state-of-the-art performance of our method.

Yunqi Tang, Jianwei Ding, Wei Guo
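The LBP-plus-chi-square matching stage can be sketched directly; this is the basic 8-neighbour LBP on a single scale (the paper may use a different LBP variant, and the SURF alignment step is omitted):

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Basic 8-neighbour LBP code per interior pixel, then a normalized
    256-bin histogram of the codes."""
    g = img.astype(float)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit   # one bit per neighbour
    h = np.bincount(code.ravel(), minlength=bins).astype(float)
    return h / h.sum()

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(0)
a = rng.random((32, 32))   # stand-ins for two aligned microscopic images
b = rng.random((32, 32))
```

Verification then reduces to thresholding `chi_square(lbp_histogram(probe), lbp_histogram(gallery))`: identical textures give distance 0, unrelated ones a clearly positive distance.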

A Two-Stage Blind Image Color Correction Using Color Cast Estimation

Color-cast images usually suffer serious loss of color information and are inconvenient for visual observation and image analysis. To tackle this problem, a novel two-stage image color-cast correction scheme is proposed in this paper. First, the proposed approach detects the color cast and the stable channel using the extreme intensity ratio of the original image. Second, the distorted image color is restored by solving a constrained problem involving the degree of color variation and the detected color cast and stable channel. Experimental results on surveillance videos demonstrate that the proposed scheme is not only feasible but also effective. In addition, the results agree well with human subjective perception.

Dawei Zhu, Li Chen, Jing Tian, Xiaotong Huang
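For intuition, here is the simplest blind color-cast correction baseline, gray-world white balance, which equalizes per-channel means. This is a common reference point, not the paper's extreme-intensity-ratio scheme; the simulated scene and cast gains are illustrative:

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so its mean equals
    the global mean (assumes the scene is neutral on average)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means
    return np.clip(img * gain, 0.0, 1.0)

rng = np.random.default_rng(0)
scene = rng.random((8, 8, 3)) * 0.5 + 0.25        # neutral scene in [0.25, 0.75]
cast = scene * np.array([1.2, 1.0, 0.7])          # simulated reddish cast
fixed = gray_world(cast)
before = np.ptp(cast.reshape(-1, 3).mean(axis=0))   # channel-mean spread
after = np.ptp(fixed.reshape(-1, 3).mean(axis=0))   # ~0 after correction
```

The paper's two-stage design addresses the known weakness of this baseline: when the scene itself is not neutral, detecting which channel is stable before correcting avoids over-correction.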

Encoding Optimization Using Nearest Neighbor Descriptor

The bag-of-words framework is probably one of the best models used in image classification, and within it coding plays a very important role. Many coding methods have been proposed to encode images in different ways; the relationships between different codewords have been studied, but the relationships among descriptors have not been fully explored. In this work, we aim to exploit the relationship between descriptors and propose a new method that can be combined with other coding methods to improve performance. The basic idea is to encode a descriptor not only with its nearest codewords but also with the codewords of its nearest neighboring descriptors. Experiments on several benchmark datasets show that even this simple relationship between descriptors helps to improve coding methods.

Muhammad Rauf, Yongzhen Huang, Liang Wang

Multi-modal Image Fusion with KNN Matting

A single captured image of a scene is usually insufficient to reveal all the details due to the imaging limitations of a single sensor. To solve this problem, multiple images capturing the same scene with different sensors can be combined into a single fused image that preserves the complementary information of all input images. In this paper, a novel K-nearest-neighbor (KNN) matting based image fusion technique is proposed, which consists of the following steps. First, the salient pixels of each input image are detected using a Laplacian-filtering-based method. Then, guided by the salient pixels and the spatial correlation among adjacent pixels, KNN matting is used to calculate a globally optimal weight map for each input image. Finally, the fused image is obtained as the weighted average of the input images. Experiments demonstrate that the proposed algorithm generates high-quality fused images in terms of both good visual quality and high objective indexes. Comparisons with a number of recently proposed fusion techniques show that the proposed method generates better results in most cases.

Xia Zhang, Hui Lin, Xudong Kang, Shutao Li
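The saliency-then-weighted-average pipeline can be sketched without the matting step: use the Laplacian magnitude as saliency and normalize it directly into weights. The paper instead refines these weights with KNN matting; the toy inputs below are illustrative:

```python
import numpy as np

def laplacian_saliency(img):
    """|Laplacian| as a per-pixel saliency map (5-point stencil,
    edge-replicated borders)."""
    p = np.pad(img, 1, mode='edge')
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img)
    return np.abs(lap)

def fuse(images, eps=1e-12):
    """Weighted average with saliency-proportional weights; eps makes
    the weights uniform in regions where every input is flat."""
    sal = np.stack([laplacian_saliency(im) for im in images]) + eps
    w = sal / sal.sum(axis=0)
    return (w * np.stack(images)).sum(axis=0)

x = np.zeros((8, 8)); x[2:6, 2:6] = 1.0   # detail: a square in image A
y = np.zeros((8, 8)); y[0, :] = 1.0       # detail: top row in image B
f = fuse([x, y])                          # keeps the detail of each input
```

Direct normalization produces blocky weight maps at region boundaries; propagating the weights with KNN matting, as the paper does, is what yields spatially smooth, globally optimal maps.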

A Two-Step Adaptive Descreening Method for Scanned Halftone Image

Halftoning is a necessary technique for electrophotographic printers to print continuous-tone images. Scanned images obtained from such printed hard copies are corrupted by screen-like artifacts called halftone patterns. Descreening aims to recover a high-quality continuous-tone image from the scanned image. In this paper, a two-step descreening method is proposed to remove screen-like artifacts adaptively. First, an Extreme Learning Machine (ELM) based halftone image classification scheme is introduced to categorize the scanned images by resolution. Then, in the halftone-pattern removal step, patch-similarity-based smoothing filtering and nonlinear enhancement are combined to remove halftone patterns while preserving image quality. Experiments demonstrate that the proposed method removes halftone patterns effectively, while preserving more details and recovering cleaner smooth regions.

Fei Chen, Shutao Li, Le Xu, Bin Sun, Jun Sun

Compressive Sensing Multi-focus Image Fusion

Based on compressive sensing (CS) theory, various compressive imaging (CI) systems have been developed. Meanwhile, image fusion methods that operate directly on the measurements from multiple CI sensors have also been investigated in the literature. In this paper, we present a multi-focus image fusion method in the compressive sensing domain. The main contribution is a novel clarity level for random CI measurements that requires no prior geometric information. The CI measurements are sparsely represented with DCT bases that are also projected into the CS domain, and the sparse coefficients corresponding to the DCT bases are used to guide the fusion of the measurements from the CI sensors. Finally, the fused images are obtained with a CS recovery algorithm based on block compressive sensing (BCS) theory. Simulation results validate the proposed method.

Fang Cheng, Bin Yang, Zhiwei Huang

Pan-Sharpening Based on Improvement of Panchromatic Image to Minimize Spectral Distortion

In this paper, we propose a novel method to enhance the pan-sharpening of low-resolution multispectral (MS) images with high-resolution panchromatic (Pan) images by minimizing the spectral distortion introduced by the fusion process. Spectral distortion is the most significant problem in many pan-sharpening techniques, due to the nonlinearity between Pan and MS images. In this method, the Pan image is improved in order to enhance the correlation between the Pan and MS images before the pan-sharpening process. The proposed method is applied as a preprocessing step: the intensity image derived from the MS image is fused with the original Pan to obtain an improved Pan image that is more correlated with the MS image. Pan-sharpening is then applied to the MS image and the improved Pan using any pan-sharpening technique. Simulation results of the proposed method are compared across four different techniques: generalized IHS, DWT, Brovey, and HPF. The simulation results show that this method preserves more spectral information and yields better visual quality than earlier reported techniques using the original Pan.

Akbi Abdelkrim, Zhaoxiang Zhang, Qingjie Liu
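One of the four techniques compared, the Brovey transform, is simple enough to sketch directly; it also makes the spectral-distortion issue concrete, since each band is scaled by the ratio of Pan to MS intensity. The synthetic inputs below are illustrative:

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey pan-sharpening: scale each MS band by pan / intensity,
    where intensity is the per-pixel band mean (ms already upsampled
    to the Pan grid)."""
    intensity = ms.mean(axis=2)
    return ms * (pan / (intensity + eps))[..., None]

rng = np.random.default_rng(0)
ms = rng.random((8, 8, 3)) * 0.5 + 0.25
pan = ms.mean(axis=2) + 0.05 * rng.random((8, 8))   # correlated sharper pan
sharp = brovey(ms, pan)
```

Two properties follow from the ratio form: the sharpened intensity equals the Pan image, and within each pixel the band ratios of the MS image are preserved. Any mismatch between Pan and the true MS intensity, however, leaks into every band, which is exactly the nonlinearity the paper's preprocessing step aims to reduce.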

Combining SIFT and Individual Entropy Correlation Coefficient for Image Registration

Image registration is an important topic in many fields, including industrial image analysis, medical imaging, and remote sensing. To improve registration accuracy, an image registration method that combines the scale-invariant feature transform and the individual entropy correlation coefficient (SIFT-IECC) is proposed in this paper. First, the scale-invariant feature transform is applied to extract feature points and construct a transformation model. Then, a rough registration image is obtained according to the transformation model, and the individual entropy correlation coefficient is used as the similarity measure to refine it. Finally, experimental results show the superior performance of the proposed SIFT-IECC registration method in comparison with state-of-the-art methods.

Gan Liu, Shengyong Chen, Xiaolong Zhou, Xiaoyan Wang, Qiu Guan, Hui Yu

Spectral Fidelity Analysis of Compressed Sensing Reconstruction Hyperspectral Remote Sensing Image Based on Wavelet Transformation

For hyperspectral image research, retaining spectral characteristics is more important than retaining spatial details, so it is necessary to evaluate the spectral influence of compressed sensing on hyperspectral images. In this paper, the researchers select a hyperspectral remote sensing image (PROBE CHRIS) with abundant coastal-wetland ground objects to analyze the spectral fidelity of a wavelet-transform compressed sensing algorithm, on the basis of three indicators computed between the reconstructed and original pixel spectra: correlation coefficient, error, and relative error. Meanwhile, eight typical ground objects are chosen to analyze their respective spectral deviations. The results indicate that: (1) the image reconstruction algorithm based on wavelet-transform compressed sensing performs well — the average spectral correlation coefficient between reconstructed and original pixels is 0.9428, the error is 6.4096, and the relative error is 13.81%; (2) the spectral fidelity indicator values vary with waveband, and the reconstruction algorithm is selective about objects.

Yi Ma, Jie Zhang, Ni An
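The three per-pixel indicators can be sketched as follows; note the paper does not spell out its exact error definitions, so the RMS-style forms below are one plausible reading, and the spectra are illustrative:

```python
import numpy as np

def spectral_fidelity(orig, recon):
    """Spectral correlation coefficient, RMS error, and relative error
    between an original and a reconstructed pixel spectrum (one reading
    of the paper's three indicators)."""
    cc = np.corrcoef(orig, recon)[0, 1]
    err = np.sqrt(np.mean((orig - recon) ** 2))
    rel = err / np.sqrt(np.mean(orig ** 2))
    return cc, err, rel

orig = np.array([0.2, 0.4, 0.8, 0.6, 0.3])     # a toy 5-band spectrum
cc, err, rel = spectral_fidelity(orig, orig + 0.01)
```

A constant offset leaves the correlation coefficient at 1 while still registering as error, which is why the paper reports all three indicators rather than the correlation alone.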

A Fast Algorithm for Image Defogging

In smoke and haze environments, acquired images suffer serious distortion or degradation, and the inaccurate information obtained from an unclear view adversely affects outdoor activities. As haze has become more and more common in recent years, the phenomenon needs further research. Based on an analysis of the atmospheric degradation model, this article puts forward an improved algorithm based on the dark channel prior and morphology. Because applying He's algorithm to defogging reduces brightness, the article first increases the brightness of the image before processing, then estimates the global atmospheric value, the initial transmission rate, and the haze density using morphological methods, and finally substitutes these into the simplified model to obtain the haze-free image. Experimental results show that the proposed algorithm can recover degraded images effectively and quickly while preserving detailed edges.

Xiaoyan He, Jianxu Mao, Zewen Liu, Jiujiang Zhou, Yajing Hua
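The dark channel prior that the improved algorithm builds on can be sketched compactly: take the per-pixel channel minimum, apply a local minimum filter, and derive the transmission from it. The patch size, omega, and test scene below are illustrative (the paper's morphological refinements are not reproduced):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel min over color channels, then a local min filter."""
    mins = img.min(axis=2)
    r = patch // 2
    p = np.pad(mins, r, mode='edge')
    h, w = mins.shape
    return np.min([p[i:i + h, j:j + w]
                   for i in range(patch) for j in range(patch)], axis=0)

def transmission(img, airlight, omega=0.95, patch=3):
    """He-style transmission estimate t = 1 - omega * dark(img / A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

img = np.full((8, 8, 3), 0.6)     # a uniformly hazy gray scene
A = np.array([0.8, 0.8, 0.8])     # assumed atmospheric light
t = transmission(img, A)          # low transmission everywhere (hazy)
img2 = img.copy()
img2[4, 4] = 0.0                  # one haze-free black pixel
t2 = transmission(img2, A)        # transmission ~1 at that pixel
```

Dividing the hazy image by the estimated transmission is what brightens thin-haze regions less than thick-haze ones — and also what causes the overall brightness drop that this paper compensates for beforehand.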

A New Image Structural Similarity Metric Based on K-L Transform

Recently, the structural similarity index (SSIM) has become the most popular model for image quality assessment (IQA). The idea behind SSIM is that natural images are highly structured, so it estimates an overall similarity of an image pair from luminance, contrast, and structure comparisons. A novel similarity measure based on the K-L transform is presented in this paper. It combines edge and texture components to provide a hierarchical description of image structure. We validate the performance of our algorithm with an extensive subjective study involving two sets of compressed images, the JPEG and JPEG2000 images from the LIVE website. The experimental results show that the proposed quality metric has a high correlation with the subjective scores and outperforms SSIM.

Cheng Jiang, Fen Xiao, Xiaobo He
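For reference, the SSIM baseline being compared against can be sketched in its simplest form, evaluated once over the whole image rather than with the usual sliding window (the constants C1, C2 follow the standard SSIM formulation; the test images are illustrative):

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """SSIM computed over the whole image as a single window,
    combining luminance, contrast, and structure terms."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + C1) * (2 * cxy + C2)) /
            ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))

rng = np.random.default_rng(0)
x = rng.random((16, 16))
noisy = np.clip(x + 0.2 * rng.normal(size=x.shape), 0.0, 1.0)
s_same = ssim_global(x, x)        # identical images score 1
s_noisy = ssim_global(x, noisy)   # degraded image scores below 1
```

SSIM compares raw pixel statistics; the paper's measure instead decomposes the images into edge and texture components via the K-L transform before comparison.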

A New Restoration Algorithm for Single Image Defogging

Fog is an atmospheric phenomenon that significantly degrades the visibility of outdoor scenes. This paper presents an algorithm to remove fog from a single image. The method estimates the transmission map of the image degradation model by assigning labels with an MRF model and optimizes the map estimation with the graph-cut based α-expansion technique. The algorithm has two steps: first, the transmission map is estimated using a dedicated MRF model combined with the bilateral filter; then, the restored image is obtained by plugging the estimated transmission map and the airlight into the image degradation model to recover the scene radiance. A comparative study against several state-of-the-art algorithms demonstrates that better-quality results can be obtained with the proposed method.

Fan Guo, Hui Peng, Jin Tang

An Improved Laparoscopic Image Registration Algorithm Based on Sift

Image registration is a recognized challenge, and many researchers are working to make their algorithms more efficient and robust. In image-guided surgical and interventional procedures, registration precision and real-time performance are both crucial for accurate tissue deformation recovery, 3D anatomical registration, and navigation. This article applies the Radon transform and a bidirectional matching approach to SIFT (Scale Invariant Feature Transform), aiming at registration in laparoscopic binocular vision. Finally, we test the new algorithm and obtain better experimental results compared with other common methods.

Jiujiang Zhou, Jianxu Mao, Xiaoyan He

Application of Image Processing Techniques in Infrared Detection of Faulty Insulators

Image processing techniques are essential to accurate infrared detection of faulty insulators. In this paper, we analyze the necessity of image processing techniques in the infrared detection of faulty insulators, survey the related techniques, and provide corresponding practical examples. The work done in this paper can contribute to the application of image processing techniques in the infrared detection of faulty insulators.

Yefan Wu, Jiangang Yao, Tangbing Li, Peng Fu, Wei Liao, Mi Zhang

Section V: Video Processing and Analysis

Finding the Accurate Natural Contour of Non-rigid Objects in Video

Non-rigid object tracking is an important task in computer vision, and extracting an object's natural contour is one of the most difficult problems in the process. Most tracking-by-detection methods are based on rectangular bounding boxes, which introduces errors into subsequent detection. This paper presents a novel superpixel-based detector for accurate natural contour extraction, with three main contributions: 1) combining real-time superpixel segmentation with natural contour detection; 2) an object-oriented natural contour extraction method for non-rigid objects; 3) a non-rigid object detection method based on a flexible scanning window. Compared with bounding-box based detection methods, our detector provides a very accurate initial object model and then produces an accurate natural contour of the non-rigid object. It breaks with conventional detection based on a scanning rectangle, greatly reducing the interference caused by background information. The experiments show that the proposed method outperforms state-of-the-art algorithms in both contour accuracy and computation cost. In addition, the initialization stage of our method overcomes the limitation of HT caused by the size of the initial bounding box.

Gaoxuan Ying, Sheng Liu, Yiting Jin

An Improved Multipitch Tracking Algorithm with Empirical Mode Decomposition

Multipitch tracking is beneficial for speech separation, audio transcription, and many other tasks. In this paper, we substantially improve a state-of-the-art multipitch tracking algorithm. While previous algorithms used the amplitude and individual peak positions of the autocorrelation function (ACF), we propose a novel feature based on the average frequency of each time-frequency (T-F) unit, computed with empirical mode decomposition (EMD). This feature is used to form the conditional probabilities of the hidden Markov model (HMM) given the pitch state of each frame, and the most likely state sequence is then decoded. Quantitative evaluations show that the novel feature is more effective and that our algorithm significantly outperforms the previous one.

Wei Jiang, Wen-Ju Liu, Ying-Wei Tan, Shan Liang
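The ACF peak-picking baseline that the EMD-based feature augments can be sketched for the single-pitch case; sampling rate, search range, and the test tone below are illustrative:

```python
import numpy as np

def acf_pitch(x, fs, fmin=80.0, fmax=400.0):
    """Estimate a single pitch by picking the autocorrelation peak lag
    inside the plausible pitch range [fmin, fmax]."""
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(acf[lo:hi + 1])
    return fs / lag

fs = 8000
t = np.arange(0, 0.05, 1.0 / fs)
x = np.sin(2 * np.pi * 200.0 * t)     # a clean 200 Hz tone
f0 = acf_pitch(x, fs)                 # recovers 200 Hz
```

With two concurrent voices the ACF peaks of each pitch interleave and its amplitude becomes ambiguous — which is the failure mode that motivates replacing raw ACF peak features with the per-unit average-frequency feature in the paper's HMM.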

Robust Appearance Learning for Object Tracking in Challenging Scenes

This paper studies appearance learning for object tracking in challenging scenes. We propose a new appearance modeling approach in a deep learning architecture for object tracking. A visual prior is learned from a large set of unlabeled images and then transferred to the appearance model during tracking. Traditional trackers usually track before updating at every input image, so drift may occur under complex appearance variations. We instead propose to update the appearance model before tracking, which effectively prevents tracking failures under complex appearance changes; motion parameter estimation also becomes more accurate with the updated appearance model. Experimental results on challenging videos demonstrate the robustness and accuracy of the proposed algorithm compared with several state-of-the-art approaches.

Jianwei Ding, Yunqi Tang, Huawei Tian, Yongzhen Huang

Vehicle Recognition for Surveillance Video Using Sparse Coding

This paper presents a vehicle recognition approach for a real transportation surveillance system using sparse coding. A comparison between sparse coding and the conventional histogram of oriented gradients (HOG) has been carried out; the results show that the learned sparse-coding feature outperforms the HOG feature in this vehicle recognition application. Experiments also indicate that overlapping spatial pooling over the learned sparse codes can improve accuracy considerably.

Shirong Zeng, Xin Niu, Yong Dou

Video Smoke Detection Based on the Optical Properties

Video smoke detection has many advantages, such as fast response and non-contact sensing, but current video detection methods are either complicated or unreliable. This paper presents a method suitable for ordinary video smoke detection that analyzes the optical properties of smoky images. The optical factors studied include scene radiance, medium transmission, path length, and total scattering coefficient. Different scene radiances represent different objects, so scene radiance helps us recognize the suspected, almost static areas that may contain smoke. Moreover, the total scattering coefficient increases with the growing number of atmospheric particles caused by smoke, which in turn decreases the medium transmission; a decision rule based on this finding narrows down the suspected smoky region. The experimental results show that this method is effective and practical.

Yingjing Wu, Ying Hu

Discovery of the Topical Object in Commercial Video: A Sparse Coding Method

In this paper, we propose a topical object discovery method for commercial video. The method uses the objectness measure to generate object candidates from the key frames of the video, and then a sparse coding method is developed to discover the most topical object. Such a method provides ranked results, so we can easily select the most topical object. Experimental validation on 10 videos shows that the sparse coding method performs better than existing topic mining methods.

Yunhui Liu, Huaping Liu, Fuchun Sun

Study the Moving Objects Extraction and Tracking Used the Moving Blobs Method in Fisheye Image

This paper discusses a method for moving object detection and tracking in fisheye video sequences based on moving blobs. A fisheye lens has a very large angle of view and is well suited to blind-spot-free surveillance systems, but the large distortion of fisheye images makes intelligent functions difficult to achieve. The processing algorithm is discussed in three steps. First, moving blobs are computed in the fisheye image with four main algorithms: background extraction, background update, subtraction of the fisheye video sequence from the background to obtain the moving blobs, and shadow removal for the blobs in RGB space. Second, an algorithm determines whether each extracted blob is a real moving object by counting pixels against a threshold, discarding false moving objects. Last, a tracking algorithm follows the selected moving blobs by their geometric centers. Experiments indicate that each algorithm processes moving objects in fisheye video sequences effectively, and moving objects can be detected effectively and stably. When too many objects are at the edge of the image, it is difficult to track each one because of adhesions caused by the large distortion. The method can be used in large-area fisheye surveillance systems when not too many objects move simultaneously.

Wu Jianhui, Zhang Guoyun, Yuan Shuai, Guo Longyuan, Tan Mengxia

Section VI: Biometric and Action Recognition

A Non-negative Low Rank and Sparse Model for Action Recognition

In this paper, we present a new method for video action recognition, with two main contributions. First, we propose local-coordinate-contained descriptors (LCCD) instead of appearance-only descriptors: global geometric correspondence is encoded by combining descriptors with their spatio-temporal locations, which differs from previous methods such as spatio-temporal pyramid matching (STPM) in that the spatio-temporal location becomes part of the coding step. Second, a novel non-negative low-rank and sparse coding model is developed to encode descriptors for action recognition. Motivated by low-rank matrix recovery and completion, local descriptors in a spatio-temporal neighborhood are similar and should therefore be approximately low rank; the objective function seeks non-negative low-rank and sparse coefficients for the local descriptors. The learned coefficients capture location information and descriptor structure, improving the discriminability of the representations. Experiments validate that our method achieves state-of-the-art results on two benchmark datasets.

Biyun Sheng, Wankou Yang, Baochang Zhang, Changyin Sun

Extreme Learning Machine Based Hand Posture Recognition in Color-Depth Image

Hand posture recognition is one of the most challenging problems in computer vision, especially in scenes with complex backgrounds and illumination variance. This paper presents a real-time hand posture recognition method for color-depth images. To accurately locate hands in images with complex backgrounds, a depth-histogram-based adaptive thresholding method is applied to the depth image and Bayesian skin-color detection is performed on the corresponding color image. The two results are then fused and refined with a region-growing method. Finally, the histogram-of-gradients feature of the hand posture is fed to an Extreme Learning Machine classifier to recognize different postures. Experiments show that the proposed method runs in real time and achieves high recognition accuracy.

Zhen Zhou, Shutao Li, Bin Sun
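The Extreme Learning Machine at the end of this pipeline (also used in the descreening paper above) is simple enough to sketch generically: a fixed random hidden layer followed by a least-squares readout, with no backpropagation. This is a generic ELM on synthetic 2-D data, not the paper's color-depth feature pipeline; all sizes are illustrative:

```python
import numpy as np

def elm_train(X, y, hidden=100, seed=0):
    """Extreme Learning Machine: random fixed hidden layer with tanh
    activation, then least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # random, never trained
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)   # XOR-like toy labels
W, b, beta = elm_train(X, y)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
```

Because training reduces to one linear solve, ELM fits the real-time constraint these papers emphasize; multi-class posture recognition would use a one-hot target matrix in place of the scalar labels here.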

Real-Time Human Detection Based on Optimized Integrated Channel Features

We propose optimized integrated channel features that effectively improve detection performance at a frame rate of 30 fps on 640×480 images. The proposed method uses the distribution of filter responses from positive and negative features to formulate an optimized combination of multiple filters. The combination coefficients are learned with a linear discriminative criterion, which is superior to integrated channel features with random coefficients. Experimental results on the INRIA dataset show the superiority of our method over other state-of-the-art methods.

Jifeng Shen, Xin Zuo, Wankou Yang, Guohai Liu

Facial Feature Extraction Based on Robust PCA and Histogram

Inspired by the recently proposed robust principal component analysis (RPCA), in this paper we present a feature extraction method for robust face recognition in the presence of random pixel corruption and occlusion. Unlike most work focusing on the low-rank structure recovered by RPCA, we argue that the sparse error component contains more discriminating power, which is essential to face recognition. To capture the intensity distribution of the sparse error component, a histogram-based sparsity measure is introduced for feature extraction. Compared with related state-of-the-art methods, experimental results on the Extended Yale B database verify the advantage of the proposed method for partially corrupted and occluded face images.

Xiao Luan, Weisheng Li

Multimodal Finger Feature Fusion and Recognition Based on Delaunay Triangular Granulation

For personal identification, three finger modalities can each be used on their own: fingerprint (FP), finger-vein (FV), and finger-knuckle-print (FKP). Fusing these modalities into a single biometric measure naturally highlights the finger's superiority in convenience and universality while improving recognition accuracy. In this paper, a new finger recognition method based on granular computing is proposed. It synergistically combines FP, FV, and FKP features at the feature level and is robust to finger pose variation. The proposed granular space is constructed bottom-up with three granule layers, and a coarse-to-fine scheme is used for granule matching. Experiments on a self-built database with the three modalities validate the proposed method for personal identification.

Jinjin Peng, Yanan Li, Ruimei Li, Guimin Jia, Jinfeng Yang

Robust Face Recognition via Facial Disguise Learning

The sparse representation based classifier (SRC) has been successfully applied to robust face recognition (FR) with various disguises. Following SRC, regularized robust coding (RRC) was recently proposed for more robustness to facial occlusion by designing a new robust representation residual term. Although RRC has achieved leading performance, it ignores the prior knowledge embedded in facial disguises. In this paper, we propose a novel facial disguise learning (FDL) model, in which the unknown occlusion pattern in the query image is learned using a collected disguise mask dictionary. Two learning strategies with an iterative reweighted coding algorithm, independent FDL and joint FDL, are presented. The experiments on face recognition with disguise clearly show the advantage of the proposed FDL in accuracy and efficiency.

Meng Yang, Linlin Shen

A Static Hand Gesture Recognition Algorithm Based on Krawtchouk Moments

Owing to its convenience and naturalness, hand gesture recognition has been widely used in various human-computer interaction (HCI) systems. In this paper, we address the problem from a system perspective and present a static hand gesture recognition algorithm based on Krawtchouk moments. The effect of the order and number of Krawtchouk moments on recognition performance is studied in detail. In the experiments, 15 popular gesture signs are used to verify the effectiveness of the presented hand gesture recognition system. Experimental results demonstrate that lower-order Krawtchouk moments are more suitable for classification. Furthermore, the number of Krawtchouk moments also has a significant impact on recognition accuracy.

Shuping Liu, Yu Liu, Jun Yu, Zengfu Wang

Face Recognition in the Wild by Mining Frequent Feature Itemset

Face recognition has attracted a lot of attention in recent decades and achieved high recognition rates under controlled environments. More and more researchers now focus on face recognition in the wild, which is difficult because of variance in pose, illumination, occlusion and so on. In this paper, we aim to solve this problem by combining image retrieval and feature weighting. By an image retrieval method, we find the face images in the gallery set that are most similar to the probe face image. After obtaining this similar face subset, feature weighting is executed on it in two steps. In the first step, we learn a weight for each single feature in the subset by finding its nearest neighbor. In the second step, inspired by the frequent itemset mining method, we learn a weight for each group of features. In the testing process, by weighted nearest neighbor voting over both single and grouped features, we classify the probe image into the class with the highest similarity score. We evaluate our method on the AR and Pubfig83 face datasets. Experiments show that our method achieves state-of-the-art performance.

Yuzhuo Wang, Hong Cheng, Yali Zheng, Lu Yang

Single-Sample Face Recognition via Fusion Variant Dictionary

This paper presents a novel method called sparse representation based classification via fusion variant dictionary (FSRC) for single-sample face recognition. There are two points to be highlighted in our method: (1) A specific preprocessing step is introduced to make the gray levels of the testing sample distribute uniformly. (2) A fusion variant dictionary is proposed, comprising two parts: the first part is an intra-class variant term, which helps represent moderate illuminations, expressions and disguises; the second part is a noise term, which helps remove the common noise (caused by pixel noise, severe illumination or our preprocessing step) in testing samples. Extensive experiments on public face databases demonstrate the advantages of the proposed method over state-of-the-art methods, especially in dealing with image corruption and severe illumination.

Ying Tai, Jian Yang, Jianjun Qian, Yu Chen

Supervised Kernel Construction for Unsupervised PCA on Face Recognition

This paper aims to establish a novel framework for high-performance Mercer kernel construction. Based on a given kernel matrix incorporating the class label information, a nonlinear mapping is firstly generated and well-defined on the training samples. The partial data-defined mapping can be extended and well-defined on the entire pattern space by means of interpolatory technology. The analytic expression of the nonlinear mapping is then obtained. It is theoretically shown that the function K(x, y), created by the inner product of the nonlinear mapping, is a supervised Mercer kernel function. Our supervised kernel is successfully applied to the unsupervised principal component analysis (PCA) method for face recognition. Two face databases, namely the ORL and FERET databases, are selected for evaluation. Compared with KPCA with an RBF kernel (RBF-PCA), experimental results demonstrate that KPCA with our supervised kernel (SK-PCA) has superior performance.

Yang Zhao, Wen-Sheng Chen, Binbin Pan, Bo Chen
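
A minimal numpy sketch of the overall idea, a label-informed Mercer kernel plugged into kernel PCA. The convex mix of an "ideal" label kernel with an RBF kernel used here is a common textbook stand-in, not the interpolation-based construction of the paper; `alpha` and `gamma` are hypothetical parameters.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise squared distances, then the Gaussian RBF kernel
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def supervised_kernel(X, y, alpha=0.5, gamma=0.5):
    # Ideal kernel: 1 for same-class pairs, 0 otherwise (label information)
    ideal = (y[:, None] == y[None, :]).astype(float)
    return alpha * ideal + (1 - alpha) * rbf_kernel(X, gamma)

def kernel_pca(K, n_components=2):
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one  # double centering
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    return Kc @ vecs / np.sqrt(np.maximum(vals, 1e-12))  # sample projections
```

A convex combination of two Mercer kernels is again a Mercer kernel, which is what makes this simple mix a legitimate stand-in for the supervised kernel.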

Section VII: Biomedical Image Analysis

Medical Image Clustering Based on Improved Particle Swarm Optimization and Expectation Maximization Algorithm

We propose a hybrid clustering algorithm based on an improved particle swarm optimization (PSO) algorithm and the EM clustering algorithm to overcome the shortcomings of the EM algorithm, which is sensitive to initial values and easily sinks into local minima. First, the optimal cluster number of the dataset is found with the improved PSO algorithm, whose inertia weight automatically increases and decreases along a polyline, in order to obtain the initial parameters of the mixture model. Then the mixture density model of the image data is built by multiple iterations of the EM algorithm. Finally, all the pixel values of the image are assigned to the corresponding branches of the mixture model with the Bayesian criterion to obtain the classification of the image data. The proposed algorithm can increase the diversity of EM clustering initialization and promote optimization search in the global scope. Simulation results prove its accuracy and validity.

Zheng Tang, Yu-Qing Song, Zhe Liu
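
The EM half of the hybrid scheme can be sketched for a two-component 1-D Gaussian mixture, with the `means` argument standing in for the initialization that the improved PSO search would supply (the PSO itself is omitted here as an assumption about the pipeline, not the authors' code):

```python
import numpy as np

def em_gmm_1d(x, means, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture; `means` is the
    initialization (here it would come from the improved PSO search)."""
    mu = np.asarray(means, dtype=float).copy()
    sigma = np.ones(2)
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma
```

A good initialization (as the PSO stage is meant to provide) is exactly what keeps this loop out of the poor local maxima the abstract mentions.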

Medical Image Fusion by Combining Nonsubsampled Contourlet Transform and Sparse Representation

In this paper, we present a novel medical image fusion method by taking the complementary advantages of two powerful image representation theories: the nonsubsampled contourlet transform (NSCT) and sparse representation (SR). In our fusion algorithm, the NSCT is firstly performed on each of the pre-registered source images to obtain the low-pass and high-pass coefficients. Then, the low-pass bands are merged with a SR-based fusion approach, and the high-pass bands are fused by employing the absolute values of coefficients as the activity level measurement. Finally, the fused image is obtained by performing the inverse NSCT on the merged coefficients. Several sets of medical source images with different combinations of modalities are used to test the effectiveness of the proposed method. Experimental results demonstrate that our method offers clear advantages over fusion methods based on NSCT or SR individually, in terms of both visual quality and objective assessments.

Yu Liu, Shuping Liu, Zengfu Wang
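
The two fusion rules can be sketched with a crude two-band split: a box blur stands in for the NSCT decomposition and plain averaging stands in for the SR-based low-pass rule, so only the absolute-max high-pass rule matches the abstract directly. Everything else is a simplification for illustration.

```python
import numpy as np

def box_blur(img, k=5):
    # Simple k x k mean filter as a stand-in low-pass decomposition
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(img_a, img_b):
    low_a, low_b = box_blur(img_a), box_blur(img_b)
    high_a, high_b = img_a - low_a, img_b - low_b
    low_f = 0.5 * (low_a + low_b)            # stand-in for the SR-based rule
    high_f = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low_f + high_f                    # inverse of the two-band split
```

The absolute-max rule keeps, at each pixel, the detail coefficient from whichever source image is locally more salient, which is the intuition behind the high-pass rule in the paper.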

Automated Segmentation and Tracking of SAM Cells

In this paper, we propose an automated segmentation and tracking system for shoot apical meristem (SAM) cells. Cells are segmented using a mixed-filter-based watershed segmentation method, which proves to be very robust and efficient. After segmentation, a Triangle Neighborhood Structure matching method is proposed to track the segmented cells across different time instances. Our tracking method reduces the dependence on neighbors, because we need only two neighbors of any cell for matching, while other local graph matching methods require a much larger number of neighbors. Using the proposed segmentation and tracking system, we are able to track 97% of the plant SAM cells.

Min Liu, Peng Xiang

Automatic Estimation of Muscle Thickness in Ultrasound Images Based on Revoting Hough Transform (RVHT)

As an important parameter related to musculoskeletal functions, muscle thickness has been studied for various purposes. However, muscle thickness is usually measured manually by an experienced clinical expert, which is subjective and time consuming, and there are few studies on automatic tracking of muscle thickness during dynamic contraction. In this paper, we propose a modified Hough transform (HT) to achieve quantitative and continuous measurement of muscle thickness in ultrasound images. The method involves three steps: image enhancement, localization of the superficial and deep aponeuroses by RVHT, and computation of the distance between the aponeuroses. The performance of the new method is evaluated using ultrasound images of the gastrocnemius muscles of seven patients. The results of the proposed method are also compared to manual detection and to another method based on the Compressive Tracking Algorithm (CTA) applied in our previous work. The experiments demonstrate that the proposed method agrees well with manual measurement and provides a more convenient and effective approach than the CTA. It could be used for objective muscle thickness tracking in musculoskeletal ultrasound images.

Jianhao Tan, Xiaolong Li, Wentao Zhang, Yaoqin Xie, Yongjin Zhou
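
The aponeurosis-locating step can be illustrated with a basic Hough transform; thickness then follows as the rho-distance between the two strongest near-horizontal line peaks. This is a textbook HT sketch, not the revoting variant of the paper.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate (rho, theta) votes for a binary edge image.

    Returns the accumulator, the theta values (radians), and the rho
    offset `diag` so that accumulator row r corresponds to rho = r - diag.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta) - 90)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # Each edge point votes once per discretized angle
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag
```

For two horizontal image lines, the two strongest peaks share the same theta and their rho difference is the pixel distance between the lines, which is the quantity the thickness computation needs.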

Influence of Scan Duration on the Reliability of Resting-State fMRI Regional Homogeneity

Regional homogeneity (ReHo) is widely used in the analysis of fMRI data of patients with schizophrenia. However, the influence of scan duration on the results is not clear. In this work, the intraclass correlation coefficient (ICC) was applied to investigate the reliability of the popular KCC-ReHo algorithm, using resting-state fMRI data of schizophrenia patients. The full-length 6-minute data collected were split into segments of six different durations, from 1 min to 6 min in 1-min steps. With increasing scan duration, the mean ICC value of the whole brain is found to increase monotonically from 0.55 to 0.97, and the standard deviation decreases from 0.21 to 0.02. The high ICC values mainly occurred in the superior parietal gyrus, paracentral lobule, dorsolateral superior frontal gyrus, supplementary motor area, fusiform gyrus and inferior temporal gyrus of both hemispheres.

Xiaotang Li, Jiansong Zhou, Xiaoyan Liu
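
The KCC in KCC-ReHo is Kendall's coefficient of concordance computed over the time series of a voxel and its neighbours; a minimal tie-free implementation is sketched below (tie correction is omitted for brevity, which is an assumption about the input data):

```python
import numpy as np

def kendalls_w(ts):
    """Kendall's coefficient of concordance (KCC) for a (k, n) array of
    k time series with n time points each; this is the ReHo statistic
    over a voxel and its neighbours. Assumes no tied values."""
    k, n = ts.shape
    ranks = ts.argsort(axis=1).argsort(axis=1) + 1  # rank within each series
    r = ranks.sum(axis=0)                           # rank sums per time point
    s = ((r - r.mean()) ** 2).sum()
    return 12.0 * s / (k ** 2 * (n ** 3 - n))
```

W ranges from 0 (no agreement among the neighbouring time series) to 1 (perfect rank agreement), which is why ReHo is read as a local synchronization measure.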

A Global Eigenvalue-Driven Balanced Deconvolution Approach for Network Direct-Coupling Analysis

It is an important and unsettled issue to distinguish direct dependencies from indirect ones without any prior knowledge in biological and social networks, which contain important biological features and co-authorship information. We present a new algorithm, called balanced network deconvolution (BND), that exploits eigen-decomposition and the statistical behavior of the eigenvalues of random symmetric matrices. Notably, BND is a parameter-free algorithm that can be directly applied to different networks. Experimental results establish BND as a robust and general approach for filtering the transitive noise on various input matrices generated by different prediction algorithms.

Hai-Ping Sun, Hong-Bin Shen

Sequence-Based Prediction of Protein-Protein Binding Residues in Alpha-Helical Membrane Proteins

A specific number of chains form alpha-helical membrane protein complexes in order to realize the biochemical function, i.e. as gateways to decide whether specific substances can be transported across the membrane or not. However, few structures of membrane proteins have been solved. The knowledge of protein-protein binding residues can help biologists figure out how the function works and solve the 3D structures.

We present a novel, sequence-based method to predict protein-protein binding residues from primary protein sequences using machine learning classifiers. We use a support vector regression model to predict relative solvent accessibility from sequence-based features, including the position-specific scoring matrix, conservation score, z-coordinate prediction, secondary structure prediction, physical parameters and sequence length. Afterwards, combining the features mentioned above with the predicted solvent accessibility, we use ensemble support vector machines to predict protein-protein binding residues. To the best of our knowledge, there is no existing method for predicting protein-protein binding residues in alpha-helical membrane proteins. Our method outperforms MAdaBoost, successfully used in predicting protein-ligand binding residues, and the random forest used to predict protein-protein binding residues from surface residues. We also assess the importance of each individual type of feature. The PSSM profile and conservation score are shown to be the most effective features for predicting protein-protein binding residues in alpha-helical membrane proteins.

Feng Xiao, Hong-Bin Shen

Section VIII: Document and Speech Analysis

Robust Voice Activity Detection Using the Combination of Short-Term and Long-Term Spectral Patterns

In this paper, we present a robust voice activity detection (VAD) algorithm using the combination of short-term and long-term spectral patterns. We analyze the benefits of short-term and long-term spectral patterns, respectively, when applied to robust VAD. Based on this analysis, we find that the combination of short-term and long-term spectral patterns achieves higher VAD accuracy than either of them alone in noisy environments. We evaluate its performance under four types of noise and six signal-to-noise ratio (SNR) conditions. Compared with standard VAD schemes, the evaluation demonstrates promising results, with the proposed scheme being comparable or favorable over the whole test set across various VAD evaluation criteria.

Ying-Wei Tan, Wen-Ju Liu
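
A toy sketch of combining a short-term feature with its long-term counterpart for VAD. Log frame energy is used here instead of the spectral patterns of the paper, and the frame length, smoothing window, and threshold are arbitrary assumptions; the point is only the AND-combination of the two time scales.

```python
import numpy as np

def vad(signal, frame_len=160, long_win=5, thresh=-30.0):
    """Toy VAD combining a short-term frame feature (log energy) with a
    long-term feature (log energy averaged over `long_win` frames)."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    short = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    kernel = np.ones(long_win) / long_win
    long_term = np.convolve(short, kernel, mode='same')
    return (short > thresh) & (long_term > thresh)  # AND-combination
```

Requiring both time scales to agree suppresses isolated noisy frames that a short-term feature alone would misclassify as speech.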

Speech Emotion Recognition Based on Coiflet Wavelet Packet Cepstral Coefficients

A wavelet-packet-based adaptive filter-bank construction method is proposed for speech signal processing in this paper. On this basis, a set of acoustic features is proposed for speech emotion recognition, namely Coiflet Wavelet Packet Cepstral Coefficients (CWPCC). CWPCC extends the conventional Mel-Frequency Cepstral Coefficients (MFCC) by adapting the filter-bank structure to the decision task. A speech emotion recognition system is constructed with the proposed feature set and a Gaussian mixture model as the classifier. Experimental results on the Berlin emotional speech database show that the Coiflet wavelet packet is more suitable for speech emotion recognition than other wavelet packets, and the proposed features improve emotion recognition performance over conventional features.

Yongming Huang, Ao Wu, Guobao Zhang, Yue Li
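
The cepstral step shared by MFCC and CWPCC, log compression of filter-bank energies followed by a DCT, can be sketched as follows; here the band energies are assumed to come from the wavelet-packet filter bank, which itself is not reproduced:

```python
import numpy as np

def cepstral_coeffs(band_energies, n_ceps=12):
    """Cepstral coefficients from a vector of filter-bank energies:
    log compression followed by a type-II DCT (as in MFCC; here the
    energies would come from the wavelet-packet filter bank)."""
    log_e = np.log(np.asarray(band_energies, dtype=float) + 1e-12)
    m = len(log_e)
    k = np.arange(n_ceps)[:, None]
    n = np.arange(m)[None, :]
    dct = np.cos(np.pi * k * (2 * n + 1) / (2 * m))  # DCT-II basis
    return dct @ log_e
```

Because the DCT basis vectors for k > 0 are orthogonal to a constant, a flat band-energy spectrum maps to zeros in all but the zeroth coefficient, which is the decorrelating property the cepstral step is chosen for.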

Text Detection in Natural Scene Images Leveraging Context Information

In this paper, we propose a method leveraging context information for text detection in natural scene images. Most existing methods just utilize hand-engineered features to describe the text area, but we focus on building a confidence map model by integrating the candidate appearance and the relationships with its adjacent candidates. A three-layer filtering strategy is designed to judge the category of the text candidates, which can remove abundant non-text regions. In order to retrieve the missing text regions, a context fusion step is performed. Finally, the remaining connected components (CCs) are grouped into text lines and further verified, and then the text lines are broken into separate words. Experimental results on two benchmark datasets, i.e., ICDAR 2005 and ICDAR 2013, demonstrate that the proposed approach achieves performance competitive with state-of-the-art algorithms.

Runmin Wang, Nong Sang, Changxin Gao, Xiaoqin Kuang, Jun Xiang

Adaptive Local Receptive Field Convolutional Neural Networks for Handwritten Chinese Character Recognition

The success of convolutional neural networks (CNNs) in the field of image recognition suggests that local connectivity is one of the key issues in exploiting the prior information of structured data. But the problem of selecting the optimal local receptive field still remains. We argue that the best way to select the optimal local receptive field is to let CNNs learn how to choose it. To this end, we first use different sizes of local receptive fields to produce several sets of feature maps; then an element-wise max pooling layer is introduced to select the optimal neurons from these sets of feature maps. A novel training process ensures that each neuron of the model has the opportunity to be fully trained. The results of experiments on handwritten Chinese character recognition show that the proposed method significantly improves the performance of traditional CNNs.

Li Chen, Chunpeng Wu, Wei Fan, Jun Sun, Satoshi Naoi

Character Segmentation for Classical Mongolian Words in Historical Documents

There are many classical Mongolian historical documents preserved in image form, and as a result it is inconvenient to search and mine the desired content. To facilitate word recognition in the document digitization procedure, this paper proposes a novel approach to segment historical words in which the characters are intrinsically connected and exhibit remarkable overlapping and variation. The approach consists of three steps: (1) significant contour point (SCP) detection on the approximated polygon of the word's external contour, (2) baseline locating based on a logistic regression model, and (3) segmentation path generation and validation based on heuristic rules and a neural network. The SCPs help in baseline locating and segmentation path generation. Experiments on the historical Mongolian Kanjur demonstrate that our approach can effectively locate the words' baselines and segment the words into characters.

Xiangdong Su, Guanglai Gao, Weihua Wang, Feilong Bao, Hongxi Wei

MCDF Based On-Line Handwritten Character Recognition for Total Uyghur Character Forms

This paper proposes the Modified Center Distance Feature (MCDF) and its different forms for Uyghur handwritten character recognition. Combined with some low-dimensional features, MCDF yields a remarkable recognition accuracy of 87.6% for the total set of Uyghur character forms. This result exceeds the previous record by more than 11 percentage points. Samples from 400 volunteers were used in the experiments.

Askar Hamdulla, Wujiahemaiti Simayi, Mayire Ibrayim, Dilmurat Tursun

Natural Scene Text Image Compression Using JPEG2000 ROI Coding

Regarding text regions as regions of interest (ROIs) and assigning a higher bit budget to ROIs than to the remaining regions, ROI-based text image compression can provide both higher quality for text regions and a higher compression ratio for the entire text image. JPEG2000 is a high-performance image compression standard for common images but has no special optimization for text images. After an image characteristic analysis, this paper proposes a natural scene text image compression method based on JPEG2000 ROI coding, in which the ROI coding parameters are optimized for text regions. In this stage, redundancy analysis is used to measure the compression capability of different regions, and the scale factors in ROI coding are adjusted adaptively. With the specially designed optimization, the proposed method is practical in real applications. The experimental results show improved compression performance, verifying the effectiveness of the optimization.

Yuanping Zhu, Li Song

Off-Line Uyghur Handwritten Signature Verification Based on Combined Features

An off-line Uyghur handwritten signature verification method based on combined features is proposed in this paper. First, the signature images were preprocessed using techniques adapted to Uyghur signatures; the preprocessing included noise reduction, binarization, and normalization. Then, global and local features, each comprising several sub-features, were extracted after preprocessing and combined together. Finally, two types of classifiers, a Euclidean distance classifier and a non-linear SVM classifier, were used to classify 75 genuine signatures and 36 random forgeries in our experiments. Two kinds of experiments were performed, varying the sizes of the training and testing datasets. Experiments indicate that the combination of directional features with local central point features obtained an FRR of 2.26% and an FAR of 2.97% with the SVM classifier. The experimental results indicate that the combination method can effectively capture the nature of Uyghur signatures and their writing style.

Kurban Ubul, Tuergen Yibulayin, Alimjan Aysa

Off-Line Signature Verification Based on Local Structural Pattern Distribution Features

Handwritten signature is a widely used biometric. The most challenging problem in automatic signature verification is to detect skilled forgeries, which are similar to the genuine signatures. This paper presents a novel method for extracting features for off-line signature verification. These features are based on a probability distribution function, which characterizes the distribution of frequent structural patterns in a signature image. Experiments were conducted on a publicly available signature database, the MCYT corpus. Experimental results show that the proposed method improves verification accuracy.

Jing Wen, MoHan Chen, JiaXin Ren
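
One plausible instance of a "structural pattern distribution" is a local-binary-pattern histogram over the signature image; the sketch below is an illustrative assumption, not the authors' exact pattern definition:

```python
import numpy as np

def lbp_histogram(img):
    """Normalized distribution of 3x3 local binary patterns, a simple
    stand-in for the structural-pattern distribution feature."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=int)
    for bit, (dy, dx) in enumerate(offs):
        # Each neighbour contributes one bit of the 8-bit pattern code
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= center).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

The resulting 256-bin distribution is translation-tolerant, which matches the motivation for using pattern frequencies rather than raw pixels in verification.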

Section IX: Pattern Recognition Applications

Coordination of Electric Vehicles Charging to Maximize Economic Benefits

Under the constraints of distribution transformer capacity and customer charging needs, a coordinated charging model for electric vehicles is proposed to maximize the overall economic benefits of charging stations based on the time-of-use (TOU) electricity price periods in power grids. A Monte Carlo simulation method is utilized to generate customer charging needs based on actual customers' charging profiles. The economic benefits of charging stations are simulated under uncoordinated and coordinated charging modes. Simulation results indicate that the economic benefits of the charging stations can be significantly improved by responding to the TOU electricity price.

Yongwang Zhang, Haoming Yu, Chun Huang, Wei Zhao, Min Luo

Traffic Sign Recognition Using Perturbation Method

Automatic traffic sign recognition (TSR) requires high accuracy and speed for real-time applications in intelligent transportation systems. Convolutional neural networks (CNNs) have yielded state-of-the-art performance on the public GTSRB dataset, but involve intensive computation. In this paper, we propose a traffic sign recognition method using computationally efficient feature extraction and classification techniques, with a perturbation strategy to improve accuracy. On the GTSRB dataset, using a gradient direction histogram feature and a learning vector quantization (LVQ) classifier achieves a test accuracy of 98.48%. Using simple perturbation operations of image translation, the accuracy is improved to 98.88%. This accuracy is higher than that of a single CNN, and the speed is much higher.

Lin-Lin Huang, Fei Yin
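
The LVQ classifier mentioned above can be sketched with the classic LVQ1 update rule: move the winning prototype toward same-label samples and away from different-label ones. This is a generic LVQ1, with learning-rate schedule and prototype counts chosen arbitrarily, not the paper's exact configuration.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=20, seed=0):
    """LVQ1: move the nearest prototype toward a sample if their labels
    match, away otherwise (learning rate decayed each epoch)."""
    rng = np.random.default_rng(seed)
    P = prototypes.astype(float).copy()
    for epoch in range(epochs):
        eta = lr * (1 - epoch / epochs)
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # winning prototype
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            P[j] += sign * eta * (X[i] - P[j])
    return P

def predict_lvq(P, proto_labels, X):
    # Nearest-prototype classification
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    return proto_labels[d.argmin(axis=1)]
```

At test time LVQ reduces to a nearest-prototype lookup, which is what makes it so much faster than a CNN forward pass.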

A Novel Two-Stage Multi-objective Ant Colony Optimization Approach for Epistasis Learning

Recently, genome-wide association studies (GWAS), which aim to discover genetic effects on phenotypic traits, have become a hot issue in genetic epidemiology. Epistasis, known as genetic interaction, is an important challenge in GWAS since it explains much individual susceptibility to complex diseases and is difficult to detect due to its non-linearity. Here we present a novel two-stage method based on multi-objective ant colony optimization for epistasis learning. We conduct extensive experiments on a wide range of simulated datasets and compare the outcome of our method with other recent epistasis learning methods such as AntEpiSeeker, Bayesian epistasis association mapping (BEAM) and the BOolean Operation-based Screening and Testing (BOOST) method, finding that our method has high power and is time-efficient in learning epistatic interactions. We also conduct experiments on the real late-onset Alzheimer's disease (LOAD) dataset, and the results substantiate that our method has potential for searching for suspicious epistasis in large-scale real GWAS datasets.

Peng-Jie Jing, Hong-Bin Shen

Hydraulic Excavators Recognition Based on Inverse ”V” Feature of Mechanical Arm

Detecting hydraulic excavators in videos can increase the confidence coefficient of illegal construction detection on nationalized land. Hydraulic excavators have multifarious working postures, making them a difficult target even for state-of-the-art object recognition algorithms. The contribution of this paper is to propose an inverse "V" model for hydraulic excavator detection. We describe a hydraulic excavator detection system based on the inverse "V" feature of the mechanical arm, which is formed by the boom and dipper. A real-time video processing method is then presented, which is used for monitoring illegal construction activities on state-owned land.

Wenming Yang, Dedi Li, Daren Sun, Qingmin Liao

Real-Time Traffic Sign Detection via Color Probability Model and Integral Channel Features

This paper aims to deal with real-time traffic sign detection. To this end, a two-stage method is proposed to reduce the processing time with little influence on the AUC (area under curve) value. In the first stage, a color probability model is proposed to transform an input image into probability maps. The traffic sign proposals are then extracted by finding maximally stable extremal regions on these maps. In the second stage, an integral channel features detector is employed to remove false positives from the proposals. Experiments on the GTSDB benchmark [1] show that the proposed color probability model achieves the highest recall rate, and the proposed two-stage method significantly improves computational efficiency with a good AUC value in comparison with state-of-the-art methods.

Yi Yang, Fuchao Wu
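
A color probability model of this kind is commonly built from quantized color histograms of sign and background training pixels; the Laplace-smoothed sketch below is a generic construction (bin count and smoothing are hypothetical), not necessarily the model of the paper:

```python
import numpy as np

def color_probability_model(sign_pixels, bg_pixels, bins=8):
    """Per-colour probability P(sign | colour) estimated from quantized
    RGB histograms of sign and background training pixels."""
    def hist3d(px):
        q = np.clip((px * bins / 256).astype(int), 0, bins - 1)
        idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
        return np.bincount(idx, minlength=bins ** 3).astype(float)
    h_sign, h_bg = hist3d(sign_pixels), hist3d(bg_pixels)
    return (h_sign + 1) / (h_sign + h_bg + 2)  # Laplace-smoothed posterior

def probability_map(img, model, bins=8):
    # Look up each pixel's quantized colour in the learned model
    q = np.clip((img * bins / 256).astype(int), 0, bins - 1)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    return model[idx]
```

The lookup is a single table indexing per pixel, which is what makes this first stage cheap enough for real-time proposal generation.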

Study of Charging Station Short-Term Load Forecast Based on Wavelet Neural Networks for Electric Buses

With the large-scale use of electric vehicles (EVs), a short-term load forecast method based on wavelet neural networks (WNN) for electric buses is proposed to analyze load characteristics, in order to better arrange transmission and distribution planning and regulate EV charging and discharging. The method uses measured data from a charging station in Guangdong. It is applied to predict the EV load data of two randomly selected test days and compared with a single BP network model. The statistical results show that the proposed method has higher accuracy than the BP network applied to short-term load forecasting of charging stations for electric buses, meeting practical application requirements.

Zhang Lei, Huang Chun, Yu Haoming

The Layout Optimization of Charging Stations for Electric Vehicles Based on the Chaos Particle Swarm Algorithm

Electric vehicles are an important part of the smart grid, and the location selection and capacity sizing of charging stations for electric vehicles have been a research hotspot in the field. In order to reasonably determine the scale and layout of charging stations for electric vehicles, a novel model for the location selection and capacity sizing of charging stations, considering time-space distribution, power losses and the cost of new lines, is established by taking the investment cycle costs and user convenience as indexes. Under the related constraints, the objective function comprises the initial investment of the new station, network loss costs, new line costs and electricity costs, and the target is to minimize the investment and user costs. The layout of charging stations is then optimized by the improved chaotic particle swarm algorithm: a chaotic sequence is formed and the mapping of the corresponding variable range is optimized through a logistic mapping function. Example analysis shows that the proposed method has better convergence properties than the standard particle swarm optimization (PSO) algorithm, offering a new way to lay out electric vehicle charging stations.

Zhang Zhenghui, Huang Qingxiu, Huang Chun, Yuan Xiuguang, Dewei Zhang

An Improved Feature Weighted Fuzzy Clustering Algorithm with Its Application in Short-Term Prediction of Wind Power

A short-term wind power forecasting method based on improved feature-weighted fuzzy clustering and an Elman neural network is proposed in this paper. Because the physical properties of wind characterize wind types with different importance, the paper introduces a weighting factor into the traditional FCM fuzzy clustering algorithm and synthetically clusters the data samples of historical wind types. Based on the clustering results, it dynamically establishes an Elman neural network model to predict the wind power output of the same cluster on the target day. Furthermore, the paper reports simulation experiments with measured data from a domestic wind farm, which prove the superiority and practicability of the proposed method.

Xinkun Wang, Diansheng Luo, Hongying He

Charging Load Forecasting for Electric Vehicles Based on Fuzzy Inference

Large-scale integration of electric vehicles (EVs) will pose great impacts on the power system due to their disorderly charging. Electric cars' charging load cannot be forecasted like the traditional power load, which is usually forecasted from historical data, so other methods are needed to predict electric vehicle charging load in order to improve the reliability and security of the grid. This paper analyzes the travel characteristics of electric vehicles, then uses a fuzzy inference system to emulate the process of drivers' decisions to charge their cars; the charging probability is thereby obtained at a given location. Finally, the daily profile of the charging load can be predicted according to the forecasted numbers of electric vehicles in Beijing.

Yang Jingwei, Luo Diansheng, Yang Shuang, Hu Shiyu
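
The fuzzy-inference step can be sketched with triangular memberships over hypothetical inputs (state of charge and parking duration) and two toy rules; the actual rule base, membership functions, and input variables of the paper are not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def charging_probability(soc, parking_hours):
    """Toy Mamdani-style inference: low battery and long parking raise
    the probability that a driver decides to charge."""
    low_soc = tri(soc, -0.4, 0.0, 0.6)          # membership: battery is low
    long_park = tri(parking_hours, 0.0, 8.0, 16.0)
    # Rule 1: low SOC AND long parking -> charge (min of antecedents)
    charge = min(low_soc, long_park)
    # Rule 2: high SOC -> do not charge
    high_soc = tri(soc, 0.4, 1.0, 1.6)
    no_charge = high_soc
    # Defuzzify as a weighted vote between the two rule outputs
    total = charge + no_charge
    return charge / total if total > 0 else 0.5
```

Summing such per-driver probabilities over the forecasted EV population at each hour would yield the daily charging-load profile the abstract describes.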

Security Event Classification Method for Fiber-optic Perimeter Security System Based on Optimized Incremental Support Vector Machine

Efficiently classifying fence climbing, fabric cutting, wall breaking and other environmental events is an imperative problem for fiber-optic perimeter security systems. To solve this problem, a security threat classification method based on an optimized incremental support vector machine is proposed. In this method, the artificial bee colony algorithm is introduced to optimize the penalty factor and kernel parameter of the incremental support vector machine under a specified fitness function, and the optimized incremental support vector machine is used to classify the perimeter security threats. To test the performance of the proposed method, experiments based on UCI datasets and actual vibration signals are conducted. Compared with support vector machines optimized by other algorithms, higher classification accuracy and less time consumption are achieved by the proposed method, demonstrating its effectiveness and engineering application value.

Lu Liu, Wei Sun, Yan Zhou, Yuan Li, Jun Zheng, Botao Ren

Backmatter
