Published in: EURASIP Journal on Wireless Communications and Networking 1/2018

Open Access 01.12.2018 | Research

Vehicle target detection methods based on color fusion deformable part model

Author: Dongbing Zhang



Abstract

In this paper, the traditional vehicle target detection method is improved, and a vehicle target detection method based on a color fusion deformable part model (DPM) is proposed. First, the traffic image is converted to the HSI color space; the information of each channel is then extracted and a DPM is trained for each channel, and the color fusion DPM is obtained by an adaptive fusion method. During vehicle detection, the color fusion DPM traverses the image with a sliding window; areas whose score exceeds the threshold are deemed vehicle targets. In the experimental phase, we first trained the color fusion DPM and then validated its validity and accuracy on images taken from real traffic junctions. The results show that the proposed method detects vehicle targets accurately and effectively. Compared with other vehicle target detection methods, it has a high detection rate and a low false positive rate, enabling accurate detection of vehicle targets in intelligent transportation.

1 Introduction

With the rapid development of intelligent transportation systems, vehicle target detection has become a popular research field, as it is an important part of the modern intelligent transportation system. The traditional vehicle detection method is to install an induction coil in the road to detect passing vehicles. The disadvantages of this method are that the road surface is damaged and that installation and maintenance of the system are inconvenient.
With the development of image processing technology, vehicle detection algorithms based on computer vision and image processing have been widely used [1]. Current vehicle target detection methods can be classified into two categories according to whether they rely on motion information. The first category uses motion information, as in the inter-frame difference method [2], the background difference method [3], and the optical flow method [4, 5]. These typical methods rely on the vehicle's motion information, and the detection algorithm fails when that information is lost. The second category does not rely on motion information and usually starts from the characteristics of the vehicle itself. Based on vehicle shape and posture, many scholars have proposed methods using modeling and template matching [6, 7]. Such methods usually build a vehicle model first and then match the model against the test image to find the vehicle target; they place high demands on the model and are susceptible to noise when the model does not match the actual situation. Based on appearance characteristics such as color and texture, other scholars have proposed feature-based methods [8, 9]. Such methods study the appearance differences between vehicles and non-vehicles in color, texture, and so on; they detect typical vehicle targets well but can easily miss vehicles whose color and texture are close to the ground. Relying on new technologies such as big data, some scholars have proposed methods based on statistics and machine learning [10–12]. These methods adopt the idea of statistical analysis, extract features suitable for vehicle detection, and use neural networks, support vector machines, and other classifiers for learning and testing; they achieve high accuracy but require a large number of samples for classifier training, their performance differs among classifiers, and a large amount of computation is needed. All of these methods have played a crucial role in advancing vehicle target detection. Building on them, this paper proposes a vehicle target detection method based on color fusion DPM, which takes into account both the vehicle's gradient information and its color information when modeling the vehicle, and can detect vehicle targets effectively and accurately.

2 Vehicle detection method based on color fusion DPM

2.1 DPM detection principle

The deformable part model (DPM) [13–15] is a target detection and classification method featuring high efficiency and high precision. DPM first establishes a model for the detection target; the model is divided into a root model and part models to make it more robust to variations in target shape and posture. The root model describes the overall characteristics of the target, and the part models describe its local detail features; the relationships among the parts, and between each part and the whole, are not fixed but are adaptively adjusted through weight vectors, so the model can deform to accurately detect targets of different shapes and postures. The method compares very well with other target detection methods and has achieved good detection results on data sets such as the PASCAL VOC benchmarks and INRIA Person.
In target detection, DPM first constructs the root filter and the part filters from the weight vectors. The root filter and each part filter are then correlated with the feature map (a dot product at every position) to obtain the score of each filter. Finally, the sum of the root filter score and the part filter scores is taken as the overall model score, and the position with the largest score is taken as the best target position. To make the model more robust during detection, the image is usually resampled to multiple resolutions and detection is carried out successively at each size.
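To make this step concrete, here is a minimal Python sketch of the two operations just described: the per-position dot product that yields a filter's score map, and the multi-resolution feature pyramid. The function names, the `extract_features` callable, and the pyramid parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import cv2  # assumed available; used here only for resizing


def filter_response(feat, filt):
    """Response map of one filter: at every position, the dot product
    between the filter weights and the feature subwindow beneath it."""
    H, W, D = feat.shape
    fh, fw, _ = filt.shape
    resp = np.empty((H - fh + 1, W - fw + 1))
    for y in range(resp.shape[0]):
        for x in range(resp.shape[1]):
            resp[y, x] = np.sum(feat[y:y + fh, x:x + fw, :] * filt)
    return resp


def feature_pyramid(image, extract_features, n_levels=5, scale=0.8):
    """Feature maps of progressively downscaled copies of the image, so
    fixed-size root/part filters can match vehicles of different sizes.
    `extract_features` is a hypothetical callable returning an H x W x D
    feature map (e.g., a grid of HOG cells)."""
    levels = []
    for k in range(n_levels):
        s = scale ** k
        resized = cv2.resize(image, None, fx=s, fy=s,
                             interpolation=cv2.INTER_AREA)
        levels.append((s, extract_features(resized)))
    return levels
```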
When detecting in images of different sizes, let the position of the root filter be p_0 and the position of the i-th part filter be p_i, i = 1, ..., m, where m is the number of parts. The size of the root filter is w_0 × h_0 and its weight vector is F_0; F_i is the weight vector of the i-th part filter. The score of the model at location (p_0, p_1, ..., p_m) is then:
$$ {\displaystyle \begin{array}{c}\mathrm{score}\left({p}_0,{p}_1,\dots, {p}_m\right)={F}_0\cdot \phi \left({p}_0,{w}_0,{h}_0\right)\\ {}+\sum \limits_{i=1}^m\left({F}_i\cdot \phi \left({p}_i,{w}_i,{h}_i\right)-\mathrm{cost}\left({du}_i,{dv}_i\right)\right)\end{array}} $$
(1)
where ϕ(p_i, w_i, h_i) is the feature vector of size (w_i, h_i) computed in the subwindow p_i, and (du_i, dv_i) is the deviation of the actual position p_i = (u_i, v_i) of the i-th part filter from its default position \( {\tilde{p}}_i=\left({\tilde{u}}_i,{\tilde{v}}_i\right) \) in the model, as shown in the following formula.
$$ \left({du}_i,{dv}_i\right)=\left({\tilde{u}}_i,{\tilde{v}}_i\right)-\left({u}_i,{v}_i\right) $$
(2)
The cost incurred when a part filter is offset from its default position is represented by the quadratic loss function shown in formula (3), whose adjustable parameters are α_i and β_i.
$$ \mathrm{cost}\left({du}_i,{dv}_i\right)={\alpha}_i\cdot \left({du}_i,{dv}_i\right)+{\beta}_i\cdot \left({\left({du}_i\right)}^2,{\left({dv}_i\right)}^2\right) $$
(3)
When the i-th part filter finds its best position, the difference between the part filter's score and its corresponding deformation loss takes its largest value. Therefore, the final score of the detection window at position p_0 is:
$$ {\displaystyle \begin{array}{c}\mathrm{score}\left({p}_0\right)=\underset{p_1,\dots, {p}_m}{\max}\mathrm{score}\left({p}_0,{p}_1,\dots, {p}_m\right)\\ {}={F}_0\cdot \phi \left({p}_0,{w}_0,{h}_0\right)+\sum \limits_{i=1}^m\underset{p_i}{\max}\left({F}_i\cdot \phi \left({p}_i,{w}_i,{h}_i\right)-\mathrm{cost}\left({du}_i,{dv}_i\right)\right)\end{array}} $$
(4)
After the final score of each position is obtained, target positions can be detected by applying a threshold.
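As a worked illustration of formulas (1)–(4), the following sketch scores one root position by adding, for each part, the best trade-off between the part filter's response and its quadratic deformation cost. It brute-forces the max over a small offset radius and keeps parts at the root resolution; these are simplifying assumptions (the reference DPM places parts at twice the root resolution and computes the max with distance transforms).

```python
import numpy as np


def deformation_cost(du, dv, alpha, beta):
    """Eq. (3): quadratic deformation cost; alpha and beta are 2-vectors
    weighting the linear and squared components of the offset."""
    return (alpha[0] * du + alpha[1] * dv
            + beta[0] * du ** 2 + beta[1] * dv ** 2)


def window_score(root_resp, part_resps, anchors, alphas, betas, p0, radius=4):
    """Eq. (4): root score at p0 = (u0, v0) plus, for each part, the
    maximum of (filter response - deformation cost) over nearby offsets."""
    u0, v0 = p0
    score = root_resp[v0, u0]
    for resp, (au, av), a, b in zip(part_resps, anchors, alphas, betas):
        cu, cv = u0 + au, v0 + av  # default (anchor) position of this part
        best = -np.inf
        for dv in range(-radius, radius + 1):
            for du in range(-radius, radius + 1):
                u, v = cu + du, cv + dv
                if 0 <= v < resp.shape[0] and 0 <= u < resp.shape[1]:
                    best = max(best,
                               resp[v, u] - deformation_cost(du, dv, a, b))
        score += best
    return score
```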

2.2 Vehicle detection algorithm of color fusion DPM

The traditional DPM performs well on typical vehicle targets and shows excellent detection performance on various data sets. However, vehicle detection under actual traffic conditions encounters all kinds of complex interference: vehicle colors close to the ground, zebra-crossing interference, dramatic lighting changes, shadows, and image noise. These pose a great challenge to the accuracy and robustness of vehicle detection. The traditional DPM is modeled mainly from the gradient features of the image, so the color information, which contains rich target features, is discarded. To overcome the complex and changing interference in real traffic, this paper proposes a vehicle detection algorithm based on color fusion DPM. The method retains the deformable relationship between the root model and the part models derived from gradient information, while using color information to reinforce the gradient information of each part, maximizing the stability and accuracy of vehicle target detection in complex environments.
The process of the vehicle detection algorithm based on color fusion DPM is shown in Fig. 1.
In order to incorporate the color information of the original image into the DPM, the image first undergoes color space conversion. The HSI color space is selected because it is more stable to changes in lighting, shading, and the like, and better reflects the color nature of the image. The conversion from an RGB image to an HSI image maps a unit cube in Cartesian coordinates to a double cone in cylindrical polar coordinates; the conversion separates out the brightness and decomposes the chroma into hue and saturation. In this paper, the piecewise definition method is used to perform the HSI color space conversion; the H, S, and I channel data of the converted image are extracted and saved as two-dimensional matrices M_h, M_s, and M_i, and the HOG features H_h, H_s, and H_i of the three matrices are calculated respectively.
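A minimal sketch of this step, assuming `scikit-image` is available for HOG extraction, is shown below; the HOG parameters are illustrative, since the paper does not specify them.

```python
import numpy as np
from skimage.feature import hog  # assumed available for HOG extraction


def rgb_to_hsi(img):
    """Piecewise RGB-to-HSI conversion: I is the mean of R, G, B;
    S measures distance from gray; H is the hue angle, here normalized
    to [0, 1]. Small epsilons guard the divisions."""
    rgb = img.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)
    return h, s, i  # the channel matrices M_h, M_s, M_i


def channel_hog_features(img):
    """HOG feature vectors H_h, H_s, H_i of the three channel matrices
    (parameter values are illustrative)."""
    return [hog(c, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2)) for c in rgb_to_hsi(img)]
```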
From the HOG features of the H, S, and I channels, the DPMs DPM_h, DPM_s, and DPM_i corresponding to each channel can be trained. To assess the DPM of each channel, we collected 1000 vehicle images to train the models and another 1000 images containing vehicles, ground, pedestrians, and other non-vehicle targets for testing. The results showed that, among the single-channel models, DPM_i performs best, with the lowest missed detection rate and false positive rate. However, when the color of the vehicle is close to the ground, or under severe shadow or lighting interference, DPM_h and DPM_s can detect vehicle targets that DPM_i cannot. Therefore, this paper uses a fusion approach to obtain the vehicle's color fusion DPM. Experiments show that blending the DPMs of the different channels with an adaptive weighting method preserves the advantages of each channel's own model and yields better detection performance than any single model. The color fusion DPM is shown below:
$$ {\mathrm{DPM}}_M={\omega}_h\times {\mathrm{DPM}}_h+{\omega}_s\times {\mathrm{DPM}}_s+{\omega}_i\times {\mathrm{DPM}}_i $$
(5)
where DPM_M is the color fusion DPM and ω_h, ω_s, and ω_i are the weighting coefficients of DPM_h, DPM_s, and DPM_i, respectively, representing the weights of the H, S, and I channel model components; the weights sum to 1. The weight of each channel is adaptively determined from the DPM of that channel. This paper uses the following determination method:
$$ {\omega}_h=\frac{{\mathrm{DPM}}_h}{{\mathrm{DPM}}_h+{\mathrm{DPM}}_s+{\mathrm{DPM}}_i} $$
(6)
$$ {\omega}_s=\frac{{\mathrm{DPM}}_s}{{\mathrm{DPM}}_h+{\mathrm{DPM}}_s+{\mathrm{DPM}}_i} $$
(7)
$$ {\omega}_i=\frac{{\mathrm{DPM}}_i}{{\mathrm{DPM}}_h+{\mathrm{DPM}}_s+{\mathrm{DPM}}_i} $$
(8)
After the color fusion DPM is obtained, the color fusion feature of the test image is computed in the same way, and the fusion deformable model is then scored over the fusion feature map by sliding-window traversal. When the score at a traversal position exceeds the threshold, the area is deemed a vehicle area, which constitutes the detection result.
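The fusion and detection steps can be sketched as follows, under one plausible reading of formulas (5)–(8) in which each channel DPM is summarized by a scalar response that sets its weight; the function names and the score-map representation are assumptions made for illustration.

```python
import numpy as np


def adaptive_weights(r_h, r_s, r_i):
    """Eqs. (6)-(8): each channel's weight is its share of the summed
    channel responses, so the three weights sum to 1 by construction."""
    total = r_h + r_s + r_i
    return r_h / total, r_s / total, r_i / total


def detect_vehicles(score_h, score_s, score_i, threshold):
    """Eq. (5) applied to per-channel sliding-window score maps, followed
    by thresholding: positions whose fused score exceeds the vehicle
    threshold are reported as detections."""
    w_h, w_s, w_i = adaptive_weights(score_h.sum(), score_s.sum(),
                                     score_i.sum())
    fused = w_h * score_h + w_s * score_s + w_i * score_i
    ys, xs = np.nonzero(fused > threshold)
    return list(zip(xs.tolist(), ys.tolist())), fused
```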

3 Vehicle detection test based on color fusion DPM

3.1 Training of color fusion DPM

Training and test images were taken from 2448 × 2048 pixel HD surveillance images collected at multiple traffic bays in Wuhan, forming a training image set and a test image set. The training samples for the color fusion DPM come from the training image set. During training, 1000 front-view vehicle images with different shapes, colors, and lighting conditions were selected as training samples; some are shown in Fig. 2. First, the HSI color space conversion is applied to the training samples; then the DPM of each channel is computed, and the color fusion DPM is obtained according to formula (5), as shown in Fig. 3.

3.2 Validation of vehicle detection method

To verify the validity of the vehicle detection method proposed in this paper, we selected images of vehicles with different shapes, colors, and lighting conditions from the test image set. The test platform is Visual Studio 2010, and the computer is configured with an Intel(R) Core(TM) i5-3230M CPU clocked at 2.60 GHz and 4.00 GB of memory.
Figure 4 shows an image of an actual traffic intersection. In the experiment, the image was traversed using each channel DPM and the color fusion DPM, and each sliding-window image was scored according to formula (4) and compared with the preset vehicle threshold. Part of the sliding windows are shown on the right of Fig. 4a; the corresponding scores are shown in Fig. 4b.
As can be seen from Fig. 4, both the single-channel scores and the fused scores in background areas 1–3 of the sliding-window images are obviously lower than the scores corresponding to vehicle area 4, and the fused DPM score separates vehicle from background better than any single-channel DPM score. For this image, we set the vehicle score threshold to 0.6; the vehicle target is then accurately detected, and non-vehicle targets are excluded.

4 Experiment results and analysis

To verify the accuracy of the vehicle detection method in this paper, snapshot images of 3000 vehicles at the traffic bays were randomly selected for testing, and the vehicle detection rate (DR) and vehicle false positive rate (FPR) were used as the evaluation criteria [16], as shown below:
$$ \mathrm{DR}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}} $$
(9)
$$ \mathrm{FPR}=\frac{\mathrm{FP}}{\mathrm{TP}+\mathrm{FP}} $$
(10)
In the above formulas, TP, FN, and FP respectively denote the number of vehicles correctly detected, the number of vehicles missed, and the number of non-vehicles falsely detected as vehicles. To examine the performance of the color fusion DPM method, the contribution of each channel to vehicle detection is evaluated: first the performance of each single-channel model is evaluated, and then the performance of the fusion model. If there is little difference between the performance of a single model and that of the fusion model, the channel contributes greatly, and vice versa. The test results are shown in Fig. 5.
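For concreteness, the two criteria reduce to the following computation; the counts in the usage comment are illustrative, not the paper's results.

```python
def detection_metrics(tp, fn, fp):
    """Eq. (9): DR = TP / (TP + FN); Eq. (10): FPR = FP / (TP + FP)."""
    dr = tp / (tp + fn)
    fpr = fp / (tp + fp)
    return dr, fpr


# Hypothetical example: of 3000 vehicles, 2760 detected and 240 missed,
# with 180 false alarms -> DR = 0.92, FPR ~= 0.061.
# dr, fpr = detection_metrics(2760, 240, 180)
```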
It can be seen from Fig. 5 that the performance of Merge-DPM is superior to that of any single-channel model; its DR and FPR are optimal. I-DPM contributes most to the overall model; its DR and FPR are the closest to the corresponding values of Merge-DPM. H-DPM and S-DPM contribute less than I-DPM but still contribute to the performance improvement of Merge-DPM. The reason is that each channel contains its own color information, so Merge-DPM can exploit as much color information as possible.
To further verify the detection performance of the proposed algorithm, it is compared with algorithms commonly used in vehicle target detection: the traditional DPM method [13], the HOG feature matching method [17], and color analysis [8]. Snapshot images of 3000 vehicles at the traffic bays were randomly selected for testing; the test and comparison results are shown in Fig. 6.
As seen from Fig. 6, the DR of the proposed method is higher than 90% and its FPR is lower than 10%, indicating better detection results than the other commonly used methods. Among the baseline methods, the DPM method outperforms the HOG method because DPM considers the deformability of the vehicle model, and the HOG method outperforms the color method because extracting vehicle features from color information alone is highly susceptible to interference. The method proposed in this paper accounts for the deformability of the vehicle model without discarding color information, and thus obtains the best detection results.

5 Conclusions

Based on the detection principle of DPM for vehicle targets and combined with vehicle color information, a vehicle target detection method based on color fusion DPM is proposed. First, the traffic image is converted to the HSI color space and the information of each channel is extracted; the color fusion DPM is then obtained by the adaptive fusion method. Finally, the fusion model is used for vehicle detection. The method retains the vehicle's color information on top of the traditional vehicle DPM. Experiments show that the proposed method is superior to commonly used vehicle detection methods and achieves a good vehicle detection effect, effectively addressing the practical vehicle target detection problems encountered in intelligent transportation. Future research will target the time consumption of the vehicle detection process and porting the method to hardware systems.

Acknowledgements

The paper was supported by the Key Scientific Research Project of Education Department of Anhui Province (Analysis and research on the safety behavior of the staff for traffic images) Grant No. KJ2018A0182.

Funding

The author acknowledges the Key Scientific Research Project of Education Department of Anhui Province (Analysis and research on the safety behavior of the staff for traffic images) Grant No. KJ2018A0182.

Availability of data and materials

To verify the validity of the vehicle detection method proposed in this paper, we selected images of vehicles with different shapes, colors, and lighting conditions from the test image set. The test platform is Visual Studio 2010, and the computer is configured with an Intel(R) Core(TM) i5-3230M CPU clocked at 2.60 GHz and 4.00 GB of memory.

Competing interests

The author declares that he has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. Z Sun, G Bebis, R Miller, On-road vehicle detection: A review. IEEE Trans. Pattern Anal. Mach. Intell. 28(11), 694–711 (2006).
2. W Li, J Yao, T Dong, et al., in International Congress on Image and Signal Processing. Moving vehicle detection based on an improved interframe difference and a Gaussian model (IEEE, New York, 2016), pp. 969–973.
3. Y Malinovskiy, YJ Wu, Y Wang, Video-based vehicle detection and tracking using spatiotemporal maps. Transportation Research Record: J. Transp. Res. Board 45(2121), 81–89 (2013).
4. A Gowacz, Z Mikrut, P Pawlik, in Multimedia Communications, Services and Security. Video detection algorithm using an optical flow calculation method (Springer, Berlin, Heidelberg, 2012), pp. 118–129.
5. H Chao, Y Gu, M Napolitano, A survey of optical flow techniques for robotics navigation applications. J. Intell. Robot. Syst. 73(1–4), 361–372 (2014).
6. A Jazayeri, H Cai, JY Zheng, et al., Vehicle detection and tracking in car video based on motion model. IEEE Trans. Intell. Transp. Syst. 12(2), 583–595 (2011).
7. JW Hsieh, LC Chen, DY Chen, Symmetrical SURF and its applications to vehicle detection and vehicle make and model recognition. IEEE Trans. Intell. Transp. Syst. 15(1), 6–20 (2014).
8. LW Tsai, JW Hsieh, KC Fan, Vehicle detection using normalized color and edge map. IEEE Trans. Image Process. 16(3), 850–864 (2007).
9. L Zhang, ZN Li, Adaptive HSV color background modeling for real-time vehicle tracking with shadow detection in traffic surveillance. J. Image Graph. 8(7), 60–64 (2003).
10. W Chu, Y Liu, C Shen, et al., Multi-task vehicle detection with region-of-interest voting. IEEE Trans. Image Process. PP(99), 1 (2017).
11. X Yuan, S Su, H Chen, A graph-based vehicle proposal location and detection algorithm. IEEE Trans. Intell. Transp. Syst. PP(99), 1–8 (2017).
12. Y Zhou, L Liu, L Shao, et al., Fast automatic vehicle annotation for urban traffic surveillance. IEEE Trans. Intell. Transp. Syst. PP(99), 1–12 (2017).
13. P Felzenszwalb, D McAllester, D Ramanan, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). A discriminatively trained, multiscale, deformable part model (IEEE, New York, 2008), pp. 1–8.
14. PF Felzenszwalb, RB Girshick, D McAllester, et al., Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1627–1645 (2010).
15. PF Felzenszwalb, RB Girshick, D McAllester, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Cascade object detection with deformable part models (IEEE, New York, 2010), pp. 2241–2248.
16. B Sharma, VK Katiyar, AK Gupta, et al., The automated vehicle detection of highway traffic images by differential morphological profile. J. Transp. Technol. 4(2), 150–156 (2014).
17. X Cao, C Wu, P Yan, et al., in IEEE International Conference on Image Processing. Linear SVM classification using boosting HOG features for vehicle detection in low-altitude airborne videos (IEEE, New York, 2011), pp. 2421–2424.
Metadata
Title: Vehicle target detection methods based on color fusion deformable part model
Author: Dongbing Zhang
Publication date: 01.12.2018
Publisher: Springer International Publishing
DOI: https://doi.org/10.1186/s13638-018-1111-8
