Detection of Scratches on Cars by Means of CNN and R-CNN

Cesar G. Pachón-Suescún, Javier Orlando Pinzón-Arenas, Robinson Jiménez-Moreno
Department of Mechatronics Engineering, Nueva Granada Military University, Bogotá D.C., 110111, Colombia
How to cite (IJASEIT):
Pachón-Suescún, Cesar G., et al. “Detection of Scratches on Cars by Means of CNN and R-CNN”. International Journal on Advanced Science, Engineering and Information Technology, vol. 9, no. 3, May 2019, pp. 745-52, doi:10.18517/ijaseit.9.3.6470.
Failure detection systems have become important not only in production processes; they are now needed in various everyday settings as well, for example, to detect physical damage to a car in a parking lot and thereby assure clients that their personal property will not be harmed inside the lot. This paper presents an algorithm based on convolutional neural networks and a region-based variant of them (CNN and R-CNN) that detects scratches on a car. First, an image of one side of a conventional car is captured, and an R-CNN is designed to extract only the region of the image where the car is located. The extracted region is then divided into multiple sections, and each section is evaluated by a CNN to determine in which parts of the vehicle the scratches are located. On the test images, a precision of 98.3% is obtained with the R-CNN and 96.89% with the CNN, demonstrating the robustness of the Deep Learning techniques implemented for the detection of car scratches. The processing times of the two algorithm stages, the R-CNN detection and the classification of the sections by the CNN, were 1.6563 and 1.264 seconds, respectively.
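
The abstract outlines a two-stage pipeline: a detector isolates the car region, the region is tiled into sections, and a small CNN classifies each section as scratched or not. The following Python sketch illustrates only that control flow under stated assumptions: the section size, the detect_car_region stand-in, and the classify_section heuristic are all hypothetical placeholders, not the networks trained in the paper.

# Minimal sketch of the pipeline described in the abstract, assuming a
# 64-px section size (not given in the paper). The detector and the
# classifier below are placeholders a real system would replace with a
# trained R-CNN and a trained scratch-classification CNN.
import numpy as np

SECTION = 64  # assumed section size in pixels

def detect_car_region(image):
    """Stand-in for the R-CNN stage: returns a bounding box (x, y, w, h).
    Placeholder: returns the whole frame instead of a detected car."""
    h, w = image.shape[:2]
    return 0, 0, w, h

def classify_section(section):
    """Stand-in for the scratch-classification CNN.
    Placeholder heuristic, not the paper's network."""
    return section.std() > 50

def find_scratches(image):
    # Stage 1: extract the car region from the captured image.
    x, y, w, h = detect_car_region(image)
    car = image[y:y + h, x:x + w]
    hits = []
    # Stage 2: tile the region into non-overlapping sections and
    # evaluate each one independently, as the abstract describes.
    for r in range(0, h - SECTION + 1, SECTION):
        for c in range(0, w - SECTION + 1, SECTION):
            if classify_section(car[r:r + SECTION, c:c + SECTION]):
                hits.append((x + c, y + r))  # top-left corner of section
    return hits

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    print(f"{len(find_scratches(frame))} sections flagged as scratched")

Evaluating fixed-size sections independently keeps the classifier small and localizes each detection to a specific tile of the bodywork, which matches the per-section evaluation the authors report.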

This work is licensed under a Creative Commons Attribution 4.0 International License.
