Invariant chromatic descriptor for LADAR data processing

  • Original Paper
  • Published:
Machine Vision and Applications

Abstract

A new LADAR data descriptor is proposed. This descriptor is produced by applying the chromatic methodology, using invariant spatial chromatic processors to extract features from the LADAR data. The developed descriptor has a high discrimination capability, is robust to the effects that disturb LADAR data, and requires less storage space and computational time for recognition. The performance of the proposed LADAR descriptor is evaluated using simulated LADAR data generated by special-purpose LADAR simulator software. The simulation results show a higher discrimination capability for the new descriptor than for traditional techniques such as the moment descriptor, which is used to benchmark the results. The results also show the robustness of the proposed descriptor in the presence of noise, low resolution, view change, rotation, translation, and scaling effects.


References

  1. Artist-3d model library. http://artist-3d.com/

  2. Meshlab Software. http://meshlab.sourceforge.net/

  3. Model Library 3dvia. http://www.3dvia.com/

  4. Al-Batah, M.S., Mat Isa, N.A., Zamli, K.Z., Sani, Z.M., Azizli, K.A.: A novel aggregate classification technique using moment invariants and cascaded multilayered perceptron network. Int. J. Mineral Process. 92(1–2), 92–102 (2009)

  5. Al-Temeemy, A.A., Spencer, J.W.: Invariant spatial chromatic processors for region image description. In: 2010 IEEE International Conference on Imaging Systems and Techniques (2010)

  6. Alghoniemy, M., Tewfik, A.H.: Geometric invariance in image watermarking. IEEE Trans. Image Process. 13(2), 145–153 (2004)

  7. Ankerst, M., Kastenmuller, G., Kriegel, H.P., Seidl, T.: 3d shape histograms for similarity search and classification in spatial databases. In: Advances in Spatial Databases, vol. 1651, pp. 207–228. Springer, Berlin (1999)

  8. Besl, P.J.: Surfaces in Range Image Understanding. Springer, Berlin (1988)

  9. Bostelman, R., Hong, T., Madhavan, R.: Obstacle detection using a time-of-flight range camera for automated guided vehicle safety and navigation. Integr. Comput. Aided Eng. 12(3), 237–249 (2005)

  10. Bouchette, G., Iles, P., English, C., Labrie, M., Powaschuk, B., Church, P., Maheux, J.: Rapid automatic target recognition using generic 3d sensor and shape-from-motion data. SPIE 619(1–12), 656 (2007)

  11. Cho, P., Anderson, H., Hatch, R., Ramaswami, P.: Real time 3d ladar imaging. Lincoln Lab. J. 16(1), 147–164 (2006)

  12. Deakin, A.G., Rallis, I., Zhang, J., Spencer, J.W., Jones, G.R.: Towards holistic chromatic intelligent monitoring of complex systems. Sensor Rev. 26(1), 11–17 (2006)

  13. Delingette, H., Hebert, M., Ikeuchi, K.: A spherical representation for the recognition of curved objects. In: Fourth International Conference on Computer Vision, 1993. Proceedings, pp. 103–112 (1993)

  14. Dudani, S.A., Breeding, K.J., McGhee, R.B.: Aircraft identification by moment invariants. IEEE Trans. Comput. C-26(1), 39–46 (1977)

  15. English, C., Ruel, S., Melo, L., Church, P., Maheux, J.: Development of a practical 3d automatic target recognition and pose estimation algorithm. Proceedings of SPIE—The International Society for Optical Engineering 5426, 112–123 (2004)

  16. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Pearson/Prentice Hall, London (2008)

  17. Han, J., Kamber, M.: Data Mining Concepts and Techniques, 2nd edn. Morgan Kaufmann, Burlington (2006)

  18. Jiménez, A.R., Ceres, R., Seco, F.: A laser range-finder scanner system for precise manoeuvre and obstacle avoidance in maritime and inland navigation. In: Proceedings Elmar—International Symposium Electronics in Marine, pp. 101–106 (2004)

  19. Johnson, A.: Spin-images: a representation for 3-d surface matching. Ph.D. Thesis, Robotics Institute, Carnegie Mellon University (1997)

  20. Jones, G.R., Deakin, A.G., Spencer, J.W.: Chromatic Monitoring of Complex Conditions. CRC Press, Boca Raton (2008)

  21. Jones, G.R., Russell, P.C., Vourdas, A.: Chromatic compression of sensor signals in the wavelength, time and parameter domains. In: IEE Intelligent and Self-Validating Sensors, pp. 9/1–9/4 (1999)

  22. Jones, G.R., Russell, P.C., Vourdas, A., Cosgrave, J., Stergioulas, L., Haber, R.: The gabor transform basis of chromatic monitoring. Meas. Sci. Technol. 11(5), 489–498 (2000)

  23. Langer, D., Mettenleiter, M., Hartl, F., Frohlich, C.: Imaging ladar for 3-d surveying and cad modeling of real-world environments. Int. J. Robot. Res. 19(11), 1075–1088 (2000)

  24. Lappas, C.: Chromatic ultrasonic tracking. Ph.D. Thesis, University of Liverpool (2007)

  25. Lengyel, E.: Mathematics for 3D Game Programming and Computer Graphics, 2nd edn. Charles River Media, MA (2004)

  26. Li, D., Deogun, J.S., Wang, K.: Gene function classification using fuzzy k-nearest neighbor approach. In: 2007 IEEE International Conference on Granular Computing, pp. 644–647 (2007)

  27. Li, L., Guo, B., Guo, L.: Rotation, scaling and translation invariant image watermarking using feature points. J. China Univ. Posts Telecommun. 15(2), 82–87 (2008)

  28. Li, S., Zhao, D.: Three-dimensional range data interpolation using b-spline surface fitting. J. Electron. Imag. 10(1), 268–273 (2001)

  29. Liu, W., He, Y.: 3d model retrieval based on orthogonal projections. In: Proceedings—Ninth International Conference on Computer Aided Design and Computer Graphics, CAD/CG 2005, vol. 2005, pp. 157–162 (2005)

  30. Maas, H.: Closed solutions for the determination of parametric building models from invariant moments of airborne laserscanner data. Transform. J. 32, 193–199 (1999)

  31. Marino, R.M., Davis, W.R.: Jigsaw: a foliage-penetrating 3d imaging laser radar system. Lincoln Lab. J. 15(1), 23–36 (2005)

  32. Meyer, G.J., Weber, J.R.: The effects of different shape-based metrics on identification of military targets from 3d ladar data. SPIE 560H(1–11), 60 (2006)

  33. Mian, A.S., Bennamoun, M., Owens, R.A.: Automatic correspondence for 3d modeling: an extensive review. Int. J. Shape Model. 11(2), 253–291 (2005)

  34. Mitchell, T.M.: Machine Learning, Computer Science, vol. 6. McGraw-Hill, New York (1997)

  35. Moon, P.H., Spencer, D.E.: The Photic Field. MIT Press, Cambridge (1981)

  36. Paquet, E., Rioux, M.: Content-based search engine for vrml databases. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 541–546 (1998)

  37. Paquet, E., Robinette, K.M., Rioux, M.: Management of three-dimensional and anthropometric databases: Alexandria and cleopatra. J. Electron. Imag. 9(4), 421–431 (2000)

  38. Perona, M.T., Mahalanobis, A., Zachery, K.N.: Ladar automatic target recognition using correlation filters. Proceedings of SPIE—The International Society for Optical Engineering 3718, 388–396 (1999)

  39. Petitjean, S.: A survey of methods for recovering quadrics in triangle meshes. ACM Comput. Surv. 34(2), 211–262 (2002)

  40. Powell, G., Marshall, D., Milliken, R., Markham, K.: Data fusion of flir and ladar in autonomous weapons systems. In: Proceedings of Information Fusion (2003)

  41. Radhika, K.R., Venkatesha, M.K., Sekhar, G.N.: Off-line signature authentication based on moment invariants using support vector machine. J. Comput. Sci. 6(3), 305–311 (2010)

  42. Rizon, M., Yazid, H., Saad, P., Shakaff, A.Y.M., Saad, A.R., Mamat, M.R., Yaacob, S., Desa, H., Karthigayan, M.: Object detection using geometric invariant moment. Am. J. Appl. Sci. 2(6), 1876–1878 (2006)

  43. Ruel, S., English, C.E., Melo, L., Berube, A., Aikman, D., Deslauriers, A.M., Church, P.M., Maheux, J.: Field testing of a 3d automatic target recognition and pose estimation algorithm. SPIE, pp. 102–111 (2004)

  44. Stergioulas, L.K., Vourdas, A., Jones, G.R.: Gabor representation of optical signals using a truncated von neumann lattice and its practical implementation. Opt. Eng. 39(7), 1965–1971 (2000)

  45. Stevens, M.R., Snorrason, M., Ruda, H., Amphay, S.: Feature based target classification in laser radar. Proceedings of SPIE—The International Society for Optical Engineering 4726, 46–57 (2002)

  46. Suzuki, M.T., Kato, T., Otsu, N.: A similarity retrieval of 3d polygonal models using rotation invariant shape descriptors. IEEE Int. Conf. Syst. Man Cybern. 4, 2946–2952 (2000)

  47. Tang, C., Hang, H.: A feature-based robust digital image watermarking scheme. IEEE Trans. Signal Process. 51(4), 950–959 (2003)

  48. Trussell, C.W.: 3d imaging for army applications. In: Proceedings of SPIE—The International Society for Optical Engineering, vol. 4377, pp. 126–131 (2001)

  49. Vadlamani, A.K., De Haag, M.U.: Aerial vehicle navigation over unknown terrain environments using inertial measurements and dual airborne laser scanners or flash ladar. In: Proceedings of SPIE—The International Society for Optical Engineering, vol. 6550, p. 65500B (1–12) (2007)

  50. Wijesoma, W.S., Kodagoda, K.R.S., Balasuriya, A.P.: Road-boundary detection and tracking using ladar sensing. IEEE Trans. Robot. Autom. 20(3), 456–464 (2004)

  51. Xu, S., Jones, G.R.: Event and movement monitoring using chromatic. Meas. Sci. Technol. 17(12), 3204–3211 (2006)

  52. Yan, F., Mei, W., Chunqin, Z.: SAR image target recognition based on Hu invariant moments and SVM. In: 5th International Conference on Information Assurance and Security, IAS 2009, vol. 1, pp. 585–588 (2009)

  53. Zhang, G., Cheng, B., Feng, R., Li, J.: Real-time driver eye detection method using support vector machine with hu invariant moments. In: Proceedings of the 7th International Conference on Machine Learning and Cybernetics, ICMLC, vol. 5, pp. 2999–3004 (2008)

  54. Zhang, N., Watanabe, T.: Image representation and classification based on data compression. In: SAC 10 Proceedings of the 2010 ACM Symposium on Applied Computing, pp. 981–982 (2010)

Acknowledgments

We would like to express our sincere gratitude to the Iraqi Ministry of Higher Education and Scientific Research for providing financial support throughout this research.

Author information

Corresponding author

Correspondence to Ali A. Al-Temeemy.

Appendices

Appendix A: Procedure of extracting invariant chromatic features

The spatial distributions of the chromatic processors make them sensitive to shift and scale effects: if the signal is moved from its original location, or enlarged, the processed chromatic values change. To make these processors invariant, a special approach was proposed, which makes the centres of the processors and their widths adapt to the input signal [5]. Referring to Fig. 11, the approach starts by calculating the centroid \(c_{m}\) of the input signal \(P_{R}(l_\mathrm{o})\) of length \(\ell \) by

$$\begin{aligned} c_{m}=\sum _{l_\mathrm{o}=1}^{\ell } l_\mathrm{o} P_{R}(l_\mathrm{o})/\sum _{l_\mathrm{o}=1}^{\ell }P_{R}(l_\mathrm{o}) \end{aligned}$$
(3)

This centroid is then used as a boundary condition for calculating two processors’ centres \(C_{R}\) and \(C_{B}\) as follows:

$$\begin{aligned}&\!\!\! C_{R}=\sum _{l_\mathrm{o}=1}^{c_{m}} l_\mathrm{o} P_{R}(l_\mathrm{o})/\sum _{l_\mathrm{o}=1}^{c_{m}}P_{R}(l_\mathrm{o})\end{aligned}$$
(4)
$$\begin{aligned}&\!\!\! C_{B}=\sum _{l_\mathrm{o}=c_{m}}^{\ell } l_\mathrm{o} P_{R}(l_\mathrm{o})/\sum _{l_\mathrm{o}=c_{m}}^{\ell }P_{R}(l_\mathrm{o}) \end{aligned}$$
(5)

The third processor centre \(C_{G}\) can then be determined from the following equation:

$$\begin{aligned} C_{G}=(C_{R}+C_{B})/2 \end{aligned}$$
(6)

The widths of these three processors are all equal to \(W_{RGB}\), which is defined as

$$\begin{aligned} W_{RGB}=(C_{B}-C_{R})/2 \end{aligned}$$
(7)
Fig. 11: Calculated centres and widths for Gaussian processors using the proposed approach
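The adaptive-centre calculation in Eqs. (3)–(7) can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation; the function name is hypothetical, and the handling of the boundary sample when the centroid \(c_{m}\) falls between samples (Eqs. 4 and 5 both include \(c_{m}\)) is an assumption.

```python
import numpy as np

def adaptive_processor_params(p):
    """Adaptive centres (C_R, C_G, C_B) and common width W_RGB for the
    three Gaussian processors, following Eqs. (3)-(7)."""
    l = np.arange(1, len(p) + 1)                 # sample positions l_o = 1..ell
    c_m = np.sum(l * p) / np.sum(p)              # signal centroid, Eq. (3)
    left = l <= c_m                              # samples up to the centroid
    c_r = np.sum(l[left] * p[left]) / np.sum(p[left])      # Eq. (4)
    c_b = np.sum(l[~left] * p[~left]) / np.sum(p[~left])   # Eq. (5)
    c_g = (c_r + c_b) / 2.0                      # Eq. (6)
    w = (c_b - c_r) / 2.0                        # Eq. (7)
    return c_r, c_g, c_b, w
```

For a uniform signal of length 100, for example, the centres land symmetrically at 25.5, 50.5 and 75.5, with a width of 25.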

The calculated centres and widths are then used to determine the response profiles (\(R (l_\mathrm{o}), G (l_\mathrm{o}), B (l_\mathrm{o})\)) for these Gaussian processors using the following equations, respectively:

$$\begin{aligned} R (l_\mathrm{o})=\exp \left( \frac{-2.8(l_\mathrm{o}-C_{R})^{2}}{W_{RGB}^{2}}\right) \end{aligned}$$
(8)
$$\begin{aligned} G (l_\mathrm{o})=\exp \left( \frac{-2.8(l_\mathrm{o}-C_{G})^{2}}{W_{RGB}^{2}}\right) \end{aligned}$$
(9)
$$\begin{aligned} B (l_\mathrm{o})=\exp \left( \frac{-2.8(l_\mathrm{o}-C_{B})^{2}}{W_{RGB}^{2}}\right) \end{aligned}$$
(10)

The processors, with their adapted centres and widths, are then applied to the discrete input signal \(P_{R}(l_\mathrm{o})\) to extract its features; their outputs (\(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o}\)) are calculated using the following equations, respectively [20]:

$$\begin{aligned} R _\mathrm{o}=\sum _{l_\mathrm{o}=1}^{\ell }R (l_\mathrm{o})\,P_{R}(l_\mathrm{o})\end{aligned}$$
(11)
$$\begin{aligned} G _\mathrm{o}=\sum _{l_\mathrm{o}=1}^{\ell }G (l_\mathrm{o})\,P_{R}(l_\mathrm{o})\end{aligned}$$
(12)
$$\begin{aligned} B _\mathrm{o}=\sum _{l_\mathrm{o}=1}^{\ell }B (l_\mathrm{o})\,P_{R}(l_\mathrm{o}) \end{aligned}$$
(13)
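Eqs. (8)–(13) can be sketched as below, assuming the centres \(C_{R}, C_{G}, C_{B}\) and width \(W_{RGB}\) have already been determined; the function name is hypothetical.

```python
import numpy as np

def chromatic_outputs(p, c_r, c_g, c_b, w):
    """Gaussian response profiles (Eqs. 8-10) and their correlation
    outputs R_o, G_o, B_o with the signal (Eqs. 11-13)."""
    l = np.arange(1, len(p) + 1)                 # sample positions l_o = 1..ell
    R = np.exp(-2.8 * (l - c_r) ** 2 / w ** 2)   # Eq. (8)
    G = np.exp(-2.8 * (l - c_g) ** 2 / w ** 2)   # Eq. (9)
    B = np.exp(-2.8 * (l - c_b) ** 2 / w ** 2)   # Eq. (10)
    return np.sum(R * p), np.sum(G * p), np.sum(B * p)   # Eqs. (11)-(13)
```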

The approach is then to evaluate combinations of these cross-correlations to yield coordinates that define the signal in one of several chromatic modes (e.g. x:y, Lab, \(HLS \)), depending on the nature of the information sought [20].

The Hue–Lightness–Saturation (\(HLS \)) scheme is used in this work. The choice of this scheme enables the intuitive methods of colour science to be related to signal defining factors, where \(L \) represents the strength of the signal, \(S \) its spread in the measured domain and \(H \) is the dominant measured value [51].

The transformation of the processors’ outputs (\(R _\mathrm{o}\), \(G _\mathrm{o}\), \(B _\mathrm{o}\)) to \(HLS \) is performed using the following relationships [20, 51]:

$$\begin{aligned} H= & {} 0.667-0.333\left( \frac{g_\mathrm{o}}{g_\mathrm{o}+b_\mathrm{o}}\right) ,\quad r_\mathrm{o}=0 \\= & {} 1.000-0.333\left( \frac{b_\mathrm{o}}{b_\mathrm{o}+r_\mathrm{o}}\right) ,\quad g_\mathrm{o}=0 \nonumber \\= & {} 0.333-0.333\left( \frac{r_\mathrm{o}}{r_\mathrm{o}+g_\mathrm{o}}\right) ,\quad b_\mathrm{o}=0 \nonumber \end{aligned}$$
(14)
$$\begin{aligned} L= & {} \frac{R _\mathrm{o}+G _\mathrm{o}+B _\mathrm{o}}{3}\end{aligned}$$
(15)
$$\begin{aligned} S= & {} \frac{\mathrm{max}(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o})-\mathrm{min}(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o})}{\mathrm{max}(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o})+\mathrm{min}(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o})} \end{aligned}$$
(16)

where

$$\begin{aligned} r_\mathrm{o}= & {} R _\mathrm{o}-\mathrm{min}(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o})\end{aligned}$$
(17)
$$\begin{aligned} g_\mathrm{o}= & {} G _\mathrm{o}-\mathrm{min}(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o})\end{aligned}$$
(18)
$$\begin{aligned} b_\mathrm{o}= & {} B _\mathrm{o}-\mathrm{min}(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o}) \end{aligned}$$
(19)

\(\mathrm{max}(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o})\) and \(\mathrm{min}(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o})\) denote the parameters among (\(R _\mathrm{o},G _\mathrm{o},B _\mathrm{o}\)) with the highest and lowest values, respectively. If \(R _\mathrm{o}=G _\mathrm{o}=B _\mathrm{o}=0\), then \(S =0\) and \(H \) is undefined.
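The transformation of Eqs. (14)–(19) can be sketched as follows. The function name is hypothetical, and the degenerate case of equal outputs is handled by returning \(H\) as None (undefined) with \(S=0\).

```python
def hls_from_outputs(r_out, g_out, b_out):
    """Transform processor outputs (R_o, G_o, B_o) to H, L, S (Eqs. 14-19)."""
    mn = min(r_out, g_out, b_out)
    mx = max(r_out, g_out, b_out)
    L = (r_out + g_out + b_out) / 3.0             # Eq. (15)
    if mx == mn:                                  # R_o = G_o = B_o: H undefined
        return None, L, 0.0
    S = (mx - mn) / (mx + mn)                     # Eq. (16)
    r, g, b = r_out - mn, g_out - mn, b_out - mn  # Eqs. (17)-(19)
    if r == 0:
        H = 0.667 - 0.333 * g / (g + b)           # Eq. (14), r_o = 0 case
    elif g == 0:
        H = 1.000 - 0.333 * b / (b + r)           # g_o = 0 case
    else:                                         # b == 0 after subtracting mn
        H = 0.333 - 0.333 * r / (r + g)           # b_o = 0 case
    return H, L, S
```

Note that after subtracting the minimum in Eqs. (17)–(19), at least one of \(r_\mathrm{o}, g_\mathrm{o}, b_\mathrm{o}\) is zero, so exactly one branch of Eq. (14) applies.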

Only the \(H \) and \(S \) processor outputs are used, without \(L \), because they are robust to the scale effect and are inherently confined to a fixed range of values ([0, 1]), whereas \(L \) requires normalisation and is sensitive to the scale effect.

Appendix B: Object rotation angle

The object rotation angle \(\Phi _\mathrm{o}\) can be calculated using different methods such as moments [6] and principal component analysis (PCA) [16, 25]. It is the angle between the object's natural axis \(x_{\mathfrak {R}}^{'}\) and the \(x_{\mathfrak {R}}\)-axis, as shown in Fig. 12.

Fig. 12: Car image rotation angle \(\Phi _\mathrm{o}\) and its natural axis (\(x_{\mathfrak {R}}^{'}\), \(y_{\mathfrak {R}}^{'}\))

The object's natural axes (\(x_{\mathfrak {R}}^{'}\), \(y_{\mathfrak {R}}^{'}\)) can be calculated simply by determining the eigenvectors of the image covariance matrix. This matrix is defined by [16]:

$$\begin{aligned} C_\mathbf{sv }= & {} \frac{1}{K_{s}}\sum _{k_{s}=1}^{K_{s}}\mathbf{sv }_{k_{s}}\mathbf{sv }^{T}_{k_{s}}- \mathbf m _\mathbf{sv }\mathbf m _\mathbf{sv }^{T}\end{aligned}$$
(20)
$$\begin{aligned} \mathbf m _\mathbf{sv }= & {} \frac{1}{K_{s}}\sum _{k_{s}=1}^{K_{s}}\mathbf{sv }_{k_{s}} \end{aligned}$$
(21)

where \(\mathbf{sv}\) is the pixel-distribution vector for the image and \(K_{s}\) is the number of samples in \(\mathbf{sv}\).

The matrix \(C_\mathbf{sv }\) is real and symmetric; therefore, its eigenvalues are nonnegative real numbers [25]. These eigenvalues are sorted in non-increasing order to find the corresponding eigenvectors, which represent the natural axes of the object. The eigenvectors have a mirror-symmetry ambiguity, which is resolved by making the positive axis direction point towards the higher standard deviation (STD) of the vector lengths (the distances between the pixel locations and the origin). More information about this procedure can be found in [36, 37]. If the STDs are equal in both directions (positive and negative), the natural axis of the second highest eigenvalue is used instead.
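Eqs. (20) and (21), followed by the eigen-decomposition, can be sketched as below. The function name is hypothetical, and the mirror-symmetry disambiguation via the STD of the vector lengths is omitted for brevity, so the returned angle is only defined up to \(\pi\).

```python
import numpy as np

def natural_axis_angle(points):
    """Rotation angle Phi_o of the principal natural axis, from the
    eigen-decomposition of the covariance matrix in Eqs. (20)-(21).
    points: (K_s, 2) array of pixel coordinates sv_k."""
    m = points.mean(axis=0)                               # mean vector, Eq. (21)
    C = points.T @ points / len(points) - np.outer(m, m)  # covariance, Eq. (20)
    vals, vecs = np.linalg.eigh(C)                        # C is real symmetric
    major = vecs[:, np.argmax(vals)]                      # largest-eigenvalue axis
    return np.arctan2(major[1], major[0])                 # angle to the x-axis
```

For points scattered along a line at 30 degrees to the x-axis, the recovered angle (modulo \(\pi\)) is 30 degrees.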

Cite this article

Al-Temeemy, A.A., Spencer, J.W. Invariant chromatic descriptor for LADAR data processing. Machine Vision and Applications 26, 649–660 (2015). https://doi.org/10.1007/s00138-015-0675-0
