Published in: International Journal of Computer Vision 2/2019

24.05.2018

Efficiently Annotating Object Images with Absolute Size Information Using Mobile Devices

Authors: Martin Hofmann, Marco Seeland, Patrick Mäder



Abstract

The projection of a real world scenery to a planar image sensor entails a loss of information about the 3D structure as well as the absolute dimensions of the scene. For image analysis and object classification tasks, however, absolute size information can make results more accurate. Today, the creation of size-annotated image datasets is effort-intensive and typically requires measurement equipment not available to public image contributors. In this paper, we propose an effective annotation method that utilizes the camera within smart mobile devices to capture the missing size information along with the image. The approach builds on the fact that, with a camera calibrated to a specific object distance, lengths can be measured in the object’s plane. We use the camera’s minimum focus distance as calibration distance and propose an adaptive feature matching process for precise computation of the scale change between two images, enabling measurements at larger object distances. Eventually, the measured object is segmented and its size information is annotated for later analysis. A user study showed that humans are able to retrieve the calibration distance with a low variance. The proposed approach achieves a measurement accuracy comparable to manual measurement with a ruler and outperforms state-of-the-art methods in terms of accuracy and repeatability. Consequently, the proposed method allows in-situ size annotation of objects in images without the need for additional equipment or an artificial reference object in the scene.
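The measurement principle behind the abstract can be sketched as follows. This is a minimal illustration of the underlying pinhole geometry, not the authors' implementation: the function names, the median-of-pairwise-distance-ratios scale estimate, and the idealized matched-keypoint input are assumptions made for the sketch.

```python
import itertools
import math
import statistics

def pixels_per_mm(focal_px, calib_dist_mm):
    """Pixel-to-metric scale in the object plane at the calibration distance."""
    # Pinhole model: an object of length L at distance d spans focal_px * L / d
    # pixels, so at the calibration distance one millimetre covers
    # focal_px / calib_dist_mm pixels.
    return focal_px / calib_dist_mm

def scale_change(pts_a, pts_b):
    """Scale factor between two images, from matched keypoint coordinates.

    pts_a and pts_b are corresponding (x, y) points in the calibration image
    and the measurement image, respectively.
    """
    # The median of the pairwise-distance ratios is robust to a moderate
    # fraction of wrong matches.
    ratios = []
    for i, j in itertools.combinations(range(len(pts_a)), 2):
        da = math.dist(pts_a[i], pts_a[j])
        if da > 0:
            ratios.append(math.dist(pts_b[i], pts_b[j]) / da)
    return statistics.median(ratios)

def object_length_mm(length_px, focal_px, calib_dist_mm, scale):
    """Metric length of a segment measured (in pixels) in the second image.

    `scale` maps the calibration image onto the measurement image;
    scale < 1 means the camera moved farther away from the object.
    """
    return length_px / (pixels_per_mm(focal_px, calib_dist_mm) * scale)
```

For example, with a focal length of 3000 px and a calibration distance of 100 mm, the object-plane scale is 30 px/mm; a 750 px segment observed after the scale dropped to 0.5 then corresponds to 750 / (30 × 0.5) = 50 mm.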


Metadata
Title
Efficiently Annotating Object Images with Absolute Size Information Using Mobile Devices
Authors
Martin Hofmann
Marco Seeland
Patrick Mäder
Publication date
24.05.2018
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 2/2019
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-018-1093-3
