
2019 | Original Paper | Book Chapter

Estimation of the Distance Between Fingertips Using Silhouette and Texture Information of Dorsal of Hand

Authors: Takuma Shimizume, Takeshi Umezawa, Noritaka Osawa

Published in: Advances in Visual Computing

Publisher: Springer International Publishing

Abstract

A three-dimensional virtual object can be manipulated by hand and finger movements with an optical hand-tracking device that recognizes the posture of the user's hand. Most conventional hand-posture recognition methods rely on the three-dimensional coordinates of the fingertips and a skeletal model of the hand. Such methods struggle to estimate the posture of the hand when a fingertip is hidden from the camera, and self-occlusion often hides fingertips. Our study therefore proposes estimating the posture of a hand from a hand dorsal (back-of-hand) image, which can be captured even when the hand occludes its own fingertips. Manipulating a virtual object requires recognizing movements such as pinching, and many such movements can be recognized from the distance between the fingertips of the thumb and forefinger. We therefore use a regression model, constructed with Convolutional Neural Networks (CNNs), to estimate the distance between the fingertips of the thumb and forefinger from hand dorsal images. Our study proposes Silhouette and Texture methods for estimating the distance between fingertips from hand dorsal images, and aggregates them into two further methods: the Clipping method and the Aggregation method. The Root Mean Squared Error (RMSE) of the estimated distance between fingertips was 1.98 mm or less with the Aggregation method for hand dorsal images that do not contain any fingertips, and the RMSE of the Aggregation method is smaller than that of the other methods. This result shows that the proposed Aggregation method can be an effective approach that is robust to self-occlusion.
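The accuracy figure above is reported as RMSE over the estimated fingertip distances. As a minimal sketch of that metric (the sample values below are illustrative only, not the paper's data):

```python
import numpy as np

def rmse(predicted_mm, actual_mm):
    """Root Mean Squared Error between predicted and ground-truth
    fingertip distances, both in millimetres."""
    predicted_mm = np.asarray(predicted_mm, dtype=float)
    actual_mm = np.asarray(actual_mm, dtype=float)
    return float(np.sqrt(np.mean((predicted_mm - actual_mm) ** 2)))

# Illustrative per-frame predicted vs. measured thumb-forefinger distances (mm)
predicted = [31.0, 14.5, 52.0, 8.2]
actual = [30.0, 16.0, 50.5, 9.0]
print(rmse(predicted, actual))  # a single error figure, comparable in kind to the reported 1.98 mm
```

Lower values mean the regression tracks the true fingertip distance more closely, which is why the paper compares its four methods on this metric.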

Metadata
Title
Estimation of the Distance Between Fingertips Using Silhouette and Texture Information of Dorsal of Hand
Authors
Takuma Shimizume
Takeshi Umezawa
Noritaka Osawa
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-33720-9_36
