
2018 | Original Paper | Book Chapter

User Defined Conceptual Modeling Gestures

Authors: Bige Tunçer, Sumbul Khan

Published in: Computational Studies on Cultural Variation and Heredity

Publisher: Springer Singapore


Abstract

Gesture- and speech-based interaction offers designers a powerful technique for creating 3D CAD models. Previous studies on gesture-based modeling have employed author-defined gestures, which may not be user-friendly. The aim of this study was to collect a data set of user-generated gestures and accompanying voice commands for 3D modeling during form exploration in the conceptual architectural design phase. We conducted an experiment with 41 subjects to elicit their preferences in using gestures and speech for twelve 3D CAD modeling referents. In this paper we present the different types of gestures we found, along with user preferences for gestures and speech. Findings from this study will inform the design of a speech- and gesture-based CAD modeling interface.


Metadata
Title
User Defined Conceptual Modeling Gestures
Authors
Bige Tunçer
Sumbul Khan
Copyright Year
2018
Publisher
Springer Singapore
DOI
https://doi.org/10.1007/978-981-10-8189-7_10
