Human–Robot Interaction and Cooperation Through People Detection and Gesture Recognition

Published in: Journal of Control, Automation and Electrical Systems

Abstract

In this paper, we propose a human–robot interaction system that enables cooperation between a robot and a person. The system is based on people detection and gesture recognition. People are detected by a technique that combines face detection from camera images with leg detection from laser range finder data. Gesture recognition then allows the person to choose the kind of help he/she wants. The cooperative tasks the robot can perform are guiding the user to a desired place in the work environment, carrying a load together with or for a person, following a person, and navigating to a place to fetch something the user wants. Several experiments were performed, and two of them (person guidance, and navigation to a predefined place while carrying an object) are presented in this paper to validate the proposed system.
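
As a rough illustration of the detection approach described in the abstract, the sketch below combines a Viola–Jones face detector on the camera image with a simple leg finder on the 2-D laser scan, declaring a person wherever the two agree on bearing. This is a minimal sketch under stated assumptions, not the authors' implementation: the stock OpenCV Haar cascade, the cluster-width thresholds, and helper names such as detect_legs and fuse_detections are illustrative.

```python
# Minimal sketch of camera + laser people detection.
# Assumptions (not from the paper): OpenCV's stock Haar cascade stands in
# for the face detector; leg candidates come from leg-sized clusters in the
# laser scan; camera and laser share the forward axis.
import math

import cv2
import numpy as np

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def detect_faces(gray_image):
    """Return face bounding boxes (x, y, w, h) via Viola-Jones."""
    return FACE_CASCADE.detectMultiScale(
        gray_image, scaleFactor=1.1, minNeighbors=5)


def detect_legs(ranges, angles, jump=0.15, min_width=0.05, max_width=0.25):
    """Split the scan at range discontinuities; keep leg-sized clusters.

    Returns (x, y) centroids in the laser frame. The thresholds (meters)
    are assumed values, not the paper's.
    """
    points = np.column_stack((ranges * np.cos(angles),
                              ranges * np.sin(angles)))
    breaks = np.where(np.abs(np.diff(ranges)) > jump)[0] + 1
    centroids = []
    for cluster in np.split(points, breaks):
        if len(cluster) < 3:
            continue
        width = np.linalg.norm(cluster[-1] - cluster[0])
        if min_width <= width <= max_width:
            centroids.append(cluster.mean(axis=0))
    return centroids


def fuse_detections(faces, legs, image_width,
                    hfov=math.radians(60), tolerance=math.radians(10)):
    """Declare a person where a face bearing agrees with a leg cluster.

    The pinhole bearing is a small-angle approximation; positive bearings
    point left, matching a laser frame with y to the left.
    """
    people = []
    for (x, _y, w, _h) in faces:
        face_bearing = (0.5 - (x + w / 2.0) / image_width) * hfov
        for leg in legs:
            if abs(math.atan2(leg[1], leg[0]) - face_bearing) < tolerance:
                # Position comes from the laser; the face confirms it.
                people.append(tuple(leg))
    return people
```

Matching by bearing alone keeps the fusion cheap: the laser supplies an accurate position for the leg-sized cluster, while the face detection confirms that the cluster really is a person rather than, say, a chair leg.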

Acknowledgments

The authors would like to thank FAPES (Fundação de Amparo à Pesquisa do Espírito Santo) for the financial support through Project 45443211/2009.

Author information

Correspondence to Flávio Garcia Pereira.

Cite this article

Pereira, F.G., Vassallo, R.F. & Salles, E.O.T. Human–Robot Interaction and Cooperation Through People Detection and Gesture Recognition. J Control Autom Electr Syst 24, 187–198 (2013). https://doi.org/10.1007/s40313-013-0040-3

Keywords: Navigation