2016 | Original Paper | Book Chapter
Published in:
Transactions on Edutainment XII
In augmented reality systems, visual consistency for the user depends on seamlessly overlaying virtual objects on the real scene. Combining this overlay with the light-source direction of a complex scene, we build a realistic desktop-environment augmented reality system in which the seamless integration of the virtual and the real comprises a static part and a dynamic part. The static part first acquires three-dimensional data of the real scene and registers the virtual content with it using the relevant algorithms. The dynamic part keeps the virtual object consistent with human action in the real scene, i.e., the user's motions control the movement of the virtual object; this is implemented mainly with Kinect skeleton tracking and action recognition, combined with an interactive 3D engine.
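The dynamic part can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example (not the authors' implementation) of how a tracked skeleton joint might drive a virtual object: a hand-joint position is smoothed to damp tracking jitter and mapped into the virtual scene's coordinate frame. The `VirtualObject` class, the `SCALE` factor, and the fake per-frame joint readings are illustrative assumptions; a real system would read joints from the Kinect SDK and update an object in the 3D engine.

```python
# Hypothetical sketch: mapping a tracked Kinect hand joint to a virtual
# object's position. The joint data, scaling, and VirtualObject class are
# illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass

SCALE = 1.5          # assumed mapping from sensor metres to scene units
SMOOTHING = 0.7      # exponential smoothing factor to damp tracking jitter


@dataclass
class VirtualObject:
    """Stand-in for an object managed by the interactive 3D engine."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

    def move_to(self, x: float, y: float, z: float) -> None:
        self.x, self.y, self.z = x, y, z


def update_from_skeleton(obj, hand_joint, prev):
    """Smooth the raw hand-joint position and move the virtual object."""
    smoothed = tuple(SMOOTHING * p + (1.0 - SMOOTHING) * h
                     for p, h in zip(prev, hand_joint))
    obj.move_to(*(SCALE * c for c in smoothed))
    return smoothed


if __name__ == "__main__":
    cube = VirtualObject()
    prev = (0.0, 0.0, 0.0)
    # Fake frames standing in for per-frame Kinect skeleton readings (metres).
    for frame in [(0.10, 0.50, 1.20), (0.12, 0.52, 1.19), (0.15, 0.55, 1.18)]:
        prev = update_from_skeleton(cube, frame, prev)
        print(f"virtual object at ({cube.x:.3f}, {cube.y:.3f}, {cube.z:.3f})")
```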
- Title
- The Seamless Integration Achievement of the Actual Situation of the Scene
- DOI
- https://doi.org/10.1007/978-3-662-50544-1_11
- Authors
- Jinhui Huang
- Haichao Shi
- Publisher
- Springer Berlin Heidelberg
- Sequence number
- 11