Published in: Journal of Intelligent Manufacturing 4/2021

27.06.2020

Skill transfer support model based on deep learning

Authors: Kung-Jeng Wang, Diwanda Ageng Rizqi, Hong-Phuc Nguyen


Abstract

The paradigm shift toward Industry 4.0 is not accomplished solely by deploying smart machines in a factory but also by developing human capability. Refining work processes and introducing new training approaches are necessary to support efficient human skill development. This study proposes a new skill transfer support model for a manufacturing scenario. The proposed model uses two deep learning models as its backbone: a convolutional neural network (CNN) for action recognition and a faster region-based CNN (Faster R-CNN) for object detection. A case study on toy assembly, using two cameras at different angles, is conducted to evaluate the performance of the proposed model. The accuracy of the CNN and the Faster R-CNN on the target job reached 94.5% and 99%, respectively. A junior operator can be guided by the proposed model, given that flexible assembly tasks have been constructed on the basis of a skill representation. The theoretical contribution of this study is the integration of two deep learning models that simultaneously recognize actions and detect objects. In practice, the model facilitates skill transfer in manufacturing systems by helping junior operators adapt to or learn new skills.
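The abstract describes a pipeline in which recognized actions (from a CNN) and detected objects (from a Faster R-CNN) are matched against a skill representation of the assembly sequence to guide a junior operator. The following is a minimal sketch of that guidance logic only; the step names, labels, and the `guide` function are hypothetical illustrations, not the paper's actual networks or skill representation.

```python
# Sketch: recognized (action, object) pairs are compared against an
# ordered skill representation of the assembly task. All step names
# here are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    action: str   # label an action-recognition CNN would output
    obj: str      # label a Faster R-CNN detector would output

# Hypothetical skill representation: the correct assembly sequence.
SKILL_SEQUENCE = [
    Step("pick", "base_plate"),
    Step("attach", "wheel"),
    Step("screw", "bolt"),
]

def guide(step_idx: int, action: str, obj: str) -> tuple[int, str]:
    """Compare the recognized (action, object) pair with the expected
    step and return the next step index plus a guidance message."""
    expected = SKILL_SEQUENCE[step_idx]
    if (action, obj) == (expected.action, expected.obj):
        nxt = step_idx + 1
        if nxt == len(SKILL_SEQUENCE):
            return nxt, "Task complete."
        up = SKILL_SEQUENCE[nxt]
        return nxt, f"Correct. Next: {up.action} the {up.obj}."
    return step_idx, f"Incorrect. Please {expected.action} the {expected.obj}."

idx, msg = guide(0, "pick", "base_plate")
print(msg)  # Correct. Next: attach the wheel.
idx, msg = guide(idx, "screw", "bolt")
print(msg)  # Incorrect. Please attach the wheel.
```

In the paper's setting, the `(action, obj)` inputs would come from the two deep learning models running on the camera streams; this sketch only shows how a skill representation can turn those outputs into operator guidance.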


Metadata
Title
Skill transfer support model based on deep learning
Authors
Kung-Jeng Wang
Diwanda Ageng Rizqi
Hong-Phuc Nguyen
Publication date
27.06.2020
Publisher
Springer US
Published in
Journal of Intelligent Manufacturing / Issue 4/2021
Print ISSN: 0956-5515
Electronic ISSN: 1572-8145
DOI
https://doi.org/10.1007/s10845-020-01606-w
