Published in: Education and Information Technologies 4/2020

14.01.2020

Utilizing crowdsourcing and machine learning in education: Literature review

Authors: Hadeel S. Alenezi, Maha H. Faisal


Abstract

For many years, learning has remained a vital and evolving field, since it is a key measure of the world's civilization and progress, with an enormous effect on both individuals and societies. Enhancing existing learning activities in general would have a significant impact on literacy rates around the world. One crucial activity in education is assessment, because it is the primary means of evaluating students during their studies. The main purpose of this review is to examine existing learning and e-learning approaches that use crowdsourcing, machine learning, or both in their proposed solutions. The review also surveys the addressed applications to identify existing research related to assessment; mapping all existing applications helps reveal unexplored gaps and limitations. This study presents a systematic literature review of 30 papers drawn from the IEEE and ACM Digital Library databases. The analysis shows that crowdsourcing is utilized in 47.8% of the investigated learning activities, while machine learning and hybrid solutions are each utilized in 26% of them. Furthermore, all existing approaches to the exam-assessment problem that use machine learning or crowdsourcing were identified. Some existing assessment systems use crowdsourcing and others use machine learning; however, none provides a hybrid assessment system that combines the two. Finally, it is found that using either crowdsourcing or machine learning in online courses enhances interaction between students. It is concluded that current learning activities need to be enhanced, since they directly affect students' performance. Moreover, merging machine learning with the wisdom of the crowd is expected to increase the accuracy and efficiency of education.
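To make the idea of "merging machine learning with the wisdom of the crowd" concrete, here is a minimal sketch (not taken from the paper, and all names are hypothetical) of one common hybrid pattern: iteratively estimating each crowd grader's reliability from their agreement with the consensus, then using those reliabilities to weight their pass/fail grades, in the spirit of Dawid-Skene-style label aggregation.

```python
# Reliability-weighted aggregation of crowd-sourced binary grades.
# A simplified, illustrative sketch: real systems would use proper
# EM over confusion matrices (Dawid & Skene) or a learned model.
from collections import defaultdict


def aggregate_grades(votes, iterations=3):
    """votes: list of (grader, item, label) tuples with label in {0, 1}.
    Returns {item: consensus_label} via reliability-weighted voting."""
    graders = {g for g, _, _ in votes}
    weight = {g: 1.0 for g in graders}  # start by trusting everyone equally
    consensus = {}
    for _ in range(iterations):
        # Step 1: weighted vote per item under the current trust weights.
        tally, total = defaultdict(float), defaultdict(float)
        for g, item, label in votes:
            tally[item] += weight[g] * label
            total[item] += weight[g]
        consensus = {item: 1 if tally[item] / total[item] >= 0.5 else 0
                     for item in tally}
        # Step 2: a grader's new weight is their agreement rate with the
        # consensus (floored so no grader's weight collapses to zero).
        agree, count = defaultdict(float), defaultdict(int)
        for g, item, label in votes:
            agree[g] += 1.0 if label == consensus[item] else 0.0
            count[g] += 1
        weight = {g: max(agree[g] / count[g], 1e-3) for g in graders}
    return consensus


if __name__ == "__main__":
    # Graders "a" and "b" agree; "c" is systematically noisy, so the
    # iteration discounts c's votes.
    votes = [("a", "q1", 1), ("b", "q1", 1), ("c", "q1", 0),
             ("a", "q2", 0), ("b", "q2", 0), ("c", "q2", 1),
             ("a", "q3", 1), ("b", "q3", 1), ("c", "q3", 0)]
    print(aggregate_grades(votes))  # → {'q1': 1, 'q2': 0, 'q3': 1}
```

In this sketch the machine-learning component is deliberately tiny (an agreement-based reweighting loop); the papers surveyed above pair crowd input with richer models, but the division of labor — crowd supplies judgments, model estimates who to trust — is the same.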


Metadata
Title
Utilizing crowdsourcing and machine learning in education: Literature review
Authors
Hadeel S. Alenezi
Maha H. Faisal
Publication date
14.01.2020
Publisher
Springer US
Published in
Education and Information Technologies / Issue 4/2020
Print ISSN: 1360-2357
Electronic ISSN: 1573-7608
DOI
https://doi.org/10.1007/s10639-020-10102-w
