Impact and value

The COVID-19 pandemic has almost ended the historical debate on whether we should broadly engage students in computer supported inquiry learning (CoSIL). An increasing number of educators have started to realize the critical merit of CoSIL when engaging students in in-person scientific practices is not viable (Sikora et al. 2020). CoSIL allows students to investigate phenomena and work out problems in virtual environments that are safe, low-cost, and self-paced; however, CoSIL is also highly self-regulated and requires guidance if students are to engage feasibly and deeply in virtual activities. Without well-designed guidance, students are likely to become distracted and may end up with limited engagement and learning gains.

This need, fortunately, has been partially met by a large body of research synthesized by Zacharia et al. (2015). Zacharia and his colleagues contributed to our understanding of existing CoSIL guidance by examining its forms and efficacy within a CoSIL taxonomy (de Jong and Lazonder 2014). Using the taxonomy, Zacharia et al. identified 89 guidance tools, in the form of performance dashboards, prompts, process constraints, heuristics, scaffolds, and the direct presentation of information, which appeared across all phases of CoSIL. They also found significant variation in the number of guidance tools available for each of the five phases of CoSIL: orientation, conceptualization, investigation, conclusion, and discussion. In particular, the investigation phase was offered a significantly larger number of guidance tools than any of the other phases. Unsurprisingly, variation was also found in the amount of each form of guidance within each phase. They reported that 44 of the 89 guidance tools had a positive effect on student learning, and they articulated the characteristics of the reviewed guidance.

Zacharia et al.’s (2015) review indicated that various guidance tools had increased the ease of use of CoSIL for students, inspiring the shift from in-person to digital learning. A challenge remains for teachers: how can they continue to provide effective support to students, given that CoSIL already provides automatic guidance and allows students to self-regulate their learning? Zacharia et al.’s study may help teachers better understand the characteristics of the CoSIL guidance associated with each phase of inquiry, as well as its efficacy, so that they can redefine their pedagogical roles in virtual inquiry settings in ways that accommodate the guidance.

Personalizing automatic guidance using machine learning

A critical step in the “shift to digital” is to move the purpose of technology implementation from ease of use to personalization. I appreciate that Zacharia et al. concluded their study with a remark suggesting that newly designed CoSIL should continue to be personalized to support self-regulated learning. Given that students may differ in background, prior knowledge, and level of learning, personalized guidance is needed to shift learning to digital settings (Gerard et al. 2016). Personalizing CoSIL means that students should receive customized and timely guidance according to their own learning, so that they can adjust and improve their performance regardless of what their peers do. Zacharia et al.’s (2015) review found little support for this remark in the literature, and the authors were concerned that the extra burden of programming platforms might fall on teachers’ shoulders if personalization were pursued.

A recent review has revealed that personalized guidance in CoSIL is very likely to be viable through machine learning (Zhai et al. 2020). Among the 49 reviewed studies that involved machine learning, Zhai et al. found multiple platforms developed to provide personalized feedback based on students’ performance on virtual inquiry tasks. For example, Lee et al. (2019) developed HASbot, a virtual platform that employs machine learning to automatically score students’ arguments. Based on students’ argumentation performance, the system provides real-time guidance to help them revise their arguments. A similar study focused on explanation practice (Tansomboon et al. 2017) demonstrated great potential as well. The Tansomboon team connected the Web-Based Inquiry Science Environment (WISE) system with ETS’s c-raterML to provide students with timely guidance on their explanations. They found that personalized guidance prompting students to revisit their writing and plan changes significantly improved their explanation practice.
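To illustrate the kind of scoring-and-feedback loop these platforms implement, the sketch below pairs a simple supervised text classifier with canned revision prompts. It is a minimal illustration only: the internals of HASbot and c-raterML are not public here, the scikit-learn pipeline merely stands in for whatever models those systems use, and the training responses, score scale, and feedback messages are hypothetical.

# Minimal sketch of an automated scoring + feedback loop, assuming a
# generic supervised text-classification pipeline (scikit-learn).
# All data and names below (train_responses, FEEDBACK) are hypothetical;
# this is not the HASbot or c-raterML implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: student arguments labeled by human raters
# on a 0-2 scale (0 = claim only, 1 = claim + evidence, 2 = full argument).
train_responses = [
    "The water got hotter.",
    "The water got hotter because the lamp added energy.",
    "The data show a 10 degree rise, so the lamp transferred energy to the water.",
]
train_scores = [0, 1, 2]

# Train a simple scorer: TF-IDF features + logistic regression.
scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(train_responses, train_scores)

# Map each predicted score level to real-time revision guidance.
FEEDBACK = {
    0: "State a claim and add evidence from your investigation.",
    1: "Good start. Explain how your evidence supports your claim.",
    2: "Strong argument. Consider an alternative explanation to rule out.",
}

def guide(response: str) -> str:
    """Score a student's argument and return personalized guidance."""
    level = int(scorer.predict([response])[0])
    return FEEDBACK[level]

print(guide("The water got hotter because the lamp was on."))

In practice, such scorers are trained on large sets of human-rated responses and validated against human-machine agreement before their predictions are used to trigger guidance.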

It is encouraging to find that, in the five years since Zacharia et al. (2015) was first published, machine learning has significantly advanced their work by enabling customized CoSIL guidance, shifting the purpose of the technology from ease of use to personalization.

Limitations and future suggestions

While Zacharia et al. (2015) have made a significant contribution to the development of personalized CoSIL guidance, limitations remain. Two strands of work are particularly urgent for continuing to advance the personalization of CoSIL guidance.

First, cognitive evidence is needed to support the design of personalized CoSIL guidance. Zacharia et al. (2015) claimed that their study “provides evidence as to which inquiry phases of CoSIL implementations need more guidance (e.g., Conceptualization) or which inquiry phases of CoSIL implementations have a reasonable number of guidance tools” (p. 295), a claim that, I would argue, needs additional evidence to validate. The authors did not thoroughly analyze students’ cognitive needs within the phases of CoSIL, even though such needs have been deemed critical for determining the specific scaffolds a CoSIL activity requires (de Jong and Lazonder 2014). I believe such a claim is central to designing personalized CoSIL guidance, yet it can be made only once cognitive evidence has been collected that allows us to judge whether the guidance is appropriate.

Second, the degree to which, and the means by which, content is integrated with CoSIL and its guidance need further study. While Zacharia et al. (2015) claim that 44 guidance tools were found effective, I am not fully convinced. The effectiveness may be a consequence of integrating content with a specific inquiry activity and its guidance, rather than of the guidance tools alone. Research has suggested that the level at which educational technology is integrated (Zhai et al. 2019) significantly affects its impact on learning; thus, a given form of guidance may be effective in one CoSIL activity but fail in another. Without considering how this integration is carried out within a specific CoSIL activity, we are unlikely to design personalized and effective guidance for students.