Elsevier

Learning and Instruction

Volume 52, December 2017, Pages 46-58

How to sequence video modeling examples and inquiry tasks to foster scientific reasoning

https://doi.org/10.1016/j.learninstruc.2017.04.005

Highlights

  • Sequence of examples and tasks affected acquisition of scientific reasoning skills.

  • Example-first groups experienced lower cognitive load than task-first groups.

  • Example-first groups had better learning outcomes than task-first groups.

  • Example-first groups had higher judgments of learning than task-first groups.

  • All groups underestimated their scientific reasoning performance.

Abstract

Scientific reasoning skills can be acquired through technology-enhanced inquiry tasks or video modeling examples showing how to conduct virtual experiments. However, inquiry tasks can be cognitively demanding for novice learners, whereas video modeling examples can induce overconfidence. The present study investigated the effectiveness of both approaches in isolation and combination. We compared the effects of four groups (example-example, example-task, task-example and task-task) on learning outcomes, perceived difficulty and mental effort, judgments of learning, and monitoring accuracy among 107 seventh graders. In line with our hypotheses, watching a video modeling example first led to lower mental effort, better learning outcomes, and higher judgments of learning than solving an inquiry task first. Contrary to our hypotheses, all groups underestimated their performance. Results for mental effort and learning outcomes corroborate research on worked examples, whereas results for judgments of learning and monitoring accuracy indicate an underconfidence-with-practice effect.

Introduction

Scientific reasoning is a vital aspect of international science education standards (National Research Council, 2012, OECD, 2007). It involves the skills implicated in generating hypotheses, designing experiments, and evaluating evidence (C. Zimmerman, 2007). A core component of scientific reasoning is the ability to design controlled experiments and evaluate the resulting evidence with regard to one's hypotheses. This aspect is addressed in the control-of-variables strategy (CVS; Chen & Klahr, 1999), which states that all variables except the one being tested should be held constant across experimental trials to yield conclusive results. However, this strategy can be difficult to apply, especially for younger students (Piekny et al., 2014, Piekny and Maehler, 2013). The CVS does not develop routinely as a consequence of mere exposure to everyday situations that require scientific reasoning; rather, it has to be the subject of science teaching (C. Zimmerman, 2007). The present paper deals with the question of how to best convey the CVS by making reference to two prominent teaching approaches, namely, inquiry learning with virtual experiments (de Jong, 2006) and example-based learning with video modeling examples (Mulder, Lazonder, & de Jong, 2014). Unfortunately, both approaches are not only associated with specific benefits for learning; they also come with particular challenges: pure inquiry learning can be cognitively overwhelming, particularly for novice learners, whereas studying examples can induce illusions of understanding, which might impede learning (Baars, van Gog, de Bruin, & Paas, 2016). In the present study, we contrasted combinations of the two approaches with learning from just one approach to test whether the former would help to balance out the negative side effects of each approach while making use of the benefits. Because combining inquiry tasks with video modeling examples raises the question of how to sequence these learning activities (i.e., presenting examples before or after inquiry tasks), we additionally addressed this question in the present paper.

One prominent instructional approach to fostering the acquisition of scientific reasoning is inquiry learning (Lazonder & Harmsen, 2016). During inquiry learning, students “conduct experiments, make observations, or collect information in order to infer the principles underlying a topic or domain” (Lazonder & Harmsen, 2016, p. 2). Inquiry learning is applied in schools to teach science content and also science process skills such as scientific reasoning (Lazonder & Harmsen, 2016). The recent advent of computer simulations allows students to investigate a wide range of scientific phenomena by manipulating variables that would not be easily accessible in physical experiments (de Jong, 2006).

However, unguided inquiry tasks generally are an inefficient way to enhance children's use of the CVS (Alfieri, Brooks, Aldrich, & Tenenbaum, 2011). Novice learners, especially, often do not profit from unguided inquiry learning, which can be seen as an ill-defined problem solving activity (C. Zimmerman, 2007). Problem solving requires learners to handle a large number of information elements simultaneously, which may overwhelm students' limited cognitive resources (Sweller et al., 1998, Tuovinen and Sweller, 1999). This is especially true for novices who lack schemata that would guide their problem solving. Therefore, students need guidance to focus their limited cognitive resources on the most relevant information and acquire problem-solving schemata (Tuovinen & Sweller, 1999).

With appropriate guidance, inquiry learning can be more effective than expository methods, as meta-analytic studies consistently show (e.g., Alfieri et al., 2011, Lazonder and Harmsen, 2016). Different types of learner guidance have proven effective for learning the CVS, for instance, direct instruction (Klahr & Nigam, 2005), experimentation hints (Kuhn & Dean, 2005), task structuring (Lazonder & Kamp, 2012), and worked examples (Mulder et al., 2014). Mulder et al. (2014) used video modeling examples to show students how to conduct virtual experiments in an inquiry learning environment. Students presented with modeling examples displayed better inquiry behavior than students who did not receive such examples. This result can be explained by the worked example effect.

According to the worked example effect, it is beneficial for novice learners to study worked examples containing a step-by-step expert solution to a problem before solving a task on their own (Cooper and Sweller, 1987, Sweller and Cooper, 1985). Studying an example instead of solving a problem reduces unnecessary cognitive load. Thus, learners can use their working memory resources to build a problem-solving schema for later problem-solving situations (Cooper & Sweller, 1987). The worked example effect has been shown in diverse contexts such as algebra (Sweller & Cooper, 1985), programming (Kalyuga, Chandler, Tuovinen, & Sweller, 2001) and scientific reasoning (Mulder et al., 2014).

However, studying examples can give learners an illusion of understanding (Baars et al., 2014, Baars et al., 2016, Renkl and Atkinson, 2002). An illusion of understanding is evident if students' predictions about their future test performance (judgments of learning, JoLs) are higher than their actual test performance (overconfidence; Thiede, Anderson, & Therriault, 2003). Illusions of understanding can be a result of a foresight bias (Koriat & Bjork, 2005), which occurs when predictions about one's future test performance with regard to certain material are made in the presence of that material. If students, for instance, are asked to make a JoL while studying a worked example or immediately afterwards, they might interpret their current processing as learning, even though that processing is based on the worked example, which will not be available during a later test. Thus, learners may overestimate their future test performance. Illusions of understanding can have detrimental effects on learning outcomes, since they may lead learners to terminate studying too early (Dunlosky & Rawson, 2012). During schema acquisition, for example, overconfident learners might terminate studying before a schema is constructed or before all relevant elements of a schema have been encoded and incorporated. Thus, overconfidence might prevent or impair the acquisition of a problem-solving schema through inaccurate regulation processes. Illusions of understanding might be even more likely to occur when video modeling examples are used to convey scientific reasoning skills. Dynamic visualizations like videos are commonly associated with entertainment. Therefore, students may underestimate the effort necessary to understand what is being conveyed through a dynamic visualization (underwhelming effect; Lowe, 2004).
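The relation between predicted and actual performance described above can be expressed as a signed bias score. The following is a minimal sketch, not from the study itself; the assumption that JoLs and test scores share a common 0–100 scale is ours:

```python
def monitoring_bias(jol, performance):
    """Signed difference between a judgment of learning (JoL) and actual
    test performance, both assumed to be on the same 0-100 scale.
    Positive values indicate overconfidence, negative values
    underconfidence, and zero indicates perfectly calibrated monitoring."""
    return jol - performance

# A learner who predicts 80 but scores 60 is overconfident:
print(monitoring_bias(80, 60))   # 20
# A learner who predicts 50 but scores 70 is underconfident:
print(monitoring_bias(50, 70))   # -20
```

The absolute value of this score is one common operationalization of (inaccurate) monitoring; the sign distinguishes overconfidence from the underconfidence-with-practice pattern reported in the abstract.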

Thus, both approaches – inquiry learning and example-based learning – have advantages (authentic inquiry activities vs. helping learners build problem-solving schemata) and disadvantages (high cognitive load vs. illusions of understanding) for novice learners. This raises the question of whether there are benefits of combining these two approaches compared to learning from just one approach. Moreover, when combining examples and inquiry tasks, one needs to consider the sequence in which the two are presented.

The question of how to sequence examples and inquiry tasks pertains to the more general question of how to sequence direct instruction and problem-solving activities. Research regarding this question has resulted in mixed evidence.

On the one hand, there is research speaking in favor of presenting instruction (such as examples) before problems (such as inquiry tasks). For instance, two studies have investigated the effectiveness of examples only, examples followed by tasks (example-task pairs), and tasks followed by examples (task-example pairs) compared with tasks only (Leppink et al., 2014, Van Gog et al., 2011). Both studies found an advantage for presenting examples first. Van Gog et al. (2011) found that secondary education students (age M = 16.22) who learned to troubleshoot electrical circuits via example-task pairs or examples only indicated lower cognitive load and showed better learning outcomes (better problem-solving skills) than students who learned with task-example pairs or tasks only. Moreover, students who learned with example-task pairs did not differ from students who learned with examples only. Similarly, students who learned with task-example pairs did not differ from students who learned with tasks only. Leppink et al. (2014) replicated the advantage of studying an example over solving a task first in a different domain (application of Bayes' theorem) and with an older age group (university freshmen). Thus, research on worked examples speaks in favor of presenting an example first followed by either a task or another example.

Moreover, several studies on inquiry learning underscore that presenting instruction before inquiry has a positive effect on learning outcomes (Barzilai and Blau, 2014, Lazonder et al., 2010, Wecker et al., 2013). Barzilai and Blau (2014), for example, compared the effectiveness of providing a scaffold including examples before or after an inquiry activity to an inquiry activity without scaffolds. Results showed that learners who studied the scaffold before the inquiry exhibited higher problem-solving performance in a posttest than learners who either studied scaffolds after the inquiry or not at all (Barzilai & Blau, 2014). Taken together, research on worked examples and on instruction and inquiry suggests providing instruction (e.g., examples) before problems (e.g., inquiry tasks).

However, there is also research speaking in favor of presenting problems before examples. Problem-example pairs might enable students to recognize deficiencies in their own performance, which might direct their attention to those aspects while studying the subsequent example (Van Gog et al., 2011). This sequence has been extensively investigated in a second research line, namely research on preparation for future learning (Schwartz & Martin, 2004) and productive failure (Kapur, 2012). A study by Arena and Schwartz (2014), for example, investigated whether a video game prepared students for future formal instruction. The video game required players to infer the shape of probability distributions in order to perform well. The formal instruction consisted of a written text including several examples about probability distributions. Results showed that students who first played the game and then read the passage learned more than participants who only read the passage (Arena & Schwartz, 2014).

Research on productive failure has also yielded evidence for advantages of presenting problems before examples. In this approach, students are presented with a problem with a rich database and asked to devise several solutions (Kapur, 2012). Because students get no hints about relevant features for problem solution in the database, they are most often unable to create the canonical solution. The struggle to find solutions is thought to trigger a general awareness of their knowledge gaps and prepare them for the following instruction phase. In the first part of the instruction phase, the teacher demonstrates the limitations of typical student solutions before modeling the canonical solution (Loibl & Rummel, 2014). This sequence of problem-solving prior to instruction has been shown to result in better conceptual understanding of learners than instruction prior to problem-solving (Kapur, 2012, Loibl and Rummel, 2014). Loibl and Rummel (2014) showed that problem solving prior to instruction indeed triggered a global awareness of knowledge gaps that was beneficial for learning when the instruction that followed compared typical student solutions and contrasted them to the canonical solution.

Finally, research on self-regulated learning suggests that solving problems before receiving instruction can be helpful because learners might be more likely to become aware of possible knowledge gaps. From a self-regulated learning perspective, this helps learners to monitor their current level of understanding and, in turn, regulate their actions accordingly (B. J. Zimmerman, 2002). Therefore, experiencing difficulties when solving a problem is expected to enhance learners' monitoring accuracy and motivate them to study a subsequent example more closely (Hausmann, van de Sande, & VanLehn, 2008). Studying an example first, in contrast, might have detrimental effects on learning, since it can give learners an illusion of understanding and make them overconfident (Baars et al., 2014, Baars et al., 2016, Renkl and Atkinson, 2002), which might lead them to terminate further study activities prematurely. However, solving a task after studying a worked example has been shown to reduce overconfidence in learners (Baars et al., 2014). Solving a problem after having studied a worked example allows learners to test the schema that they have acquired through example study. This might provide them with relevant cues for making accurate judgments about their future test performance (Baars et al., 2014). This effect is similar to the delayed-generation effect, which has been found for learning from expository text (e.g., Thiede, Griffin, Wiley, & Redford, 2009). Monitoring accuracy can be improved by focusing learners' attention on their comprehension of a text before they make a judgment of learning, for example by asking them to use generation strategies such as summarizing (Thiede & Anderson, 2003). Therefore, solving a task after a video modeling example might result in more accurate monitoring than watching a second video modeling example.

The present study investigated the effects of presenting video modeling examples in a simulation-based inquiry learning environment before, after, or instead of an inquiry task. The learning environment consisted of two virtual physics experiments on the topic of energy. When the experiments were presented as video modeling examples, learners watched a video that showed how two models solved an inquiry task. When the experiments were presented as inquiry tasks, learners had to solve the same inquiry task as the models on their own. Learners received either an example or an inquiry task in a first training phase, followed by an example or an inquiry task in a second training phase. We compared the four resulting instructional conditions of this 2 × 2 design with regard to mental effort and subjective difficulty, learning processes and learning outcomes, JoLs, and monitoring accuracy. The design of the present study was a conceptual replication of the study design by Van Gog et al. (2011) and Leppink et al. (2014, Experiment 2). However, our study differed on three dimensions. First, we used a younger age group (children). It is possible that children need more guidance than adolescents or university freshmen (Lazonder & Harmsen, 2016). Second, we used a different domain (scientific reasoning) as well as video modeling examples instead of text-based worked examples. Scientific reasoning, in contrast to troubleshooting electrical circuits and Bayes' theorem, is not a highly structured cognitive task in which one step is carried out after another. Rather, scientific reasoning requires considering information from previous steps and also adjusting outcomes from previous steps in the light of new evidence (Hilbert et al., 2008, Mulder et al., 2014). In video modeling examples, the model can demonstrate these reiterations and shifts between different reasoning activities as well as explain his or her thoughts or heuristics for solving a task (Hilbert et al., 2008), which is why we used video modeling examples rather than text-based worked examples. Third, we included judgments of learning in our design to assess potential negative side effects of video modeling examples.

In line with research on the worked example effect and on sequencing instruction and inquiry (Barzilai and Blau, 2014, Lazonder et al., 2010, Leppink et al., 2014, Van Gog et al., 2011, Wecker et al., 2013), we hypothesized that watching a video modeling example first would be more effective (better learning outcomes; Hypothesis 1a) and efficient (lower mental effort and perceived difficulty; Hypothesis 1b) than solving an inquiry task first. Moreover, we investigated whether the instructional conditions would affect learners’ JoLs. In line with research on self-regulated learning and on preparation for future learning as well as productive failure (Arena and Schwartz, 2014, Baars et al., 2014, Baars et al., 2016, Kapur, 2012, Loibl and Rummel, 2014), we expected that the example-first groups would be overconfident in their JoLs in the first training phase (Hypothesis 2a), whereas the task-first groups would give more accurate JoLs (Hypothesis 2b). Moreover, we hypothesized that solving an inquiry task after having watched a video modeling example would lead to more accurate monitoring after the second training phase (Hypothesis 2c), whereas studying a second video modeling example would lead to overconfidence (Hypothesis 2d).

Section snippets

Participants and design

Participants were 107 seventh grade students from two schools in Southern Germany (61 female, age M = 12.46 years, SD = 0.56). Participants were enrolled in their first physics course. Because data collection took place at the beginning of the school term, students should have been novices concerning the topic of energy in physics. Participation in the study was voluntary, and written informed consent was obtained from parents and children. All participants engaged in two training phases, each

Results

First, we checked whether there were scientific reasoning experts among the participants (cf. Chen & Klahr, 1999). We categorized participants who answered 100% of the items of the scientific reasoning pretest correctly as scientific reasoning experts. Nine participants were categorized as scientific reasoning experts. After checking that these experts were randomly distributed across conditions (χ2 (3) = 4.41, p = .22), they were excluded from further analyses.
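The randomization check reported above is a chi-square goodness-of-fit test against an even distribution of the nine experts over the four conditions. The sketch below illustrates the computation with hypothetical per-condition counts (the paper reports only the total of nine experts and the resulting χ²(3) = 4.41, so this statistic will not match):

```python
# Chi-square goodness-of-fit test: are the experts evenly distributed
# across the four conditions? Counts are HYPOTHETICAL illustrations.
observed = [4, 1, 1, 3]                    # experts per condition (made up)
expected = sum(observed) / len(observed)   # 9 / 4 = 2.25 under an even split

chi2 = sum((o - expected) ** 2 / expected for o in observed)

# The critical value for df = 3 at alpha = .05 is 7.815; a statistic
# below it is consistent with a random (even) distribution, which here
# justifies excluding the experts without biasing any one condition.
print(chi2)  # 3.0 for these counts, well below 7.815
```

In practice this test is available as `scipy.stats.chisquare`, which also returns the p-value directly.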

Second, as a check on

Discussion

In the present study, we investigated the effects of presenting video modeling examples in simulation-based inquiry learning provided before, after, or instead of inquiry tasks. Learners were provided with a video modeling example or an inquiry task in a first and a second training phase. We analyzed effects on learning processes and learning outcomes, perceived difficulty and mental effort, judgments of learning (JoLs), and monitoring accuracy. Results indicated an advantage for providing

Acknowledgements

This research was funded by the LEAD Graduate School & Research Network [GSC1028], a project of the Excellence Initiative of the German federal and state governments. Juliane M. Kant was a doctoral student at the LEAD Graduate School & Research Network. We would like to thank our research assistants Anika Staib and Kathrin Raab, as well as Julia Kollmer and Dorothea Dalig for assistance during data collection and data coding. Thanks to Keri Hartman and Krista DeLeeuw for proofreading and

References (55)

  • R. Lowe

    Interrogation of a dynamic visualization during learning

    Learning and Instruction

    (2004)
  • Y.G. Mulder et al.

    Using heuristic worked examples to promote inquiry-based learning

    Learning and Instruction

    (2014)
  • F. Schmidt-Weigand et al.

    The role of spatial descriptions in learning from multimedia

    Computers in Human Behavior

    (2011)
  • K.W. Thiede et al.

    Summarizing can improve metacomprehension accuracy

    Contemporary Educational Psychology

    (2003)
  • T. Van Gog et al.

    Effects of worked examples, example-problem, and problem-example pairs on novices' learning

    Contemporary Educational Psychology

    (2011)
  • C. Zimmerman

    The development of scientific thinking skills in elementary and middle school

    Developmental Review

    (2007)
  • L. Alfieri et al.

    Does discovery-based instruction enhance learning?

    Journal of Educational Psychology

    (2011)
  • D.A. Arena et al.

    Experience and explanation: Using videogames to prepare students for formal instruction in statistics

    Journal of Science Education and Technology

    (2014)
  • M. Baars et al.

    Effects of problem solving after worked example study on primary school children's monitoring accuracy

    Applied Cognitive Psychology

    (2014)
  • M. Baars et al.

    Effects of problem solving after worked example study on secondary school children's monitoring accuracy

    Educational Psychology

    (2016)
  • Camtasia Studio. (Version 8.5) [Computer software]. Okemos, Michigan: TechSmith...
  • Z. Chen et al.

    All other things being equal: Acquisition and transfer of the control of variables strategy

    Child Development

    (1999)
  • J. Cohen

    Statistical power analysis for the behavioral sciences

    (1988)
  • G.A. Cooper et al.

    Effects of schema acquisition and rule automation on mathematical problem-solving transfer

    Journal of Educational Psychology

    (1987)
  • B. Finn et al.

    The role of memory for past test in the underconfidence with practice effect

    Journal of Experimental Psychology. Learning, Memory, and Cognition

    (2007)
  • Gizmos

    [Online simulations]

    (2017)
  • T. van Gog et al.

    Timing and frequency of mental effort measurement: Evidence in favour of repeated measures

    Applied Cognitive Psychology

    (2012)