In the last couple of years, COVID-19 policy measures also affected university teaching and led to a boom in asynchronous online learning (e.g., Guo, 2020; Koh & Daniel, 2022; Lowenthal et al., 2020). Because of such measures, university lecturers had to transition their courses into an online format rather quickly (Schreiber, 2022). One method of choice for many lecturers was to rely on prerecorded video lectures (e.g., Pilkington & Hanif, 2021; Sokolová et al., 2022; Trifon et al., 2021; van der Keylen et al., 2020). This format typically involves recording PowerPoint slides with a voice-over by the lecturer (sometimes with the lecturer visible in a small box next to the slides). The video file is then distributed via the university's learning platform and can be watched by students in an asynchronous online learning setting. Asynchronous online learning has the potential advantage of allowing learners to work through the video lecture at a time and location of their choice. However, it also carries risks, such as interruptions and a lack of learning engagement.
Hence, there is a need for measures that encourage students to process such video lectures sufficiently, and many such measures have been researched, for example, live tutorial sessions (e.g., Pilkington & Hanif, 2021) or online reflection tasks (e.g., Geraniou & Crisan, 2019). In this paper, we focus on a pragmatic and simple method: enhancing a prerecorded video lecture with adjunct questions, which we call prompts throughout this paper. More specifically, in the current study we analyzed whether and how a video lecture can be enhanced by implementing prompts that encourage students to process the material deeply. Unlike previous research, however, we used an unsupervised asynchronous online learning scenario. Furthermore, we investigated the role of students' lack of engagement, as indicated by the self-reported number of perceived interruptions, in learning outcomes. Our aim was to use an authentic university lecture in the field to ensure a high degree of ecological validity.
The risks of asynchronous video lectures: interruptions and lack of engagement
Asynchronous video lectures enable learners to choose the time and location of their learning, but they also carry risks, such as interruptions and a lack of student engagement. First, the risk of being interrupted by other people, devices, or noises may be greater in an unsupervised place of one's own choosing than in a university hall or classroom (e.g., Benson, 1988; Blasiman et al., 2018; Chhetri, 2020). After all, the lecturer's presence might have at least some corrective influence in assuring a quiet learning environment. However, interruptions by digital devices are a ubiquitous phenomenon that afflicts many learners, gaining notoriety under the phrase digital distraction (Flanigan & Titsworth, 2020; McCoy, 2020) and the euphemism media multitasking (Hwang et al., 2014; May & Elder, 2018). Students often cannot resist using their mobile devices (such as smartphones, tablets, and laptops) for off-task purposes, which they do to an excessive extent. Already 10 years ago, Burak's (2012) surveys revealed that over 50% of students engaged in texting while actually sitting in class; in online courses, around 70% admitted to texting. Since then, many researchers have contributed various current (objective) data on students' digital off-task behavior in class. For instance, Kim et al. (2019) found that first-year college students spent over 25% of class time operating their smartphones: every 3–4 min, their smartphone distracted them for over a minute. Moreover, texting students often send and receive between 15 and 20 text messages during a given class period (Dietz & Henrich, 2014; Pettijohn et al., 2015). Finally, laptop users operate their devices 40–60% of the time for off-task reasons (Ravizza et al., 2017). Given such high levels of digital distraction and off-task behavior while sitting in class supervised by a lecturer, those numbers are presumably even higher when students sit at home supposedly following an asynchronous online lecture. Unfortunately but unsurprisingly, such behavior is detrimental to learners' academic performance (e.g., May & Elder, 2018).
Second, there is the risk of a lack of student engagement, especially with regard to prerecorded video lectures (e.g., Kuznekoff, 2020). There are various reasons for this phenomenon (e.g., Lange et al., 2022). A plausible explanation is offered by Erickson et al. (2020), who argue that video lectures encourage a more passive learning situation than face-to-face learning in class. However, it is well established that learners' mental activity is key for learning, not so much their behavioral activity (such as visible open learning activities), let alone mental passivity. This tenet is condensed in the Active Processing Stance (Renkl, 2015; Renkl & Atkinson, 2007). Many other researchers argue likewise. For instance, Chi and Wylie (2014) introduced four different modes of learner engagement in their ICAP framework: Interactive, Constructive, Active, and Passive. They emphasize the benefits of learning activities that go beyond the passive reception of a video lecture, that is, "watching the video without doing anything else" (p. 221) besides actively manipulating the video, such as via play/pause buttons. According to this framework, learners benefit from constructive learning activities such as explaining the concepts seen in the video. Fiorella and Mayer (2016) present similar arguments for the efficacy of encouraging constructive cognitive processing. They discuss the benefits of generative learning strategies in light of their SOI framework, which emphasizes the cognitive processes of Selecting, Organizing, and Integrating.
Summing up, for an asynchronous video lecture, it is essential to ensure that learners mentally engage with the learning material and perform a constructive learning activity that extends beyond passive watching. In this study, we focus on one well-established and extensively researched constructive learning activity, namely self-explaining. Yet the risk of interruptions must not be ignored: previous research has identified detrimental effects of interruptions on learning activities such as self-explaining (e.g., Hefter, 2021).
Prompts for self-explanations
As mentioned above, we focus on self-explaining as the key learning activity to help students benefit from a given video lecture. Note that self-explaining is not the present participle describing self-explanatory learning material. Self-explaining is a gerund; it denotes an ongoing activity that simply means generating an explanation to oneself. Decades of research have revealed that self-explaining is a powerful learning strategy, a constructive cognitive endeavor (e.g., Bichler et al., 2022; Bisra et al., 2018; Lachner et al., 2021). It means mentally engaging with the learning content, generating inferences, and connecting the content with prior knowledge (Chi, 2021; Wylie & Chi, 2014). Usually, students are encouraged by prompts to produce such self-explanations (e.g., Hefter et al., 2022b; Roelle & Renkl, 2020). Many studies have identified the quality of students' self-explanations as an essential predictor of learning outcomes. Not only in immediate posttests (Berthold et al., 2009; Hefter, 2021) but also in delayed posttests (Hefter et al., 2014, 2022a), self-explanation quality has mediated the effect of digital interventions on learning outcomes.
Hence, promoting self-explanations is a plausible and recommended measure to enhance asynchronous online learning (Schreiber, 2022). However, there remains the legitimate question of how instructors should ideally enhance their video lectures to promote such beneficial self-explanations. More specifically, does the simple opportunity to take notes already suffice to promote self-explanations? Or do students need to be explicitly induced, such as via a prompt? If so, which kind of prompt? Bisra et al. (2018) argued that an opportunity to generate a spontaneous self-explanation might be more effective than a prompt because it is better adapted to learners' individual knowledge gaps. On the other hand, many studies have revealed that learners seldom engage in self-explaining spontaneously and do need prompts (or training) to do so (e.g., Berthold & Renkl, 2010). Prompts can also help learners focus on the learning material's central concepts and principles (Focused Processing Stance; e.g., Berthold & Renkl, 2010). Furthermore, the kind of prompt remains an open question for an instructor who wants to enhance a video lecture, as there are many different ways to categorize the various types of potential prompts.
In the context of learning from examples or models, a typical differentiation is between principle-based and content-based prompts (e.g., Hefter et al., 2014; Schworm & Renkl, 2007). Principle-based prompts, usually called learning-domain prompts, focus on the to-be-learned principles and concepts that underlie the presented learning material (e.g., argumentation principles). By contrast, content-based prompts, usually called exemplifying-domain prompts, focus on the part of the learning material that exemplifies the principles (e.g., a discussion about the dinosaurs' extinction exemplifies argumentation principles). For learning outcomes related to the actual concepts, principle-based prompts have revealed advantages over content-based prompts (Hefter et al., 2014; Schworm & Renkl, 2007).
In the context of learning from explanations, active and constructive prompts can be distinguished (e.g., Roelle et al., 2015). This differentiation is based on the active–constructive–interactive framework by Chi (2009), a predecessor of the aforementioned ICAP framework (Chi & Wylie, 2014). Active prompts, called engaging prompts, should encourage "learners to actively think about the content of instructional explanations" (Roelle et al., 2015, p. 3). By contrast, constructive prompts, called inference prompts and tested in combination with reduced explanations, should encourage learners to generate something new that goes beyond the originally presented information. However, as Roelle et al. (2015) aptly discuss, such a constructive endeavor might not only encourage beneficial inferences; it can also lead to learners failing to generate correct inferences. Furthermore, from a more practical point of view, instructors must decide on important aspects, such as whether and how to reduce the explanations in their learning material to provide opportunities for inferences, or whether and how to implement additional support measures to deal with potential errors, such as remedial explanations, revision prompts, feedback, or adaptation.
In sum, based on these considerations, we pragmatically focus on two types of prompts that should be effective for learning from prerecorded online lectures and can be implemented without altering existing learning material or adding further support measures: principle-based prompts and elaboration-based prompts. Principle-based prompts encourage the learner to think and write about the principles and concepts to be learned. Elaboration-based prompts may be considered similar to the content-based prompts (from example-based learning) and the inference prompts (from explanation-based learning). However, they do not require any alteration of the learning material; they simply encourage the learner to think and write about an example situation in which such principles can be applied.
Hypotheses
In the present study, we supplemented a prerecorded video lecture on the topic of Cognitive Apprenticeship with different kinds of prompts and compared those types in an online field experiment. Against the previously discussed background, we aimed to investigate the instructional efficacy of different types of prompts (i.e., note prompts, principle-based self-explanation prompts, elaboration-based self-explanation prompts, and combined prompts) on students' learning processes and learning outcomes. Furthermore, we examined the roles of learners' engagement in the form of interruptions (i.e., self-reports on how often they were interrupted by other people or events/incidents) and persistence (i.e., actual participation in a delayed posttest).
Referring to learning processes and as a manipulation check, we predicted that…

- Principle-based self-explanation prompts would foster principle-based self-explanations (Hypothesis 1a).
- Elaboration-based self-explanation prompts would foster elaboration-based self-explanations (Hypothesis 1b).
- Combined principle- and elaboration-based self-explanation prompts would foster both principle-based and elaboration-based self-explanations (Hypothesis 1c).

Referring to learning outcomes, we predicted that…

- The lecture would foster learning outcomes regardless of the type of prompt (within-subjects comparison; Hypothesis 2).
- Principle-based self-explanation prompts would foster learning outcomes (between-subjects comparison; Hypothesis 3).

Referring to learners' engagement during online learning, we predicted that…

- Self-explanation quality would contribute positively to learning outcomes (Hypothesis 4a).
- Interruptions would contribute negatively to learning outcomes (Hypothesis 4b).
- Persistence would contribute positively to learning outcomes (Hypothesis 4c).