
Open Access 14.09.2015

Clickers and formative feedback at university lectures

Authors: Kjetil Egelandsdal, Rune Johan Krumsvik

Published in: Education and Information Technologies | Issue 1/2017


Abstract

Lecturing is often criticized for being a monological and student-passive way of teaching. However, digital technology such as Student Response Systems (SRS) can be used to reconstruct the traditional lecturing format. During a series of five two-hour lectures in qualitative methods for first-year psychology students, we used SRS to conduct 4–6 interventions per lecture. In each intervention, the students were asked a subject-related question, discussed possible answers with peers and answered individually using a handheld remote control (a "clicker"). The student answers were then displayed in a histogram and followed up by the lecturer in situ. The purpose of the study was to find out whether students experienced receiving formative feedback supporting their self-monitoring from these interventions, and how the feedback was perceived. Using a Mixed Methods Research Design, data were collected in a "live survey" at the last lecture (n: 173) and three focus groups after the lecture series (n1: 6, n2: 6, n3: 2). Our findings show that most students experienced receiving formative feedback supporting their self-monitoring from the interventions. In particular, they experienced an increased awareness of their own understanding of the subject matter (feedback), what is important to learn in the subject (feed up) and what they should focus on further (feed forward). Although about 90 % of the students experienced receiving formative feedback from some part of the intervention, only slightly above half of the student group perceived the peer discussions as useful for this purpose. The focus groups related this to the students not always having peers to discuss with and to variations in the quality of the discussions.

1 Introduction

Lecturing is a widely used method of teaching at universities. However, lectures have often been criticized as ineffective when it comes to promoting student learning (Mazur 2009; Wieman 2007). Several studies support this critique, showing significantly smaller learning gains from lecturing compared to more student-active approaches (Deslauriers et al. 2011; Hake 1998; Hrepic et al. 2007; Knight and Wood 2005). The blame is commonly put on the limits of the human attention span and short-term memory required to retain information from lectures, or on a lack of opportunities for the students to engage with the subject and with peers to construct new knowledge and reflect upon their own understanding. However, digital technology such as Student Response Systems1 (SRS) can be used to reconstruct the traditional lecture to meet such challenges.
SRS are digital tools that allow large groups of students to answer multiple-choice questions; each person has a wireless remote control (a "clicker") with which selections can be made anonymously. Research on the use of SRS indicates that breaking up the traditional lecture into several mini-lectures, each followed by a subject-related question, can enhance student attention (Cain et al. 2009; Krumsvik and Ludvigsen 2012; Rush et al. 2010; Sun 2014), increase student engagement and promote student learning (Blasco-Arcas et al. 2013; Boscardin and Penuel 2012; Kay and LeSage 2009; Keough 2012; Lantz 2010; Mayer et al. 2009; Nelson et al. 2012; Oigara and Keengwe 2013; Smith et al. 2009; Smith et al. 2011). In addition, Krumsvik and Ludvigsen (2012) and Ludvigsen et al. (2015) conclude that SRS can be used to facilitate a formative assessment practice that provides feedback to students and lecturers on student understanding. However, there is a need to study further how students experience the feedback from this practice, in terms of different kinds of feedback, different feedback sources and student characteristics.
Building on the previous work of Krumsvik and Ludvigsen (2012) and Ludvigsen et al. (2015) we used this technology to create 4–6 interventions during a series of five two-hour lectures in the subject of “Qualitative Methods” at the University of Bergen. In each intervention, the students were (a) asked a question from the topic of the mini-lecture, (b) discussed possible answers in pairs and (c) answered the question individually using the clickers. The student answers were then (d) displayed in a histogram on a big screen, and (e) the lecturer followed up the answers by asking the students to explain their reasoning in plenary and provided them with feedback based on their response.
The questions consisted of conceptual questions and case questions with and without the use of video. The conceptual questions asked the students to define key concepts, e.g. “What is a research design?” The case questions asked the students to analyze cases based on short texts and video clips, e.g. the students were shown a video about adolescents’ physical activities and diets and were asked, “Which research method do you consider most appropriate to study such a case?” Some of the case questions had more than one correct alternative answer, in order to stimulate reflection and discussion beyond trying to figure out which button to press.
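To make the mechanics of an intervention concrete, the sketch below shows a minimal, hypothetical version of step (d): tallying the anonymous clicker answers and rendering them as a histogram for the follow-up. It is an illustration only (the function, options and votes are invented), not the SRS software used in these lectures.

from collections import Counter

def show_histogram(question: str, options: list[str], answers: list[str]) -> None:
    # Print one '#' per vote for each option, like the big-screen histogram.
    counts = Counter(answers)
    print(question)
    for opt in options:
        votes = counts.get(opt, 0)
        print(f"  {opt}: {'#' * votes} ({votes})")

# Hypothetical conceptual question with made-up votes.
show_histogram(
    "What is a research design?",
    options=["A", "B", "C", "D"],
    answers=["A", "C", "C", "B", "C", "D", "C", "A", "C"],
)

Displaying the whole answer distribution, rather than only the correct option, is what allows the lecturer to adapt the follow-up in step (e) to where the group actually stands.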
Informed by Black and Wiliam’s (2009) theory of formative assessment, the primary goal of the interventions was to create situations that provided information about the students’ understanding for both the teacher and students themselves. In this paper, such evidence and advice on improvement is understood as feedback. A substantial amount of research shows that feedback can have a big impact on student learning (Black and Wiliam 1998; Evans 2013; Hattie and Timperley 2007; Kluger and DeNisi 1996; Shute 2008). On the other hand, attempts at communicating feedback do not always result in learning, and in some cases, the feedback communicated might inhibit learning instead of promoting it. This can partly be explained by the content and the timing of the feedback message (Hattie and Timperley 2007; Kluger and DeNisi 1996; Shute 2008), but ultimately depends on how the information is received and used (Carless et al. 2010; Hattie and Gan 2011; Higgins et al. 2001; Nicol and Macfarlane-Dick 2006; Sadler 2010).
Thus, when discussing feedback, we believe it is important to distinguish between (a) "feedback as something given" and (b) "feedback as an experience." In the first case, feedback is intentionally expressed by someone commenting on a performance or product. In the second case, feedback is an event where something external informs the student about her understanding or skills, regardless of intentions. For example, reading a book or talking to someone can potentially tell a student something about her understanding of a topic, just as playing a tennis match against another player can tell her something about her abilities in tennis. Focusing on the latter kind of feedback, we have used the term 'formative feedback' to conceptualize feedback that actually becomes a learning experience.
We assumed that students can potentially experience two kinds of formative feedback from the interventions: (a) feedback that contributes directly to a better subject understanding, and (b) feedback that contributes to self-monitoring. The first kind of feedback concerns information that helps the students understand something new about the subject matter. The second kind of feedback concerns information that helps raise the students’ awareness of their own understanding, and thus, supports the self-monitoring of their learning process.
This paper focuses on the latter kind of formative feedback (supporting self-monitoring). From the interventions, three potential feedback sources are considered: (a) feedback from the questions and display of answers, (b) feedback from the peer discussions, and (c) feedback from the follow-up phase (from the lecturer and peers in the plenary session). Considering these sources, the aim of the study has been to answer two research questions:
1) Do students experience receiving formative feedback that supports their self-monitoring from interventions using SRS, and how do students perceive the feedback?
2) Which parts of the interventions, if any, do students experience receiving formative feedback from, and how do the students perceive the different parts?
Although all students to a certain degree monitor their learning process, not everyone does this effectively (Nicol and Macfarlane-Dick 2006). The research of Kruger and Dunning (1999) has shown that people who are unskilled in various domains tend to overestimate their own abilities. In the context of higher education, such metacognitive "errors" can lead to spending an insufficient amount of time studying, or to choosing a focus that is counterproductive. However, there is a potential for reducing such errors by facilitating learning activities that increase the students' awareness of (a) what they need to learn, (b) what they actually do understand, and (c) how to improve (Black and Wiliam 1998; Evans 2013; Hattie and Timperley 2007; Yorke 2003). In terms of feedback, Hattie and Timperley (2007) have conceptualized these strands of information as (a) feed up, (b) feedback and (c) feed forward. These concepts are considered interdependent, because in order to assess what the next step in the studying process should be (feed forward), it is necessary to know both where the student is (feedback) and where s/he is supposed to end up (feed up). In this study, these concepts serve as a theoretical framework for investigating the students' experience of feedback.

2 Research design

2.1 Design and methods

To investigate the research questions, a sequential Mixed Methods Design was used. In sequential designs, the "conclusions based on the results of the first strand lead to the formulation of design components for the next strand" (Teddlie and Tashakkori 2009, p. 153). This study consisted of two strands of data collection: a "live survey" and focus groups. The quantitative survey data were primarily intended to answer the first part of the research questions, while the qualitative data from the focus groups were intended to answer the second part (Table 1).
Table 1
Design of the study

Aim of the study: To explore an instructional design consisting of feedback interventions using a Student Response System (SRS) at university lectures.

Research questions:
1. Do students experience receiving formative feedback that supports their self-monitoring from interventions using SRS, and how do students perceive the feedback?
2. Which parts of the interventions, if any, do the students experience receiving formative feedback from, and how do the students perceive the different parts?

Data collection (sequential mixed methods design): "Live survey" (using SRS at the last lecture) and focus groups.

Population: Psychology students at the University of Bergen attending lectures in Qualitative Methods as part of a one-year introductory program.

Sample: Survey: 173 students. Focus groups: two groups of six students and one group of two students.

Analysis in SPSS: Descriptive statistics, Pearson correlation, reliability (Cronbach's alpha) and multiple regression analysis.

Analysis in NVivo: Theoretically driven analysis; categorization of meaning.
The survey was developed using a five-point Likert scale and conducted "live" at the last lecture using SRS. As illustrated in Table 2 below, the students' experiences of formative feedback were operationalized and indexed using Hattie and Timperley's (2007) concepts: feed up, feedback and feed forward. These categories contained questions about whether the lecturer's use of clickers had increased the students' awareness of (a) "what is important to learn" and "what is important for the exams" (feed up); (b) "what they had understood so far" and "revealed misunderstandings" (feedback); and (c) "what they needed to focus on further" and "how to allocate their study time" (feed forward). The students were informed that they should take the interventions as a whole into consideration when answering the questions.
Table 2
Survey questions
Formative feedback supporting self-monitoring (index variable, 6 items)
 
Compared to other lectures in your program, to what extent did the lecturer's use of clickers...
 a) Feed up
‐ make you aware of what is important to learn?
‐ make you aware of what is important for the exams?
 b) Feedback
‐ make you aware of what you have understood so far?
‐ reveal misunderstandings?
 c) Feed forward
‐ make you aware of what you need to focus on further?
‐ help you assess how much time you should use on different parts of the subject?
Motivational beliefs and attitudes (no indexing)
 Self-efficacy
‐ I believe I will learn the subject matter in PSYK102a.
‐ I believe I will get a good grade in PSYK102.
 Intrinsic goal orientation
‐ In a subject like PSYK102, I prefer course material that arouses my curiosity, even if it is difficult to learn.
‐ In a subject like PSYK102, I study the most interesting parts of the course material regardless of whether it pays off on the exam or not.
 Extrinsic goal orientation
‐ My main focus in PSYK102 is to get a good grade so I can get a high grade point average.
‐ It is important for me to get a good grade in PSYK102 so I can show my abilities to future employers.
 Attitudes towards the subject matter
‐ I generally find the subject matter in PSYK102 interesting.
‐ I think it is useful for me to learn the subject matter in PSYK102.
 Control of learning beliefs
‐ If I study purposefully, I will learn the subject matter in PSYK102.
‐ If I do not understand the subject matter in PSYK102, it is because I have not studied hard enough.
aPSYK102 is the course code
The survey included variables such as gender, age, grade average, credit points, attitude towards lectures (lectures matter for my learning), effort (hours studying a week), and lecture attendance. In addition, ten items for different motivational beliefs and attitudes were included in five different categories: self-efficacy, intrinsic and extrinsic goal orientation, attitude towards the subject matter and control of learning beliefs (see Table 2 above). The items were constructed to cover different aspects of each category, e.g. the variable for self-efficacy contained items for both self-efficacy for learning and performance. In the last part of the survey, the students were asked from which parts of the lecture, if any, they experienced receiving feed up, feedback and feed forward. The focus groups were interviewed a few days after the survey. As a major part of the discussion, the students were presented with results from the survey and asked to comment on the findings based on their experiences.

2.2 Participants and context

The population studied was psychology students at the University of Bergen attending lectures in Qualitative Methods as a part of a one-year introductory program. The grade average for being accepted in this program is five, which is fairly high.2 Thus, the student group is quite homogenous when it comes to previous academic achievements. The learning environment in this program has been previously described as highly competitive and the workload as extensive (Nordmo and Samara 2009).
Five lectures were held for two student groups over a relatively short time span, two and a half weeks, at the beginning of the semester. Attending the lectures was not mandatory, and attendance declined slightly over the five lectures. The average attendance at the lectures was 218 students. A total of 173 students participated in the "live survey", making the response rate 100 %3 for the last lecture and 79 % relative to the average attendance. Two groups of six students and one group of two students participated in the focus groups.

2.3 Validity and limitations

Drafts of the survey were discussed with two fellow researchers, and adjusted in light of their feedback. A final draft was then piloted with three students from the population who were asked to answer the survey and write down their thoughts and reactions to the questions. Afterwards a researcher went through the survey with the students and discussed their interpretation of the questions. Some questions were reconstructed after the pilot.
Although studying an instructional design in a real educational context strengthens the ecological validity, it also increases the complexity. Factors such as student and subject characteristics, student preparations, teaching skills of the lecturer, the questions used, the quality of the peer discussions, etc. can potentially affect how the students experience the feedback interventions. In other words, studying the same instructional design in different contexts might give varying results. However, this is the reality for both practitioners and researchers, and thus, not a problem that can be eliminated completely, but a challenge that needs to be taken into consideration. We addressed this challenge by using a Mixed Methods Design combining quantitative and qualitative methods. This allowed us to consider both how the interventions were experienced in general, and how some of the students perceived the implementation in the particular context. In addition, we believe that the immediacy of conducting the survey “live” at the last lecture with a 100 % response rate and getting the students’ reactions to the survey data in the focus groups strengthened the validity.

2.4 Analysis

The qualitative (focus group) and quantitative (survey) data were analyzed separately. The qualitative data were first fully transcribed, then analyzed by meaning categorization in NVivo using the different feedback types (Fig. 1) and feedback sources (Fig. 2) as an overarching framework. The quantitative data were analyzed in SPSS. The analyses conducted were descriptive statistics, Pearson correlation, reliability (Cronbach's alpha) and linear multiple regression analysis. The significance level was set to 0.05.
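For readers who want to reproduce this pipeline outside SPSS, the sketch below runs the same family of analyses in Python. It is a minimal sketch under stated assumptions: the column names (ff_item1–ff_item6 for the six formative-feedback items, plus the attitude and effort variables) are hypothetical, and synthetic Likert responses stand in for the real survey data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 168  # roughly the N reported for the index variable in the results

# Synthetic five-point Likert answers; a real analysis would load the survey data instead.
items = [f"ff_item{i}" for i in range(1, 7)]  # two items each for feed up, feedback and feed forward
df = pd.DataFrame(rng.integers(1, 6, size=(n, len(items))), columns=items)
for col in ["attitude_lectures", "subject_useful", "subject_interesting", "effort_hours"]:
    df[col] = rng.integers(1, 6, size=n)

# Index variable: the mean of the six formative-feedback items.
df["formative_feedback"] = df[items].mean(axis=1)
print(df["formative_feedback"].describe())  # descriptive statistics (M, SD, ...)

# Reliability: Cronbach's alpha for the six-item index.
k = len(items)
alpha = (k / (k - 1)) * (1 - df[items].var(ddof=1).sum() / df[items].sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")

# Pearson correlation with a two-tailed p-value (significance level 0.05).
r, p = pearsonr(df["formative_feedback"], df["attitude_lectures"])
print(f"r = {r:.2f}, p = {p:.3f}")

# Linear multiple regression of the feedback index on attitudes and effort.
X = sm.add_constant(df[["subject_useful", "subject_interesting", "attitude_lectures", "effort_hours"]])
print(sm.OLS(df["formative_feedback"], X).fit().summary())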

3 Results

3.1 Formative feedback contributing to self-monitoring

In the focus groups, the students stated that the interventions made them pay attention and think about what they had learned, helped reduce cognitive load, enabled them to test whether they were able to use new concepts, and confirmed what they had and had not understood.
…one becomes a lot more activated cause when you give a specific answer, not just like, say, inside your head, “I think it is so,” when you actually have answered, and it (the answer) is wrong or right, then you are much more aware of what you know and not, than if you just had thought about it yourself. (1)
As illustrated in excerpt (1) above, some students experienced that discussing and answering questions during the lecture challenged them to become more aware of their learning progress, as opposed to reflecting on this internally as in a traditional lecture.
Results from the survey showed that students in general experienced receiving a greater extent of formative feedback from the interventions, compared with other lectures in their program (N: 168, M: 3.76, SD: 0.54). The Cronbach’s alpha for “formative feedback” (α: 0.78, items: 6) showed a good internal consistency for this construct. However, as the histogram (Fig. 3) below illustrates, there is a certain discrepancy between the student answers to the different feedback questions. Presented with these findings in the focus groups, the students shared their interpretations and experiences in light of the (1) feed up-, (2) feedback- and (3) feed forward-questions.
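For reference, the reliability coefficient reported above is conventionally computed with the standard formula for Cronbach's alpha (the general definition, not an equation given in the paper), where k is the number of items (here k = 6), \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the summed index score:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

An alpha around 0.8 for a six-item index, as obtained here, is commonly read as good internal consistency.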

3.1.1 Feed up: Where am I going?

In the category of feed up, 80 % (N: 172; M: 3.98) of the students answered that compared to other lectures, the interventions had provided them with a greater extent of information about what was important to learn in the subject, while 47 % (N: 170; M: 3.34) answered that the interventions provided them with a greater extent of information about what they needed to know to do well on their exam. In the focus groups, the students gave four main interpretations for the discrepancy between the student answers to the two “feed up” questions:
1) Trouble trusting the alignment between the lectures and the exams: lecturers in general might have their own preferences about what is important, which might not correlate with the focus of the exam.
2) The information from the interventions is useful, but insufficient: while the use of clickers can help the students understand what is important to learn in the subject, the exam is essay-based, and thus additional knowledge and skills are required.
3) The students' focus is not yet on the exam: not all students have started thinking about the exam at the beginning of the semester, and thus the students are primarily focusing on learning during the lectures, not on the exam.
4) Relevance: some students might not experience the interventions as relevant for the exam.

3.1.2 Feedback: How am I going?

In the category of feedback, 95 % (N: 171; M: 4.47) of the students answered that the interventions provided them with a greater extent of information about how well they had understood the subject matter, compared to other lectures; while 76 % (N: 172; M: 3.89) of the students answered that the interventions had also revealed misunderstandings to a greater extent. Presented with these results, the students gave five main interpretations based on their experiences:
1) Either understanding or no understanding, not misunderstandings: students might experience that they either understood or did not understand the subject matter in question, and thus no misunderstandings were revealed.
2) Expansion of knowledge without misunderstanding: several alternative answers might be correct, so even though an answer might not be the best one, it is still correct; thus, some students might not experience having misunderstood, even though their knowledge was expanded.
3) Unclear answers: it might have been unclear to the students, from the lecturer's explanation, what the correct or incorrect answer was.
4) The students' concept of what a "misunderstanding" is might differ, e.g.:
… it will probably depend on how you experience having misunderstood in a way. If 80 % have given the same incorrect answer as you, you may feel that you have not misunderstood, that it is a general thing then, that everybody has in common. Then you do not feel like you have had a big misunderstanding. (2)
5) Unconscious transformation of understanding: a student's understanding might be transformed through the peer discussions from misunderstanding to understanding without the student recognizing it, e.g.:
Several times during the (peer) discussions, I have noticed that although I sometimes think that I have the right answer, as we have discussed I have felt that the opinion of the others has become my own opinion through the discussion, and then I have answered correctly (with the clicker) and thought, "Yes, I understand this then", even though I did not really think the same thing in the beginning; so I trick myself a bit. (3)

3.1.3 Feed forward: Where to next?

In the category of feed forward, 73 % (N: 171; M: 3.89) of the students answered that compared to other lectures, the feedback interventions provided them with a greater extent of information about what they needed to study further in the subject, while only 26 % (N: 173; M: 3.02) of the students answered that the interventions provided them with a greater extent of information about how much time they should spend on different parts of the subject area. In the focus groups, the students gave three interpretations as to why so few students experienced that the interventions had helped them with "time allocation" compared to "focus":
1) Time allocation depends on the students' studying strategies: students may have different strategies for how to distribute their time, e.g. the lectures might be considered less important than looking at previous exams, or the students may not have studying strategies that are time-oriented in such a way.
2) Time allocation depends on where the students are in their process of studying: e.g. early in the semester there are a lot of things the students need to work on, and a student might work more systematically towards the end of a semester than in the beginning.
3) Time allocation is difficult because you never know how long understanding something will take.

3.1.4 Statistical analysis

The Pearson correlation (Table 3) shows significant positive correlations between the students' experience of formative feedback from the interventions and the variables "attitude towards lectures" (M: 4.29; SD: 0.81), "prefers the subject matter that makes me curious" (M: 4.42; SD: 0.82), "the subject is useful to learn" (M: 4.40; SD: 0.77), "the subject is interesting" (M: 3.40; SD: 1.05), "if I work hard enough I will learn" (M: 4.13; SD: 0.92) and "if I study purposefully, I will learn" (M: 4.71; SD: 0.53). However, the correlations are quite weak, which might be due to low variance in the student answers.
Table 3
Pearson correlation for formative feedback (Sig. 2-tailed)

Attitude towards lectures: Lectures matter for my learning. r = 0.20, p = 0.00, N = 167
Intrinsic goal orientation: Prefers the subject matter that makes me curious. r = 0.17, p = 0.03, N = 166
Attitudes towards the subject: The subject is useful to learn. r = 0.23, p = 0.00, N = 168
Attitudes towards the subject: The subject is interesting. r = 0.25, p = 0.00, N = 167
Control of learning beliefs: If I work hard enough I will learn. r = 0.18, p = 0.02, N = 167
Control of learning beliefs: If I study purposefully, I will learn. r = 0.22, p = 0.00, N = 167
The regression coefficients in Table 4 show a significant relationship between "formative feedback" and four variables (N: 166; adjusted R square: 0.15; F: 7.972; p < .00). The variables for attitudes towards the subject ("The subject is useful to learn" and "The subject is interesting") and the variable for "attitude towards lectures" correlated positively, while the variable for effort ("hours of studying") correlated negatively. "Effort" also correlates negatively with "attitude towards lectures" in the Pearson correlation (Table 5) to a weak degree, and positively with attitudes towards the subject matter: with "the subject is useful to learn" to a weak degree and with "the subject is interesting" to a mild degree.
Table 4
Multiple linear regression analysis for formative feedback (B = unstandardized coefficient; Beta = standardized coefficient)

(Constant): B = 2.33, Std. error = 0.34, T = 6.92, Sig. = 0.00
Attitudes towards the subject: The subject is useful to learn. B = 0.13, Std. error = 0.05, Beta = 0.18, T = 2.36, Sig. = 0.02
Attitudes towards the subject: The subject is interesting. B = 0.14, Std. error = 0.04, Beta = 0.28, T = 3.45, Sig. = 0.00
Attitude towards lectures: Lectures matter for my learning. B = 0.15, Std. error = 0.05, Beta = 0.21, T = 2.83, Sig. = 0.01
Effort (hours studying): B = −0.10, Std. error = 0.05, Beta = −0.18, T = −2.26, Sig. = 0.03
Table 5
Pearson correlation for effort (hours studying) (Sig. 2-tailed)

Correlations of effort (hours of studying) with:
Attitude towards lectures: Lectures matter for my learning. r = −0.16, p = 0.37, N = 171
Attitudes towards the subject: The subject is useful to learn. r = 0.20, p = 0.01, N = 168
Attitudes towards the subject: The subject is interesting. r = 0.34, p = 0.00, N = 167
Variables such as "gender", "age", "grade average", "credit points" and "lecture attendance" showed no significant correlation with the students' experience of formative feedback in the Pearson correlation or the regression analysis.

3.2 Formative feedback from different parts of the interventions

Figure 4 illustrates the students' experience of receiving feed up, feedback and feed forward from different parts of the interventions. The alternatives with the highest frequency were a combination of all the parts of the intervention (32–34 %) and a combination of the clicker Q&As and the follow-up by the lecturer (19–28 %). Few students (4–10 %) experienced that other parts of the lecture had been more useful for receiving formative feedback than the interventions, and even fewer (2–3 %) experienced no parts of the lecture as useful for receiving formative feedback. In total, fewer students experienced the peer discussions (54–58 %) as useful for receiving formative feedback than the clicker Q&As (66–72 %) and the lecturer's follow-up (70–77 %).

3.2.1 The peer discussion

In the focus groups, students mentioned two main challenges related to the peer discussions. First, they sometimes experienced sitting alone without anyone to discuss with. Second, they experienced that the quality of the discussion depended on who they were sitting with, and gave the following negative examples: (a) if the peers did not have an opinion on the question, there was not much to discuss; (b) the peer might not be willing to discuss; (c) some students found it uncomfortable discussing with someone they did not know; and (d) the discussions were sometimes superficial, simply talking about what the correct answer was, not debating why. In addition, some students experienced that lectures early in the morning led to poorer discussions, and that if they had not read before the lecture, they did not have much to discuss. On the other hand, when the peer discussions were working, the students experienced that the discussions helped them reflect upon their own understanding.
My favorite section is the discussions, because then I have to think. Then I kind of start thinking: “Okay, what is it that I do not understand now, why do I say that, and why do I give this answer?” (4)
If you are alone and you are thinking that you have read and know exactly what the answer is, and then press and you are done and then do something else and do not use the time to reflect, then the clickers lose their function in a way, if you do not get to discuss it properly. Cause then it will be just like “yes”, and you will not think anything more about it, and sometimes it will happen that you answer wrong too, and then you’ll think, “Oh yeah, but I knew that really, I did not misunderstand it, I just pressed wrong.” (5)
As illustrated in the two excerpts above, having to discuss questions with peers invited the students to express their understanding verbally. This challenged the students to test out and visualize their understanding in a way that might have made a stronger impression than reflecting on a question in silence.

3.2.2 The clicker questions

In general, students experienced that the clicker questions gave them an idea of what was important to understand in the subject and made them reflect on what they had and had not understood:
…the “clickers” give us a chance to be completely honest with ourselves, because there is an option that says “I do not know” that we can choose, and it becomes an “aha” experience for us. There is no one who raises their hand to say “no, I do not know the answer to this.” (6)
The students stated that they appreciated that some of the questions had several options that were correct, or slightly correct, because this created good discussions, made them more focused on processing all the options and more aware of the nuances.
The students were split in their views of how difficult the questions were. Some students found the questions suitable when they had prepared for the lecture; others found the questions difficult but possible to answer even though they had not read; and some found the questions easy to answer if they had read in advance. One student stated that he preferred the most challenging questions, while another student believed that it is important for motivation that the questions are not too difficult.
The students seemed to find the video case questions particularly interesting. Several students stated that they appreciated that these questions let them test what they had learned on a concrete practical case, in order to get a conception of what they had understood. In addition, the students found the video cases useful for understanding the subject matter and for providing a break in the flow of information. Several students also stated that these parts of the lecture were easiest to remember. A couple of students, however, found the videos easier to remember than the lesson tied to them. Others expressed that the video cases helped them remember and understand the lessons. One student claimed that, either way, the use of video cases might be useful when she was studying later, because then she would have something to relate the literature to. Another student pointed out that how the students remember the video cases depends on their current knowledge and how familiar they are with the concepts used. In relation to this, a student reflected that using a combination of conceptual and case questions is important, because having a conceptual understanding is a precondition for interpreting the cases successfully.

3.2.3 The follow-up phase

In general, the students appreciated the lecturer adapting his explanations to the student answers, getting an explanation for the different alternatives (not just the correct ones), being presented with different ways of thinking and possible misunderstandings, receiving feedback that could be related to their own understanding, and getting an explanation even though they had answered correctly.
Some students emphasized that when there are several correct, or slightly correct, answers, it is important that the lecturer's follow-up is clear in order to avoid confusion. A couple of students would have liked the lecturer to be clearer on what the correct answer was, and why, in the follow-up phase, while another student pointed out that a simple correct answer is not always possible. He suspected that the transition from high school to university might challenge some of the students' epistemic beliefs:
…I think many (students) are previously very used to receiving very specific answers, cause we are first-year students, and many of us are at least fairly new students and are used to getting a sort of definitive answer, but here there is not always a definitive answer. (8)
Some of the students expressed that they appreciated being invited to give their explanations in the follow-up phase, and that listening to other students in this phase offered new perspectives beyond what was discussed in the peer groups. Some students stated that they did not like to give answers in plenary, but appreciated that other students did this. Several students experienced that it was sometimes difficult to hear when other students spoke in the auditorium, and wished the lecturer would repeat what was said.

4 Discussion

4.1 Students’ experience of formative feedback

The survey data show that the students in general experienced receiving formative feedback that supported their self-monitoring to a greater extent from the interventions than from other lectures in their program. The students interviewed in the focus groups added to this by stating that the interventions made them reflect upon what they had learned, and enabled them to test out and receive feedback on their understanding.
The regression analysis showed that students who had a positive attitude towards lectures and the subject, and who had spent fewer hours studying (effort), tended to experience receiving formative feedback from the interventions to a greater extent than their peers. Together, these variables explain fifteen percent of the variance in the students' experience of the feedback. In the Pearson correlation, "effort" also showed a weak negative relationship with "attitude towards lectures". Considering that the lectures are not mandatory, one could suspect that this correlation would have been stronger if all students in the program had been present.
It seems likely that students who had spent many hours studying "qualitative methods" understood the subject better than their peers who had studied less. Hence, the difficulty of the questions used in the interventions might not have met the learning needs of the "high effort students". In addition, these students might also have been more aware of their own understanding, and thus depended less on external support. Kruger and Dunning (1999) found that as people's skills in a domain increase, so does their ability to self-assess their competence. So even though effort correlates positively with the students' attitudes towards the subject matter, the interventions might be experienced as less useful by "high effort students", because they do not tell these students more than they already know. Students with fewer hours of studying, combined with an interest in the subject, are however more likely to find the interventions useful, because the interventions might be better adapted to their learning needs.
The survey data also show that more students experienced receiving feedback than feed up and feed forward from the interventions. This discrepancy might not be surprising given the way the interventions are structured. Being asked key questions in the subject requires the students to test their current level of understanding, and raises their awareness of what they have and have not understood (feedback). When it comes to feed up and feed forward, this kind of information requires additional inferences. Understanding what is important to learn (feed up) from the interventions needs to be connected to an understanding of how the feedback information relates to the learning goals in the subject; and understanding how to improve (feed forward) is connected to the students’ assessment of what they have understood so far (feedback), what is important to learn (feed up) and their ability to interpret and use this information. There is also a discrepancy between the students’ answers in the two items within each feedback category, (4.1.1) feed up, (4.1.2) feedback and (4.1.3) feed forward.

4.1.1 Feed up: Where am I going?

Almost twice as many students reported receiving a greater extent of information about what was important to learn in the subject (80 %) than about what was important for the exam (47 %), compared to other lectures in their program. However, if it is true that the interventions raised the students' awareness of what is important to learn in the subject (as 80 % of the students experienced), this feedback should also raise their awareness of what is important to learn for the exams. On the other hand, as pointed out by the students, the interventions might not be sufficient for this purpose. It is generally harder to know what an exam will require than to identify what is important to learn in a subject. Hence, there is an element of uncertainty tied to the second question, where sources other than lectures might be considered more relevant.

4.1.2 Feedback: How am I going?

The survey showed a small discrepancy between the students' experience of the interventions in terms of revealing misunderstandings (76 %) and raising their awareness of their own understanding (95 %), compared to other lectures. Based on the focus groups, this discrepancy might be due to differences in the students' conception of misunderstandings (as illustrated in excerpts 2 and 3), or simply because the interventions either confirmed or revealed a lack of understanding among some of the students. The lectures being conducted at the beginning of the semester might have contributed to these experiences. The students' understanding of the subject might not be well developed or strongly held, so the grounds for misunderstanding something might be weak. In addition, these experiences might be related to the characteristics of the subject. In a subject like qualitative methods, where several perspectives and positions are possible, the students might find misunderstandings less transparent. Regardless, 76 % of the students experienced that the interventions had revealed misunderstandings, which is quite high.

4.1.3 Feed forward: Where to next?

The discrepancy between student answers in the category of feed forward is quite large; over two thirds of the students experienced that the interventions raised their awareness of what to focus on further, while only 26 % experienced receiving information on how much time they should spend on different parts of the subject matter. As pointed out in the focus groups, how the students assess their time allocation is likely to be related to their studying strategies and where they are in their studying process. In addition, time allocation is generally difficult, because it is hard to assess how long it will take to understand something. In other words, these issues are closely related to how the students use the information from the lectures to regulate their learning processes. When studying, a number of factors come into play, and information from sources besides lectures might be considered more important. Students also have different ways of organizing their studying, which are more or less strategic and time conscious. So, although such interventions can contribute to raising the students’ awareness of what they need to focus on further, the way this information is used will vary (see Ludvigsen et al. 2015 for findings on this).

4.2 Students’ experience of feedback from different parts of the intervention

Most students experienced receiving formative feedback from some part of the interventions (87–95 %), and about one third of the students experienced receiving formative feedback from all three parts of the interventions.
Peer discussions (54–58 %) seem to be the least favored part of the interventions for experiencing formative feedback, compared to the clicker questions and display of answers (66–72 %) and the follow-up by the lecturer (70–76 %). The focus group interviews revealed two main challenges connected to the peer discussions. The first challenge is that students sometimes experienced sitting alone and not having anyone to discuss with during the lectures. The other challenge is that the quality of the discussion depends on who they are discussing with. Students have different levels of knowledge and motivation when entering a discussion, and thus the quality of the discussions will vary. So although peer discussions might be beneficial when they are working well, the feedback received from such situations is more random than, e.g. feedback from the lecturer, because different peer combinations lead to different feedback situations. On the other hand, the focus groups showed that some students experienced that the peer discussion helped them reflect upon their own understanding because they had to provide an explanation for their answers, not just press a button (excerpts 4 and 5).
When it comes to the questions, the students seemed to find the video case questions particularly useful, because they got to test what they had learned on concrete cases. In addition, the students experienced the “video interventions” as easier to remember than other parts of the lecture. The major challenge in this part of the intervention seems to be adapting the difficulty of the questions to different student needs.
Considering the follow-up phase, the students expressed that they appreciated the lecturer adapting his explanations to the student answers and that all answers were followed up (not just the correct one). Some students also found it useful to listen to other students’ explanations in this phase, because this offered new perspectives, beyond what was discussed in the peer groups. The challenges connected to this phase seemed to be that the students sometimes found it difficult to hear when other students spoke in the auditorium, and that some students wished the lecturer’s explanations would be clearer on what the correct answer was, and why, when there were several correct, or slightly correct, answers to a question.

5 Conclusion

Results from the study show that most students did experience receiving formative feedback from the interventions. In general they experienced receiving a greater extent of formative feedback from the interventions, compared with other lectures in their program (M: 3.76), and about nine out of ten students experienced receiving feed up, feedback and feed forward from some part of the intervention. The interventions were perceived as particularly useful for increasing the students’ awareness of how well they had understood the subject matter (M: 4.47). However, the findings also suggest that the interventions were generally experienced as useful for identifying what is important to learn (M: 3.98), what to focus on further (M: 3.89) and revealing misunderstandings (M: 3.89).
Two thirds of the students experienced receiving formative feedback from either a combination of all parts of the interventions or from a combination of the use of questions and answers and the lecturer’s follow-up. Peer discussions seem to be the least favored part of the interventions when it comes to receiving formative feedback, which seems to be connected to the students not always having peers to discuss with and variations in the quality of the discussions. Nevertheless, over fifty percent of the students did experience receiving formative feedback from the peer discussions, which indicates that many of the students found this part of the intervention useful as well.
All in all, in light of such positive findings, a critical reflection is in order. As known from previous research (Boscardin and Penuel 2012; Kay and LeSage 2009; Keough 2012; Lantz 2010), students tend to enjoy using "clickers". Thus, one might suspect that positive attitudes towards the technology, combined with a certain novelty effect, might have led the students to answer the questions about the feedback more positively. On the other hand, this might be a "chicken-and-egg" situation, because the students' experience of feedback might also affect their attitudes towards the "clickers". For example, Krumsvik and Ludvigsen (2012) found that students valued the opportunities for immediate feedback that the "clickers" provided, and Han (2014) concluded that students' perception of the use of SRS is related to formative feedback and assessment approaches.
The results might also have been affected by the "Hawthorne effect": being the subject of research might have inclined the students to experience the interventions more positively. A third explanation might lie in the fact that the lectures were not mandatory; students who did not experience them as useful might have chosen not to attend, and were therefore excluded from our survey. In addition, the focal point of this study was whether students experienced receiving formative feedback from the interventions, not whether the students actually did receive formative feedback. Thus, there might be a discrepancy between the students' perceptions and their actual learning experiences.
Taking this into account, it is important to also note that the study yielded positive results despite being implemented in a real educational context under less than optimal conditions; five lectures were held over a short period of two and a half weeks; the learning environment is highly competitive which might have affected the peer discussions negatively; and the students had to use parts of the last lecture to answer survey questions, which might have led to frustration among some of them. In addition, different challenges with the interventions were also identified, such as that the level of difficulty of the questions might have suited some students better than others; not all students were engaged in high quality peer discussions; and some had difficulties hearing what other students were saying in the follow-up phase. Hence, the instructional design generated positive results despite several practical challenges.
We suggest that future research on SRS within a framework of formative feedback should study further how the feedback is actually used by the students in their studying and if, and how, the interventions contribute to student learning – both short and long term effects. In addition, the mixed findings on peer discussions compared to other parts of the interventions suggest that there is a need to study different parts of the intervention separately to identify learning effects.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Footnotes
1. Also known as Electronic Voting Systems, Audience Response Systems, Personal Response Systems and Classroom Communication Systems.
2. Norwegian high school grades are given on a scale from 1 to 6, where 6 is the highest.
3. Not all students answered every question.
References
Black, P., & Wiliam, D. (1998). Inside the black box: raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–144.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31.
Blasco-Arcas, L., Buil, I., Hernandez-Ortega, B., & Sese, F. J. (2013). Using clickers in class. The role of interactivity, active collaborative learning and engagement in learning performance. Computers & Education, 62, 102–110.
Boscardin, C., & Penuel, W. (2012). Exploring benefits of audience-response systems on learning: a review of the literature. Academic Psychiatry, 36(5), 401–407.
Cain, J., Black, E. P., & Rohr, J. (2009). An audience response system strategy to improve student motivation, attention, and feedback. American Journal of Pharmaceutical Education, 73(2).
Carless, D., Salter, D., Yang, M., & Lam, J. (2010). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395–407.
Deslauriers, L., Schelew, E., & Wieman, C. (2011). Improved learning in a large-enrollment physics class. Science, 332(6031), 862–864.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70–120.
Hake, R. R. (1998). Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66(1), 64–74.
Han, J. H. (2014). Unpacking and repacking the factors affecting students' perceptions of the use of classroom communication systems (CCS) technology. Computers & Education, 79, 159–176.
Hattie, J., & Gan, M. (2011). Instruction based on feedback. In R. E. Mayer & P. A. Alexander (Eds.), Handbook of research on learning and instruction (pp. 249–271). New York: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Higgins, R., Hartley, P., & Skelton, A. (2001). Getting the message across: the problem of communicating assessment feedback. Teaching in Higher Education, 6(2), 269–274.
Hrepic, Z., Zollman, D. A., & Rebello, N. S. (2007). Comparing students' and experts' understanding of the content of a lecture. Journal of Science Education and Technology, 16(3), 213–224.
Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience response systems: a review of the literature. Computers & Education, 53(3), 819–827.
Keough, S. M. (2012). Clickers in the classroom: a review and a replication. Journal of Management Education, 36(6), 822–847.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.
Knight, J. K., & Wood, W. B. (2005). Teaching more by lecturing less. Cell Biology Education, 4(4), 298–310.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.
Krumsvik, R. J., & Ludvigsen, K. (2012). Formative E-assessment in plenary lectures. Nordic Journal of Digital Literacy, 7(01), 36–54.
Lantz, M. E. (2010). The use of 'clickers' in the classroom: teaching innovation or merely an amusing novelty? Computers in Human Behavior, 26(4), 556–561.
Ludvigsen, K., Krumsvik, R., & Furnes, B. (2015). Creating formative feedback spaces in large lectures. Computers & Education, 88, 48–63.
Mayer, R. E., Stull, A., DeLeeuw, K., Almeroth, K., Bimber, B., Chun, D., Bulger, M., Campbell, J., Knight, A., & Zhang, H. (2009). Clickers in college classrooms: fostering learning with questioning methods in large lecture classes. Contemporary Educational Psychology, 34(1), 51–57.
Nelson, C., Hartling, L., Campbell, S., & Oswald, A. E. (2012). The effects of audience response systems on learning outcomes in health professions education. A BEME systematic review: BEME Guide No. 21. Medical Teacher, 34(6), E386–E405.
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.
Nordmo, I., & Samara, A. (2009). The study experiences of the high achievers in a competitive academic environment: a cost of success? Issues in Educational Research, 19(3), 255–270.
Oigara, J., & Keengwe, J. (2013). Students' perceptions of clickers as an instructional tool to promote active learning. Education and Information Technologies, 18(1), 15–28.
Rush, B. R., Hafen, M., Biller, D. S., Davis, E. G., Klimek, J. A., Kukanich, B., Larson, R. L., Roush, J. K., Schermerhorn, T., Wilkerson, M. J., & White, B. J. (2010). The effect of differing audience response system question types on student attention in the veterinary medical classroom. Journal of Veterinary Medical Education, 37(2), 145–153.
Sadler, D. R. (2010). Beyond feedback: developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why peer discussion improves student performance on in-class concept questions. Science, 323(5910), 122–124.
Smith, M. K., Wood, W. B., Krauter, K., & Knight, J. K. (2011). Combining peer discussion with instructor explanation increases student learning from in-class concept questions. CBE Life Sciences Education, 10(1), 55–63.
Sun, J. C.-Y. (2014). Influence of polling technologies on student engagement: an analysis of student motivation, academic performance, and brainwave data. Computers & Education, 72, 80–89.
Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research. London: SAGE Publications Ltd.
Wieman, C. (2007). Why not try a scientific approach to science education? Change: The Magazine of Higher Learning, 39(5), 9–15.
Yorke, M. (2003). Formative assessment in higher education: moves towards theory and the enhancement of pedagogic practice. Higher Education, 45(4), 477–501.
Metadata
Title: Clickers and formative feedback at university lectures
Authors: Kjetil Egelandsdal, Rune Johan Krumsvik
Publication date: 14.09.2015
Publisher: Springer US
Published in: Education and Information Technologies / Issue 1/2017
Print ISSN: 1360-2357
Electronic ISSN: 1573-7608
DOI: https://doi.org/10.1007/s10639-015-9437-x
