
Open Access 07-05-2024

Feedback Through Digital Application Affordances and Teacher Practice

Authors: Nilay Muslu, Marcelle A. Siegel

Published in: Journal of Science Education and Technology


Abstract

Assessment feedback is an essential way to promote student learning. Students and teachers may benefit from educational technologies during the feedback process. The purpose of this study was to identify the feedback dimensions that were fulfilled by iPad applications (apps) and to compare teacher practice to the affordances of apps. Typological data analysis was used to perform this qualitative case study. We analyzed seven apps (QR Code Reader, Schoology, Kahoot!, Nearpod, Socrative, ZipGrade, and The Physics Classroom) that a high school physics teacher used to provide feedback in a technology-enhanced classroom. Data sources included classroom video recordings and the websites of these apps. To facilitate the analysis of the data, we enhanced the feedback dimensions identified by Hatzipanagos and Warburton (2009). Our analysis highlighted the diverse capabilities of these apps with regard to supporting the following dimensions of effective feedback: dialogue, visibility, appropriateness, community, power, learning, timeliness, clearness, complexity, reflection, and action. We found that through additional discussion and interactions with students, the teacher could support dimensions that an app did not support. This study not only underscores the critical interplay between technological tools and teacher practices with regard to crafting effective feedback mechanisms but also offers practical recommendations for educators seeking to optimize technology-enhanced feedback in classroom settings. Future research is encouraged to explore the technology implementation experiences of less experienced teachers. Examining teachers working at various school levels and from various countries can offer valuable insights.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Crisp (2007) stated that the importance of feedback has been emphasized in policy documents and standards (e.g., the Next Generation Science Standards [NGSS] (NGSS Lead States, 2013)) beyond the level discussed in the assessment literature (e.g., Evans, 2013; Evans & Waring, 2011; Hattie & Timperley, 2007; Li & De Luca, 2014; Shute, 2008; Winstone & Boud, 2022). Feedback is an essential aspect of formative assessment and has strong influences on learning and achievement (Black & Wiliam, 1998; Clark, 2012; Hattie & Timperley, 2007; Havnes et al., 2012; Ruiz-Primo & Li, 2013; Yan et al., 2021). Feedback is a vital step because it allows students and instructors to communicate, and this communication helps identify student needs and improve learning. No general agreement has been reached regarding the definition of effective feedback (Evans, 2013; Shute, 2008). Feedback can be implemented in a variety of ways, and its effectiveness changes based on the student, the context, and the purpose of the feedback (Evans, 2013; Hattie & Timperley, 2007). However, certain general attributes of feedback can help match feedback to student needs. In this study, we expand the feedback dimensions identified by Hatzipanagos and Warburton (2009) to analyze effective feedback.
Research has indicated that technology can assist teachers during the feedback process, helping them meet students’ needs (Maeng, 2017). Technology can support feedback in a variety of ways: immediate feedback (e.g., Buckley et al., 2010; Zhang & Yu, 2021), personalized feedback (e.g., Penuel & Yarnall, 2005), collaborative learning communities (e.g., Lai & Ng, 2011), and feedback to the instructor (e.g., Feldman & Capobianco, 2008). An anytime-anywhere approach within technology improves communication between teachers and students (Evans, 2013), thereby promoting the feedback process.
Since feedback is an essential and widely discussed phenomenon in the literature and relevant standards, this empirical study explored the role of technology in the process of providing feedback. Technology-based feedback can be provided through a variety of mediums: internet applications, interactive multimedia, electronic games, and mobile devices (Evans, 2013). In this study, feedback was provided via mobile devices, specifically through the use of different applications (i.e., computer programs, also known as “apps”) on an iPad. We explored the potential of iPad apps to support the feedback dimensions within a high school physics course. Specifically, we compared the affordances of apps to teacher practices.

Feedback

A major aim of feedback is to improve students’ learning (Black & Wiliam, 1998; Jones & Blankenship, 2014; Ruiz-Primo & Li, 2013; Siegel et al., 2006; Winstone & Boud, 2022). Researchers have identified a gap between students’ current performance and the desired learning goal; accordingly, feedback should facilitate the narrowing of this gap (Lizzio & Wilson, 2008; Nicol & Macfarlane-Dick, 2006; Sadler, 1989). Although researchers have agreed that feedback is an important part of assessment, the definitions of feedback have varied widely (Black & Wiliam, 2009; Evans, 2013; Li & De Luca, 2014; Shute, 2008). On one hand, Kepner defined feedback as “any procedure used to inform a learner whether an instructional response is right or wrong” (as cited in Jones & Blankenship, 2014, p. 2). On the other hand, Li and De Luca (2014) used the term “assessment feedback” to refer to the comments or grades that instructors use to improve student learning. While the effectiveness of feedback has been debated, Hatzipanagos and Warburton (2009) were able to define certain common attributes.

Feedback Attributes

Hatzipanagos and Warburton (2009) summarized the attributes of feedback based on the existing feedback literature. In their paper, feedback attributes were grouped into eight categories. We expanded upon these categories by referring to recent literature on feedback. Moreover, we organized these dimensions into two meta-categories: “Strategies of feedback” and “Impact of feedback.” The dimensions included in the “Strategies of Feedback” meta-category focus on strategies and approaches related to providing feedback. Conversely, the dimensions included in the “Impact of Feedback” meta-category emphasize the outcomes and effects of feedback on students and the educational process.
Table 1 is based on Hatzipanagos and Warburton’s (2009) view of “feedback as a dialogue.” This viewpoint is grounded in the idea that feedback is an active and participative process. According to these authors, “In formative feedback, dialogue forms the mechanism by which the learner monitors, identifies, and then is able to ‘bridge’ the gap in the learning process” (p. 46). In alignment with their views, we believe that learning is a social activity. From our perspective, assessment cannot be separated from the learning process and is therefore inherently social. Because participation is pivotal in social activities, feedback must support communication among students and the teacher. Hatzipanagos and Warburton (2009) underlined the importance of communication by stating, “Communication is part of the mechanism by which the learner identifies and then bridges the gap between the current learning achievements and the goals by the tutor” (p. 47). Feedback enables students to understand their own learning progress within the community. We believe that feedback should foster the growth of learning communities and empower students to take responsibility for their own learning. It should enable students to reflect on the feedback they receive and take corresponding action.
Table 1
Dimensions of feedback.
Adapted from Hatzipanagos and Warburton (2009)*
Strategies of feedback
Dialogue
1. Feedback is provided sufficiently often and features adequate detail
2. Supports peer/tutor dialogue
3. Allows students to be active and respond to feedback
4. Supports questioning
5. Shares assessment criteria with students
Visibility
1. Discern student-learning needs/prior knowledge
2. Be able to “spot” unpredicted achieved outcomes
Appropriateness
Feedback is
1. understandable to students
2. linked to learning outcomes (constructive alignment)
3. linked to the assessment criteria
Learning
1. Focuses on learning rather than on marks or students
Timeliness
1. Quantity and timing of feedback
2. Feedback is sufficiently prompt to be useful to students
Clearness
1. Feedback should use simple language to ensure that students understand the context without struggling to understand complex terms
2. Give clear signals regarding good practices
Complexity
1. Feedback should be sufficiently complex to allow students to consider the issue; it should not provide the correct answer
Impact of feedback
Community
1. Supports learning communities
2. Supports peer assessment
Power (autonomy and ownership)
1. Supports management of students’ own learning (self-regulated learning)
2. Improves students’ confidence levels
3. Increases students’ responsibility and autonomy
Reflection
1. Encourages reflection on student work
2. Compares students’ actual performance with a standard and takes action
3. Provides information to instructor to help shape teaching (reflection in action/on action)
4. Develops self-awareness skills
Action (attribute 1: student action; attributes 2 and 3: teacher action)
1. Students receive feedback and act upon it
2. The teacher helps students set personal goals
3. Feedback helps the teacher modify the teaching
Additional sources: Shute (2008), Evans (2013), Hattie and Timperley (2007), Nicol and Macfarlane-Dick (2006), Izci et al. (2020)
Although feedback has generally been defined in the literature as referring to situations in which a teacher provides feedback to students, other directions also exist: students can provide feedback to teachers, to their peers, or to themselves. The impacts of self- and peer feedback on students’ learning should not be underestimated (Hatzipanagos & Warburton, 2009). These uses of feedback play a pivotal role in fostering student responsibility and increasing students’ engagement in their learning (Hatzipanagos & Warburton, 2009; McConnell, 2006; Sadler, 1989; To, 2022). They also contribute to the development of students’ self-assessment skills. When students receive feedback from their peers, the process improves their dialogue and promotes the exchange of diverse perspectives. Peer feedback thus empowers students by enabling them to take more responsibility and to draw on mediation beyond the student–teacher relationship.
For feedback to reach students, it must be appropriate to their needs. According to Kluger and DeNisi (1996), feedback has the greatest effect when the corresponding goals are specific and challenging and when the level of task complexity is low. However, while these attributes related to active participation should be emphasized, the importance of the timing and visibility dimensions of feedback should not be underestimated. Some researchers have claimed that providing immediate feedback has a significant effect on student learning (Black & Wiliam, 1998; Hattie & Timperley, 2007; Zhang & Yu, 2021). However, Mathan and Koedinger (2002) argued that the timing of feedback depends on the nature of the assessment task and students’ capacities. The visibility dimension focuses on monitoring students to identify their dynamic understanding and learning progress. Through such monitoring, the teacher can facilitate the creation of a shared understanding among community members (Radinsky et al., 2010). Thus, this dimension is essential for effective feedback and serves as an initial step in fostering communication between students and the teacher.

Feedback and Technology

Technology can help a teacher during the feedback process in a variety of ways (Maeng, 2017). Research on technology-based feedback (also known as e-assessment feedback) has been increasing (Evans, 2013). Such feedback can be provided through a variety of mediums, including mobile devices and internet platforms, and it is diverse: it can be synchronous or asynchronous, generated by the teacher or a computer, and directed at either individual or group learning.
Technology-based feedback can provide opportunities that would otherwise be impossible due to various factors, including time constraints, geographical limitations, and the large number of students (Gilbert et al., 2011). Technology facilitates the establishment of an environment that can support a learning community (Lai & Ng, 2011), helps teachers collect data (e.g., Feldman & Capobianco, 2008), provides immediate feedback (Buckley et al., 2010; Zhang & Yu, 2021; Balta & Tzalfilkou, 2019), provides personalized feedback (e.g., Buckley et al., 2010; Penuel & Yarnall, 2005), and facilitates self-assessment and peer assessment (Foo, 2021; Hickey et al., 2009; Ng & Lai, 2012; Yarnall et al., 2006).
Technology-based feedback impacts student motivation and engagement (DeNisi & Kluger, 2000; Zhang & Yu, 2021), and the degree of such impact varies (Evans, 2013). Gilbert et al. (2011), in their Synthesis Report of Assessment and Feedback with Technology Enhancement (SRAFTE), reported that the success of technology depends on how it is implemented rather than on the specific technology itself. Thus, engagement and the improvement of student learning depend on the implementation of specific technologies. Therefore, in this study, we explored both the affordances of apps and teacher practices.
This study offers a unique perspective on technology-enhanced feedback by examining both the affordances of feedback apps and teachers’ feedback practices. It also extends the work of Hatzipanagos and Warburton (2009) by incorporating recent feedback literature to enhance our understanding of feedback attributes. The purpose of this study is to explore the potential of application affordances in promoting feedback attributes.
The specific research questions guiding our study are as follows:
How are defined feedback dimensions fulfilled by iPad applications used in the classroom? Namely,
(a) to what extent do iPad apps fulfill the feedback dimensions?
(b) to what extent does the teacher’s use of the iPad fulfill the feedback dimensions?

Methods

In this qualitative case study, we examined the potential of technology to support feedback; specifically, we investigated whether iPad applications (“apps”) could enhance feedback attributes. Throughout the study, we maintained detailed records of our research procedures. Additionally, we conducted weekly meetings to discuss methodological decisions related to data collection and analysis, address emerging issues in the field, and validate our coding to ensure trustworthiness (Guba & Lincoln, 1989).

Research Participants

Data were collected from the classroom of a high school physics teacher. The district Science Coordinator recommended this teacher due to her reputation as an innovative educator. She actively incorporated iPads into her teaching practices. We employed a purposeful sampling approach to account for “the key constituencies relevant to the subject matter” (Ritchie et al., 2003, p. 79). This approach allowed us to gain in-depth insights into phenomena by selecting samples that could provide the most information (Creswell & Poth, 2016; Merriam, 1998).
The participant teacher, who is referred to pseudonymously as Amy, has been teaching since 1997. She has taught courses in physical science, physics, and honors physics. During her career, she has achieved National Board Certification and earned the title of Professional Development Classroom Teacher. Amy has received several local and statewide awards. She has also been honored at the national level with the prestigious Presidential Award for Excellence in Mathematics and Science Teaching. She holds both a master’s and a bachelor’s degree in science education.
Before moving to the high school where this study took place, Amy taught at a junior high school. Although she had previously used some technology in her teaching, she began incorporating iPads and other technology more extensively in the high school setting. Teachers were selected one year before the school opened. During this preparatory period, referred to as “year zero,” the school provided each teacher with an iPad; over the course of that year, the teachers began acquiring technology skills and prepared the school for its eventual opening. Amy was appointed department chair and thus attended additional workshops and conferences to extend her knowledge of the use of technology in the classroom.
This study was conducted at a public high school in the midwestern United States that featured a diverse student population; the student–teacher ratio was 18:1. The school was founded as a technology-immersed school. Before the school admitted students, its teachers underwent training in both general technology and iPad use. Teachers met quarterly during this transition year and were encouraged to use iPads in class.
For this study, the first author participated in two of Amy’s classrooms during the spring and fall semesters. Both classes were honors physics. In the first classroom, during the spring semester, Amy taught units on Newton’s Laws and Waves, while in the second classroom, she taught a unit on Uniform Motion. The classes were representative of the school’s student population in terms of gender ratio, socioeconomic status, and racial-ethnic composition.

Researchers’ Role

During the fall and spring semesters, the first author participated in Amy’s classrooms as she taught the Newton’s Laws, Waves, and Uniform Motion units. Throughout this period, the first author conducted classroom observations and recorded all of the courses. This study was conducted as part of the first author’s dissertation. As part of another study in her dissertation, she interviewed both students and the teacher and investigated students’ work. Consequently, she became very familiar with the students and the classroom environment. This closeness may thus have influenced her interpretation of the teacher’s practices.
The second author is an experienced researcher in science education and teachers’ assessment practices. She provided support throughout the study. The first and second authors engaged in weekly meetings throughout the study to discuss the study design, data collection, data analysis, and results. The second author provided valuable insights, reviewed the coding, and validated the results.

Data Sources

The data sources included classroom video recordings and the websites associated with the relevant apps. In this study, the apps that Amy preferred to use in her classroom were evaluated. All of these apps were used in the classroom for teaching and assessment purposes. These apps were QR Code Reader, Schoology, Kahoot!, Nearpod, Socrative, ZipGrade, and The Physics Classroom (Appendix 1). The apps and their corresponding websites were used to understand the affordances of the apps: each app was downloaded and then explored using the available data, and each associated website was visited and analyzed.
Eighteen classes were recorded as data sources. Regular classes were 85 min long, while two shorter classes were 45 min long (approximately 24 h of recordings in total). The researcher took pictures and field notes during the classroom observations. To understand participants, their behaviors, and the corresponding context in depth, scholars have recommended capturing a comprehensive picture of classroom observations (Glesne, 2006; Yin, 2018). In this study, classroom observations (videotapes and field notes) provided information regarding the teacher’s feedback practices.

Data Analysis

Typological data analysis was used for this study. This type of analysis is appropriate when a study has a narrow focus, data are collected for specific purposes, and the categories for analysis are predetermined (Hatch, 2002).
We used an enhanced version of the feedback dimensions identified by Hatzipanagos and Warburton (2009) for analysis. To assess the affordances of each app, the first author visited the website of each app to understand the app’s features. Our primary goal was to determine which mobile apps aligned with the feedback dimensions. The first author installed and personally tested each app; subsequently, detailed memos were generated. These memos were used to code the affordances of the apps.
To analyze teacher practices, classroom video recordings were reviewed and categorized by app. After categorization, the videos pertaining to each app were analyzed to determine the alignment of the teacher’s practices with the feedback attributes associated with each dimension. We established specific categorization criteria (Table 2): “not applicable” (0), “poor” (1), “potential” (2), and “good” (3). While we coded the affordances of the apps based on their support for the attributes associated with each dimension, teacher practices were assessed based on any teacher activities involving app usage. For example, since students could not send questions to the teacher (or to each other) via QR Code Reader, this app was coded as not supporting the “questioning” attribute associated with the dialogue dimension. Conversely, the teacher encouraged students to share information with their peers while using Kahoot!, despite the fact that this app did not support “questioning.” As a result, we coded the teacher’s practice as supporting peer assessment, which is associated with the community dimension.
Table 2
Categorization criteria for the feedback dimensions
For every dimension, a rating of “not applicable” (0) was assigned when the dimension was not affected by the app.
Strategies of feedback
Dialogue (5 attributes): “poor” (1) = 1 or fewer attributes met; “potential” (2) = 2 or 3 attributes met; “good” (3) = 4 or more attributes met
Visibility (2 attributes): “poor” = no attributes met; “potential” = 1 attribute met; “good” = all attributes met
Appropriateness (3 attributes): “poor” = no attributes met; “potential” = 1 or 2 attributes met; “good” = all attributes met
Learning (1 attribute): “poor” = no attributes met; “potential” = ½ of the attribute met (providing the correct answer and explanation); “good” = all attributes met
Timeliness (2 attributes): “poor” = no attributes met; “potential” = 1 attribute met; “good” = all attributes met
Clearness (1 attribute): “poor” = no attributes met; “potential” = ½ of the attribute met; “good” = all attributes met
Complexity (1 attribute): “poor” = no attributes met; “potential” = ½ of the attribute met (facilitating reflection before providing the correct answer); “good” = all attributes met
Impact of feedback
Community (2 attributes): “poor” = no attributes met; “potential” = 1 attribute met; “good” = all attributes met
Power (3 attributes): “poor” = no attributes met; “potential” = 1 or 2 attributes met; “good” = all attributes met
Reflection (4 attributes): “poor” = no attributes met; “potential” = 1 or 2 attributes met; “good” = 3 or more attributes met
Action (3 attributes): “poor” = no attributes met; “potential” = 1 or 2 attributes met; “good” = all attributes met
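The categorization rubric in Table 2 can be expressed as a small scoring function. The sketch below is our own hypothetical illustration in Python: the thresholds follow the table, but the names (`rate`, `ATTRIBUTE_COUNTS`) are our assumptions, not part of the study’s materials.

```python
# Hypothetical sketch of the Table 2 rubric. The thresholds follow the
# table; the function and dictionary names are our own illustration.

# Number of attributes per dimension (from Table 1)
ATTRIBUTE_COUNTS = {
    "dialogue": 5, "visibility": 2, "appropriateness": 3, "learning": 1,
    "timeliness": 2, "clearness": 1, "complexity": 1,
    "community": 2, "power": 3, "reflection": 4, "action": 3,
}

def rate(dimension, attributes_met, applicable=True):
    """Map a count of attributes met to a Table 2 rating."""
    if not applicable:                       # dimension not affected by app
        return "not applicable"
    total = ATTRIBUTE_COUNTS[dimension]
    if dimension == "dialogue":              # 5 attributes: <=1 / 2-3 / >=4
        if attributes_met <= 1:
            return "poor"
        return "potential" if attributes_met <= 3 else "good"
    if dimension == "reflection":            # 4 attributes: 0 / 1-2 / >=3
        if attributes_met == 0:
            return "poor"
        return "potential" if attributes_met <= 2 else "good"
    # All other dimensions: poor = none met, good = all met, and
    # "potential" = anything in between (including the half-met case for
    # single-attribute dimensions such as learning or complexity).
    if attributes_met == 0:
        return "poor"
    return "good" if attributes_met >= total else "potential"
```

Under this sketch, for example, an app meeting two of the five dialogue attributes would be rated “potential” (2), matching the table.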

Trustworthiness

To establish the trustworthiness of this study, several strategies were used. To establish credibility, the strategies of prolonged engagement, triangulation, peer debriefing, and member-checking were utilized. Prolonged engagement in the research environment allowed the researchers to understand the context in depth. A thorough examination of the data was facilitated by the triangulation of data collected from several sources. Additionally, member checking was employed, which involved sharing and confirming the initial results of the study with the teacher. Peer debriefing with a colleague who had experience in classroom assessment and in-service teacher education also enhanced the dependability of the study. Furthermore, providing thick and rich descriptions of the teacher’s practices and the researchers’ roles alongside detailed examples contributed to the confirmability and transferability of the study (Creswell & Poth, 2016; Lincoln & Guba, 1985). Finally, the study employed an expanded version of the feedback dimensions identified by Hatzipanagos and Warburton (2009). This approach facilitated logical inferences and clear reasoning throughout the data analysis process (Brantlinger et al., 2005).

Findings

Our findings regarding assessment feedback in relation to the affordances of feedback apps and teachers’ feedback practices are presented below. First, we present the findings regarding application affordances; next, teacher practices with the apps; and finally, a comparison of the two sets of findings. The results are depicted in the figures, which were created using Microsoft Excel.
Figure 1 displays two meta-categories and eleven feedback dimensions related to the affordances of apps. An examination revealed variations in the affordances of apps, which were rated as “good,” “potential,” “poor,” and “not applicable.”
In the strategies of feedback meta-category, visibility received the most “good” ratings (5 of 7), followed by timeliness (4 of 7) and learning (4 of 7). Dialogue and learning were the only two dimensions that received a “potential” rating, and only 1 of 7 apps received this rating. Notably, complexity was the only dimension that did not receive either a “good” or “potential” rating. The dialogue dimension received the most “poor” ratings (5 of 7), followed closely by complexity (4 of 7). Most apps were rated as “not applicable” in the dimensions of clearness (6 of 7) and appropriateness (5 of 7).
In the impact of feedback meta-category, power received the most “good” ratings (3 of 7), while the other dimensions received “good” ratings for only 1 of 7 apps. A total of 4 of 7 apps were rated as “potential” in the action and reflection dimensions, while only 1 of 7 apps received “potential” ratings in the community and power dimensions. The community dimension received the most “poor” ratings (5 of 7), followed by power (3 of 7). None of the dimensions included in the impact of feedback meta-category were rated as “not applicable.”
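As an illustration of how the “X of 7” counts reported above can be tallied from per-app ratings, the sketch below counts ratings for the visibility dimension. The five “good” ratings follow the findings; the ratings shown for QR Code Reader and The Physics Classroom are placeholders, not the authors’ actual data.

```python
# Hypothetical sketch: tallying "X of 7" summaries from per-app ratings.
from collections import Counter

visibility_affordances = {
    "QR Code Reader": "not applicable",      # placeholder rating
    "Schoology": "good",
    "Kahoot!": "good",
    "Nearpod": "good",
    "Socrative": "good",
    "ZipGrade": "good",
    "The Physics Classroom": "poor",         # placeholder rating
}

counts = Counter(visibility_affordances.values())
total_apps = len(visibility_affordances)
print(f'good: {counts["good"]} of {total_apps}')  # prints: good: 5 of 7
```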
Teacher practices related to the use of apps to provide feedback were also analyzed (Fig. 2).
Figure 2 presents the two meta-categories and eleven dimensions of feedback for teacher practices. In the strategies of feedback meta-category, all apps (7 of 7) were rated as “good” in the appropriateness and clearness dimensions, but only 1 of 7 apps received a “good” rating in the complexity dimension. In each of the remaining dimensions, 5 of 7 apps were rated as “good.” For three dimensions (dialogue, learning, and timeliness), 2 of 7 apps were rated as “potential,” while only 1 of 7 apps was rated as “potential” in the visibility and complexity dimensions. The complexity dimension received the most “poor” ratings (5 of 7), followed by visibility (1 of 7). None of the dimensions associated with the strategies of feedback meta-category was rated as “not applicable.”
In the impact of feedback meta-category, 6 of 7 apps were rated as “good” in the reflection and action dimensions. Following those, 4 of 7 apps received “good” ratings in the power dimension, while the community dimension received the fewest “good” ratings (2 of 7). The dimension that received the most “potential” ratings was community (3 of 7), followed by power (2 of 7). In the remaining dimensions, 1 of 7 apps were rated as “potential.” The community dimension received the most “poor” ratings (2 of 7), followed by power (1 of 7). None of the dimensions associated with the impact of feedback meta-category was rated as “not applicable.”
Subsequently, we compared the affordances of the apps (Fig. 1) to teacher practices (Fig. 2). While the affordances of the apps exhibited diversity, teacher practices were predominantly rated as “good” across all feedback dimensions. Using our meta-categories, we explored the differences in each dimension between the affordances of the apps and teacher practices.
In the strategies of feedback meta-category, increased “good” ratings were observed for dialogue (1 to 5), appropriateness (0 to 7), learning (4 to 5), timeliness (4 to 5), clearness (1 to 7), and complexity (0 to 1). The number of apps receiving “good” ratings remained the same (5) only in the visibility dimension. It is also important to highlight the fact that while clearness and appropriateness frequently received rankings of “not applicable” for the affordances of apps (i.e., 6 of 7 for clearness and 5 of 7 for appropriateness), these dimensions were rated as “good” for the teacher practice for all apps. Dialogue was frequently rated as “poor” for the affordances of apps (5 of 7); however, it was frequently rated as “good” for teacher practice (5 of 7). Complexity was frequently rated as “poor” concerning both the affordances of apps (4 of 7) and teacher practices (5 of 7).
In the impact of feedback meta-category, increased “good” ratings were observed for community (1 to 2), power (3 to 4), reflection (1 to 6), and action (1 to 6). It is also important to highlight the fact that while reflection and action were frequently rated as “potential” (4 of 7) for the affordances of apps, these dimensions were rated as “good” (6 of 7) for teacher practice. Community received ratings of “poor” (5 of 7), “potential” (1 of 7), and “good” (1 of 7) for the affordances of apps. Its ratings changed to “poor” (2 of 7), “potential” (3 of 7), and “good” (2 of 7) for teacher practice.
In addition to the previous analysis, we evaluated each app across all feedback dimensions for both app affordances (Fig. 3) and teacher practices (Fig. 4). This process enabled us to understand which apps performed well in terms of the two meta-categories and the eleven feedback dimensions.
Figure 3 illustrates the affordances associated with the app ratings for seven apps across the two meta-categories and the eleven feedback dimensions. An examination of each app revealed that only The Physics Classroom was applicable to every dimension, while all other apps were rated as “not applicable” in at least one dimension. To summarize each app briefly, QR Code Reader received “good” ratings in the learning dimension of the strategies of feedback meta-category and in the power dimension of the impact of feedback meta-category. Schoology received “good” ratings in the dialogue, visibility, and learning dimensions of the strategies of feedback meta-category and all dimensions of the impact of feedback meta-category. Kahoot! achieved “good” ratings in the visibility and timeliness dimensions of the strategies of feedback meta-category, but it was not rated as “good” in any dimensions of the impact of feedback meta-category. Nearpod received “good” ratings in the visibility, learning, and timeliness dimensions of the strategies of feedback meta-category, and it received ratings of “potential” in all dimensions of the impact of feedback meta-category. Socrative received “good” ratings in the visibility and timeliness dimensions and a “potential” rating in the learning dimension of the strategies of feedback meta-category, while it received ratings of “potential” in the reflection and action dimensions of the impact of feedback meta-category. ZipGrade received “good” ratings in the visibility dimension of the strategies of feedback meta-category, while it received ratings of “potential” in the reflection and action dimensions of the impact of feedback meta-category. 
Finally, The Physics Classroom received “good” ratings in the appropriateness, learning, timeliness, and clearness dimensions of the strategies of feedback meta-category as well as a rating of “good” in the power dimension and ratings of “potential” in the reflection and action dimensions of the impact of feedback meta-category.
Figure 4 presents the ratings of teacher practices for the seven apps across the two meta-categories and the eleven dimensions of feedback. Notably, all the apps were applicable to every feedback dimension in the context of teacher practices. In the summary of the ratings for each app, QR Code Reader stands out because it received “good” ratings in the dialogue, appropriateness, learning, and clearness dimensions, alongside “potential” ratings in the visibility and timeliness dimensions of the strategies of feedback meta-category. Furthermore, with regard to QR Code Reader, the power, reflection, and action dimensions of the impact of feedback meta-category were all rated as “good.” Schoology received “good” ratings across all dimensions of the strategies of feedback meta-category while also earning “good” ratings in the power, reflection, and action dimensions of the impact of feedback meta-category. For Kahoot!, “good” ratings were observed in the visibility, appropriateness, timeliness, and clearness dimensions of the strategies of feedback meta-category, whereas the dialogue and learning dimensions received “potential” ratings. Furthermore, Kahoot! also received a rating of “good” in the community dimension and ratings of “potential” in the reflection and action dimensions of the impact of feedback meta-category. Nearpod received “good” ratings in all dimensions of the strategies of feedback meta-category, with the exception of the complexity dimension, which was rated as “potential.” It also received “good” ratings in all dimensions of the impact of feedback meta-category, with the exception of the power dimension, which was rated as “potential.” Socrative received “good” ratings in all dimensions of the strategies of feedback meta-category with the exception of complexity, which received a “poor” rating.
It also received ratings of “good” in all dimensions of the impact of feedback meta-category, with the exception of the community dimension, which was rated as “potential.” ZipGrade received “good” ratings in the visibility, appropriateness, and clearness dimensions as well as “potential” ratings in the dialogue, learning, and timeliness dimensions of the strategies of feedback meta-category. It also received “good” ratings in the reflection and action dimensions and “potential” ratings in the community and power dimensions of the impact of feedback meta-category. Finally, The Physics Classroom received “good” ratings in the dialogue, appropriateness, learning, timeliness, and clearness dimensions of the strategies of feedback meta-category. In the impact of feedback meta-category, the power, reflection, and action dimensions were rated as “good,” while the community dimension was rated as “potential.”
Finally, we compared the affordances of the apps (Fig. 3) to teacher practices (Fig. 4). Our analysis revealed that the changes in the strategies of feedback meta-category from app affordances to teacher practices took two forms: either the rating stayed the same or it improved. For example, in Kahoot!, the complexity dimension was rated as “poor” for both app affordances and teacher practices. Similarly, in Socrative, the visibility dimension was rated as “good” for both app affordances and teacher practices. Some improvements were dramatic, such as in the rating of the clearness dimension for Nearpod, which improved from “not applicable (0)” to “good (3).” Some less significant improvements were also observed, such as in QR Code Reader, in which the visibility dimension rating increased from “poor (1)” to “potential (2).”
The findings regarding the impact of feedback meta-category indicated that the shifts in ratings from app affordances to teacher practices were largely consistent with those observed in the previous meta-category: teacher practices either received the same rating as the app affordances or the ratings improved. However, Schoology was an exception, as the community dimension received a “good” rating (3) for app affordances but a “poor” rating (1) for teacher practices. One notable difference in this meta-category was that the improvements were less dramatic. For example, for ZipGrade, the rating of the power dimension improved from “poor (1)” to “potential (2),” while for QR Code Reader, the rating of the action dimension improved from “poor (1)” to “good (3).”
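The overall pattern can be checked mechanically against the Table 4 ratings. The following Python sketch is our own illustrative encoding of the published scores (0 = not applicable, 1 = poor, 2 = potential, 3 = good), not part of the study’s instrumentation; it flags every app–dimension pair in which the teacher-practice rating fell below the app-affordance rating:

```python
# Illustrative encoding of the Table 4 ratings (0 = not applicable,
# 1 = poor, 2 = potential, 3 = good). Each cell is an (A, T) pair:
# A = affordances of applications, T = teacher practices.

DIMENSIONS = [
    "dialogue", "visibility", "appropriateness", "learning", "timeliness",
    "clearness", "complexity", "community", "power", "reflection", "action",
]

RATINGS = {
    "QR Code Reader":        [(1,3),(1,2),(0,3),(3,3),(0,2),(0,3),(1,1),(1,1),(3,3),(1,3),(1,3)],
    "Schoology":             [(3,3),(3,3),(0,3),(3,3),(0,3),(0,3),(0,3),(3,1),(3,3),(3,3),(3,3)],
    "Kahoot!":               [(1,2),(3,3),(0,3),(1,2),(3,3),(0,3),(1,1),(1,3),(1,1),(1,2),(1,2)],
    "Nearpod":               [(2,3),(3,3),(0,3),(3,3),(3,3),(0,3),(0,2),(2,3),(2,2),(2,3),(2,3)],
    "Socrative":             [(1,3),(3,3),(0,3),(2,3),(3,3),(0,3),(0,1),(1,2),(1,3),(2,3),(2,3)],
    "ZipGrade":              [(1,2),(3,3),(1,3),(1,2),(1,2),(0,3),(1,1),(1,2),(1,2),(2,3),(2,3)],
    "The Physics Classroom": [(1,3),(1,1),(3,3),(3,3),(3,3),(3,3),(1,1),(1,2),(3,3),(2,3),(2,3)],
}

def declines(ratings):
    """Return (app, dimension) pairs where the teacher-practice rating
    fell below the corresponding app-affordance rating."""
    return [
        (app, DIMENSIONS[i])
        for app, pairs in ratings.items()
        for i, (a, t) in enumerate(pairs)
        if t < a
    ]

print(declines(RATINGS))  # → [('Schoology', 'community')]
```

Running the sketch confirms the single exception noted above: only Schoology’s community dimension declined (from “good” to “poor”) when moving from app affordances to teacher practices; every other rating stayed the same or improved.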

Examples of the Feedback Dimensions

In the previous section, we quantitatively examined the differences between the affordances of apps and teacher practices for addressing the feedback dimensions. In this section, we provide examples along with detailed and vivid explanations (see Table 3).
Table 3
Examples of the feedback dimensions

Strategies of feedback

Dialogue
App affordances: Schoology included a discussion board on which questions or comments could be posted. This feature enabled students to respond to or ask questions of their peers or teacher.
Teacher practices: Socrative included multiple-choice questions and provided feedback to individual students in each case. The teacher asked students to repeat the Socrative exercise until they either achieved a perfect score or were satisfied. She encouraged students to ask her questions when they were confused by the feedback and to revisit their notes and calculations to correct their mistakes before repeating the exercise. She provided whole-class feedback, explained common quiz mistakes, and demonstrated corrections while allowing students to ask questions throughout this process.

Visibility
App affordances: ZipGrade scanned student responses and sent all class results to the teacher. The teacher thus had access to statistical information and individual students’ responses.
Teacher practices: While students were performing self-assessment using QR Code Reader, the teacher walked around the classroom and collected information by asking students what they got wrong, how to correct their mistakes, and whether they had questions for her.

Appropriateness
App affordances: The Physics Classroom website enabled students to check their responses using the app. Clicking on a link provided the correct responses, and some terms were hyperlinked to provide explanations. The Physics Classroom provided learning outcomes and assessment criteria.
Teacher practices: The teacher embedded Nearpod in class lectures and shared learning outcomes. The feedback provided matched the language used in the lectures.

Learning
App affordances: QR Code Reader enabled students to access an answer sheet that featured an explanation of the answers to questions. This app emphasized explanation rather than grades.
Teacher practices: Kahoot! simply informed students of the accuracy of their answers. When using Kahoot!, the teacher first organized students into groups and asked the groups to discuss each question separately before providing an answer, which was then discussed with the whole class. Although the app focused on grades, the teacher added value to it by increasing its emphasis on learning.

Timeliness
App affordances: Kahoot! provided students with immediate feedback regarding the accuracy of each answer.
Teacher practices: Via Schoology, the teacher responded to students’ questions as soon as they were received. The app accelerated the feedback process; students did not need to wait for face-to-face interaction.

Clearness
App affordances: The Physics Classroom used simple and consistent language in its explanation and assessment sections.
Teacher practices: This dimension was very teacher-dependent. The teacher used simple, understandable, and consistent language for all feedback.

Complexity
App affordances: This dimension was completely teacher-dependent; no app supported it.
Teacher practices: While using Nearpod, the teacher asked students to assess their thoughts regarding the example before she explained it.

Impact of feedback

Community
App affordances: Schoology featured a discussion board tool on which questions or comments could be posted. Students and the teacher could reply to each other’s comments and engage in discussion, which helped establish a shared understanding within the classroom.
Teacher practices: Nearpod helped the teacher observe all students’ responses, select examples, and share them with all students. The teacher additionally allowed students to discuss why a response was good or not and how to improve it. This approach helped establish a shared understanding within the classroom.

Power
App affordances: The Physics Classroom enabled students to engage in self-assessment, increasing their responsibility for their own learning and providing them with opportunities to improve their confidence.
Teacher practices: During the Socrative quiz, the teacher asked students to revisit their incorrect responses and use different sources to find the correct answer. She encouraged students to ask her questions. This approach encouraged students to take responsibility for their learning, which was not afforded by the app.

Reflection
App affordances: Schoology helped students reflect on their work through a discussion board on which the teacher or students could be asked to explain their reasoning. The teacher collected information about students from the discussion board.
Teacher practices: The teacher asked students to reflect on the feedback provided by The Physics Classroom and discuss it with their partners.

Action
App affordances: In Schoology, students could respond to feedback using a discussion board. The ability to observe all the responses helped the teacher provide both individual and whole-class feedback. The teacher thus helped students set personal goals and modified the teaching process based on the needs of the entire class.
Teacher practices: The teacher encouraged students to retake the Socrative quiz. Between quizzes, the teacher asked students to revisit the sources and submit questions verbally. The teacher used the results to collect information from the whole class, which created opportunities to modify class instruction.

Discussion and Conclusion

This study provides a nuanced analysis of technology-enhanced feedback, focusing on both the functionalities of feedback apps and teachers’ feedback methods. By drawing on the contemporary feedback literature and extending the foundational work of Hatzipanagos and Warburton (2009), this research aims to deepen our understanding of feedback attributes. Through empirical exploration, this research contributes valuable knowledge regarding the intersection of technology and pedagogy, offering implications for both educators and future research in the field of educational technology. The findings shed light on which feedback dimensions were supported by the apps and how teacher practices can enhance them. We discuss our findings in two sections: strategies for providing feedback and the guiding impact of feedback (Table 4).
One central emphasis of this study lies in the importance of investigating strategies for providing feedback. Our findings reveal that the visibility, timeliness, and learning dimensions were well supported by most of the apps in terms of both app affordance and teacher practices.
In particular, “visibility” received high ratings for most of the apps. Specifically, Schoology, Kahoot!, Socrative, Nearpod, and ZipGrade were effective in addressing the visibility dimension. This dimension highlights the need for teachers to closely monitor their students. Teachers can use these apps to monitor both individual students and the entire class, thereby contributing to the enhanced visibility of student progress.
Additionally, a majority of the apps facilitated the delivery of information to teachers and the provision of timely feedback to students. The “timeliness” dimension, which stresses the importance of providing feedback promptly if it is to be valuable, was also addressed effectively by most of the apps. Specifically, Kahoot!, Socrative, Nearpod, and The Physics Classroom were effective with respect to the timeliness dimension. These apps can help teachers provide immediate or frequent feedback to individual students or groups, thus satisfying the requirement for timely responses.
Regarding the “learning” dimension, which emphasizes the task of fostering learning rather than simply assigning grades, most apps did not prioritize providing grades to students. Specifically, QR Code Reader, Schoology, Nearpod, and The Physics Classroom were effective with regard to the learning dimension. These findings align with previous research, which has indicated that technology can indeed facilitate immediate feedback (as shown by Buckley et al., 2010; West et al., 2021; Zhang & Yu, 2021), which has been widely recognized as having a positive impact on student learning (Black & Wiliam, 1998; Hattie & Timperley, 2007; Zhang & Yu, 2021).
However, the study revealed some significant challenges pertaining to app affordances in the context of feedback strategies. To provide effective feedback, it is critical to use clear and consistent language to provide context and detail, to challenge students to think critically, and to align learning objectives with assessment criteria with the goal of obtaining a broader perspective (Fu et al., 2022; Hatzipanagos & Warburton, 2009; Izci et al., 2020; Khajeloo et al., 2022; Nicol & Macfarlane‐Dick, 2006). The apps, however, had limitations in these areas. For example, they exhibited only limited performance in the complexity and dialogue dimensions, and most could not be evaluated in the clearness and appropriateness dimensions. Nevertheless, teacher practices improved the performance of the apps in these aspects. For instance, the teacher overcame the apps’ limited support for meaningful interactions and discussions between students and the teacher by providing opportunities for students to interact orally. However, it is worth noting that “complexity” remained a challenge in terms of both app affordances and teacher practices, with only Schoology rated as “good” in terms of teacher practice. This finding could be due to the fact that these seven apps may not have been tailored to students’ specific levels, as complexity emphasizes feedback pertaining to appropriate challenges to support students’ thinking processes (Izci et al., 2020).
In terms of the guiding impact of feedback, the findings showed that most apps had the potential to support the reflection and action dimensions. Specifically, Nearpod, Socrative, ZipGrade, and The Physics Classroom had potential in these dimensions. Namely, these apps have the potential to assist students in the process of developing self-awareness skills and to encourage them to reflect on their work and make modifications (Hatzipanagos & Warburton, 2009; McConnell, 2006; Shute, 2008) as well as to allow teachers to modify their teaching (Feldman & Capobianco, 2008). Technology provides opportunities to promote active and continuous formative assessment (Conejo et al., 2016) as students reflect on their work and modify their future work. Teacher practices improved the performance of all the apps in these dimensions. For example, modifying instruction cannot be supported solely by an app because it involves a decision-making process. However, apps can help teachers make decisions by providing them with information regarding students’ learning processes.
The main challenge in this context pertained to the community dimension, suggesting difficulties with regard to fostering a sense of community or collaboration using these apps. Although the community dimension improved slightly for most apps, highlighting the positive impact of teacher involvement, Schoology’s community dimension was an exception. While this app supported the community dimension well in terms of its affordances, its support decreased in teacher practices. This divergence might indicate that although the app exhibited strong community-building features on paper, these features were not effectively leveraged in the classroom.
In this study, we explored the potential of iPad app affordances with regard to providing effective feedback. Our data highlight the importance of recognizing variability in terms of app affordances and the pivotal role played by educators in shaping the feedback process. Teacher practices play a crucial role in enhancing the feedback experience, with most dimensions showing improvements in the context of teacher practices as compared to app affordances. Educators have the potential to significantly improve the feedback process when engaging with educational apps, thus highlighting their role in optimizing feedback for students. Gilbert et al. (2011) asserted that in the context of technology-enhanced feedback, technology is merely an enabler; furthermore, these authors claimed that success lies in pedagogy. Evans (2013) highlighted the pivotal role of teachers in designing and implementing feedback. Although more research is needed to confirm this claim, our study provides evidence that is consistent with previous research. This finding highlights the fact that teacher practice plays a crucial role in enhancing the affordances of iPad apps with regard to providing effective feedback.
Our findings align with those reported by Mimouni (2022), who also asserted that supporting multimedia tools with an instructional approach can increase their effect on students’ learning. Therefore, teachers should be supported in their attempts to introduce these apps into teaching and to emphasize the proper use of apps to provide effective feedback. Our study provided data regarding the potential of apps to address the effective feedback dimensions alongside detailed examples. Teachers’ knowledge of the feedback process in their classroom practices as well as their skills and experience in presenting this feedback to their students are crucial. This proficiency is particularly essential in a technology-supported environment and serves as an impetus for supporting students’ learning outcomes.
These findings highlight the significance of considering not only the perceived potential of apps but also their tangible utility within the classroom. The effectiveness of apps may vary based on their practical implementation. Educators are encouraged to explore and experiment with different apps to identify those aligning best with their specific teaching objectives and student needs. Additionally, schools can establish guidelines or committees to evaluate and select apps that align with their educational objectives and student population. Collaborating with educational technology specialists or consulting reputable sources for app recommendations can also facilitate the selection process. Ultimately, a thorough understanding of students’ learning goals, instructional needs, and technological capabilities is essential for maximizing the benefits of app integration for feedback in the classroom. This study emphasizes the importance of continuous professional development and training. This approach ensures that educators can leverage the full potential of educational technology, thereby maximizing its impact on feedback and learning outcomes.
Another pillar that can support this process is ensuring that app developers have information regarding feedback and teachers’ feedback needs so that they can develop apps capable of meeting those needs. This analysis highlights areas where further improvements in app design are needed, especially concerning complexity and the cultivation of a sense of community among users. It underscores the need for app developers to focus on making complex concepts more accessible and understandable through their platforms. Furthermore, the prevalence of “not applicable” ratings for the “clearness” and “appropriateness” dimensions suggests that these aspects may not be adequately addressed by current apps. These dimensions are critical for creating a conducive learning environment and should be areas of improvement for app developers. Additionally, teachers can benefit when app providers furnish specific information about the strengths and specifics of their app in relation to the dimensions of feedback. Such information can help teachers select the most suitable app for providing feedback to their students.
In summary, the study encourages a balanced approach, where apps are viewed as complementary tools that can lead to significant improvements and innovations in pedagogy. As we progress in the realm of educational technology, it is crucial to recognize that apps can enhance learning experiences. However, their effectiveness is most pronounced when integrated with thoughtful pedagogy. The study communicates a clear message: apps, when they are employed by skilled and dedicated educators, have the potential to transform education and empower students to reach their full potential. This synergy between technology and teaching is the future of education, and this future is filled with promise and possibilities.

Limitations

In this study, we investigated the potential of iPad app affordances for feedback as well as the teacher’s practices when utilizing these apps for feedback. It is critical to emphasize that Amy is an experienced teacher who has received a national award and works in a school that actively supports the integration of technology into the classroom. These factors are crucial when explaining how teachers can implement technology in their classrooms effectively.
Another limitation is that our study focused on the seven apps that the teacher used in her classroom. Different apps can be used for feedback purposes and may work equally well. Therefore, teachers must pay attention to the dimensions while selecting apps. We hope that the examples we provided in Table 3 can help visualize ways of using these dimensions. It is not necessary for apps to address all the dimensions; teachers may choose an app based on their specific needs. For instance, if a teacher wants to provide timely feedback, they might use apps such as Kahoot!. On the other hand, a teacher may choose to use more than one app to provide feedback and strengthen it. We did not explore this issue since it was not the focus of our research.

Directions for Future Research

While our study exhibits only limited generalizability due to its case study design, we believe that it represents an important step toward understanding teacher practices and app affordances in the context of feedback. Our investigation highlighted the practices of an experienced teacher with a supportive institutional backdrop. Future studies should examine the experiences of teachers with varying levels of comfort and experience with technology. A longitudinal study comparing the learning outcomes of students receiving feedback from novice and veteran teachers using the same technology could offer insights into the training and support needed at different stages of a teacher’s career. Moreover, the success of technology integration in our study was partially attributed to substantial technical and institutional support. Future research should explore the spectrum of technology adoption and implementation success in environments where such support is minimal or absent, which could help identify the challenges that arise in less supportive contexts and strategies for overcoming them.
Furthermore, future investigations should aim to conduct comparative studies across various education levels as well as across diverse cultural landscapes. Such a broader approach could reveal the shared principles underlying technology-enhanced feedback as well as context-specific practices that are effective in unique educational ecosystems. Our study did not focus on students; thus, we did not collect information regarding them. Future studies can explore the effect of apps on students’ success and understanding. Moreover, exploring how students of different ages and learning preferences respond to various feedback mechanisms can guide the development of more adaptive and inclusive feedback tools.
Despite the limitations of our study, we believe that our findings contribute to the literature on technology-enhanced feedback and highlight the importance of teachers’ active involvement in the design and implementation of effective feedback practices using app affordances.

Declarations

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the Institutional Review Board (IRB) of the University of Missouri (project no. 2005568).
Informed consent was obtained from all individual participants included in the study and from their parents. Participants signed informed consent regarding the publication of their data and photographs.

Conflict of Interest

The authors declare no competing interests.
The authors also confirm that the content of the manuscript has not been published or submitted for publication to any other outlets.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix

Appendix 1. Information regarding the apps

QR Code Reader: The teacher created QR codes using a QR code-generating website, posted them to Schoology, and placed several printed copies in various locations in the classroom. Students used the QR Code Reader app on their iPads to scan a QR code to reach a predetermined website or document. While the app was not specifically designed for assessment, the teacher frequently used it to provide students with answer keys. In our analysis, we therefore treated QR Code Reader as a means of providing an answer key via the app.
Schoology: Schoology was created as a learning management system with a focus on education. It is accessible as both an app and a website and is compatible with various computing platforms. It allows each account (teacher or student) to enroll in any number of classes. Schoology is a platform on which a teacher can keep all documents and share them with students. Quizzes can be administered electronically using Schoology. By using the app’s discussion board feature, students can communicate and discuss course topics as a group. Students can also use Schoology to exchange private messages with the teacher and each other.
Kahoot!: Kahoot! is a website that is used to create multiple-choice educational games. Teachers either create their own multiple-choice quizzes or reuse ones created by other Kahoot! users. Afterward, they create a virtual room for students to join. Students join the room using the room number and choose their own names. After all the students have joined the room, the teacher starts the quiz. Students receive immediate feedback from the app regarding the correctness of their answers. Kahoot! times the students’ work and ranks students based on their correctness and answer speed. It also reports answer choices in terms of percentages of student responses for each question.
Nearpod: Nearpod is an app that enables teachers to share presentations on students’ mobile devices or desktop computers. This app can assess students using either multiple-choice or open-ended questions. When teachers use this app for presentation, students can view what the teacher is sharing on their iPads. When students use the app to answer questions, teachers can view all students’ responses. For multiple-choice questions, teachers receive both individual student responses and statistical information about class responses. After receiving all the responses, the teacher can select a student response and share it via the app as a good or bad example. Nearpod also allows the teacher to present her own computer screen, enabling response statistics to be shared with students.
Socrative: Socrative is an app that enables students to take quizzes on their mobile devices. Similar to Kahoot!, the teacher either creates or borrows multiple-choice quizzes. For each question, the teacher can choose to provide feedback only on the correctness of the answer or add their own detailed explanation. Based on the teacher’s choice, the app can provide immediate feedback to students. The app enables students to work at their own pace. The teacher can view students’ responses as they submit answers to each question as well as statistical information regarding the whole class. The teacher can obtain the results in three different ways: downloading an Excel document, receiving an email, or saving the file on Google Drive.
ZipGrade: ZipGrade is a grading app that helps teachers speed up the grading process. The ZipGrade website provides answer sheets for teachers. These answer sheets include spaces for students to write their names and the date and to mark their responses to multiple-choice questions. The teacher simply adds the answer key and then scans the students’ answer sheets. The app provides immediate feedback to teachers for both individual students and the whole class.
The Physics Classroom: The Physics Classroom is a website whose corresponding app is Minds on Physics. The website functions as a source for teachers and includes simulations, content information, and quizzes. It also enables students to review their knowledge. Our participant teacher chose this website for students to use to review their knowledge. Thus, we analyzed only the Physics Tutorial and The Review Session sections of the website for app affordances. The teacher used only the Physics Tutorial; therefore, we used it for the analysis of teacher practices. Both sections include a list of all the topics in physics, from which students can choose a topic to review. Physics Tutorial divides each topic into a series of lessons. Physics Tutorial first provides a short review of the lesson and presents questions. Students can view the correct responses by clicking the “See Answer” button. The Review Session provides an opportunity for students to learn based on review questions that are linked to related learning material in Physics Tutorial.

Appendix 2

Table 4
Analysis for feedback dimensions
| Meta-category | Dimension | QR Code Reader A/T | Schoology A/T | Kahoot! A/T | Nearpod A/T | Socrative A/T | ZipGrade A/T | The Physics Classroom website A/T |
|---|---|---|---|---|---|---|---|---|
| Strategies of feedback | Dialogue | 1/3 | 3/3 | 1/2 | 2/3 | 1/3 | 1/2 | 1/3 |
| | Visibility | 1/2 | 3/3 | 3/3 | 3/3 | 3/3 | 3/3 | 1/1 |
| | Appropriateness | 0/3 | 0/3 | 0/3 | 0/3 | 0/3 | 1/3 | 3/3 |
| | Learning | 3/3 | 3/3 | 1/2 | 3/3 | 2/3 | 1/2 | 3/3 |
| | Timeliness | 0/2 | 0/3 | 3/3 | 3/3 | 3/3 | 1/2 | 3/3 |
| | Clearness | 0/3 | 0/3 | 0/3 | 0/3 | 0/3 | 0/3 | 3/3 |
| | Complexity | 1/1 | 0/3 | 1/1 | 0/2 | 0/1 | 1/1 | 1/1 |
| Impact of feedback | Community | 1/1 | 3/1 | 1/3 | 2/3 | 1/2 | 1/2 | 1/2 |
| | Power | 3/3 | 3/3 | 1/1 | 2/2 | 1/3 | 1/2 | 3/3 |
| | Reflection | 1/3 | 3/3 | 1/2 | 2/3 | 2/3 | 2/3 | 2/3 |
| | Action | 1/3 | 3/3 | 1/2 | 2/3 | 2/3 | 2/3 | 2/3 |

A = affordances of applications; T = teacher practice
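The comparison in Table 4 can also be explored programmatically. The following sketch is hypothetical and not part of the study: the (A, T) pairs are transcribed from Table 4, and the interpretation of the 0–3 scale as running from unsupported to fully supported is our assumption. It lists, for each app, the dimensions in which the teacher’s practice rating exceeded the app’s affordance rating, i.e., where the teacher compensated for a feature the app did not provide.

```python
# Hypothetical sketch (not part of the study): (A, T) ratings transcribed
# from Table 4. A = app affordance, T = teacher practice; the 0-3 scale is
# assumed to run from unsupported (0) to fully supported (3).
DIMENSIONS = ["Dialogue", "Visibility", "Appropriateness", "Learning",
              "Timeliness", "Clearness", "Complexity", "Community",
              "Power", "Reflection", "Action"]

# One (affordance, teacher-practice) pair per dimension, in DIMENSIONS order.
TABLE4 = {
    "QR Code Reader":        [(1,3),(1,2),(0,3),(3,3),(0,2),(0,3),(1,1),(1,1),(3,3),(1,3),(1,3)],
    "Schoology":             [(3,3),(3,3),(0,3),(3,3),(0,3),(0,3),(0,3),(3,1),(3,3),(3,3),(3,3)],
    "Kahoot!":               [(1,2),(3,3),(0,3),(1,2),(3,3),(0,3),(1,1),(1,3),(1,1),(1,2),(1,2)],
    "Nearpod":               [(2,3),(3,3),(0,3),(3,3),(3,3),(0,3),(0,2),(2,3),(2,2),(2,3),(2,3)],
    "Socrative":             [(1,3),(3,3),(0,3),(2,3),(3,3),(0,3),(0,1),(1,2),(1,3),(2,3),(2,3)],
    "ZipGrade":              [(1,2),(3,3),(1,3),(1,2),(1,2),(0,3),(1,1),(1,2),(1,2),(2,3),(2,3)],
    "The Physics Classroom": [(1,3),(1,1),(3,3),(3,3),(3,3),(3,3),(1,1),(1,2),(3,3),(2,3),(2,3)],
}

def compensated(app: str) -> list[str]:
    """Dimensions where teacher practice was rated higher than the app affordance."""
    return [dim for dim, (a, t) in zip(DIMENSIONS, TABLE4[app]) if t > a]

for app in TABLE4:
    print(f"{app}: {', '.join(compensated(app)) or '(none)'}")
```

Run this way, the table shows, for example, that the teacher raised Clearness above the app’s own affordance for every app except The Physics Classroom (which already supported it), and that the single reverse case, where the affordance rating exceeded practice, is Schoology’s Community dimension (A = 3, T = 1).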
Literature
Brantlinger, E., Jimenez, R., Klingner, J., Pugach, M., & Richardson, V. (2005). Qualitative studies in special education. Exceptional Children, 71, 195–207.
Buckley, B. C., Gobert, J. D., Horwitz, P., & O’Dwyer, L. M. (2010). Looking inside the black box: Assessing model-based learning and inquiry in BioLogica™. International Journal of Learning Technology, 5(2), 166–190.
Creswell, J. W., & Poth, C. N. (2016). Qualitative inquiry and research design: Choosing among five approaches (4th ed.). Sage Publications.
De Nisi, A., & Kluger, A. N. (2000). Feedback effectiveness: Can 360-degree appraisals be improved? Academy of Management Executives, 14(1), 129–139.
Gilbert, L., Whitelock, D., & Gale, V. (2011). Synthesis report on assessment and feedback with technology enhancement. Southampton, UK: Electronics and Computer Science EPrints.
Glesne, C. (2006). Becoming qualitative researchers. Pearson.
Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Sage.
Hatch, J. A. (2002). Doing qualitative research in education settings. SUNY Press.
Jones, I. S., & Blankenship, D. (2014). What do you mean you never got any feedback? Research in Higher Education Journal, 24, 1–9.
Khajeloo, M., Birt, J. A., Kenderes, E. M., Siegel, M. A., Nguyen, H., Ngo, L. T., Mordhorst, B. R., & Cummings, K. (2022). Challenges and accomplishments of practicing formative assessment: A case study of college biology instructors’ classrooms. International Journal of Science and Mathematics Education, 20, 237–254. https://doi.org/10.1007/s10763-020-10149-8
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage.
Mathan, S. A., & Koedinger, K. R. (2002). An empirical assessment of comprehension fostering features in an intelligent tutoring system. Paper presented at Intelligent Tutoring Systems, Springer, Berlin, Heidelberg.
McConnell, D. (2006). E-learning groups and communities. SRHE/University Press.
Merriam, S. B. (1998). Qualitative research and case study applications in education (2nd ed.). Jossey-Bass.
Ng, E. M. W., & Lai, Y. C. (2012). An exploratory study on using Wiki to foster student teachers’ learner-centered learning and self and peer assessment. Journal of Information Technology Education: Innovations in Practice, 11, 71–84.
NGSS Lead States. (2013). Next Generation Science Standards: For States, By States. The National Academies Press.
Penuel, W. R., & Yarnall, L. (2005). Designing handheld software to support classroom assessment: Analysis of conditions for teacher adoption. The Journal of Technology, Learning and Assessment, 3(5), 4–45.
Ritchie, J., Lewis, J., & Elam, G. (2003). Designing and selecting samples. In Qualitative research practice: A guide for social science students and researchers (pp. 77–108).
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
Siegel, M. A., Hynds, P., Siciliano, M., & Nagle, B. (2006). Using rubrics to foster meaningful learning. In M. McMahon, P. Simmons, & R. Sommers (Eds.), Assessment in science: Practical experiences and education research (pp. 89–106). National Science Teachers Association Press.
Yin, R. K. (2018). Case study research: Design and methods (6th ed.). Sage.
Metadata
Title: Feedback Through Digital Application Affordances and Teacher Practice
Authors: Nilay Muslu, Marcelle A. Siegel
Publication date: 07-05-2024
Publisher: Springer Netherlands
Published in: Journal of Science Education and Technology (Print ISSN 1059-0145, Electronic ISSN 1573-1839)
DOI: https://doi.org/10.1007/s10956-024-10117-9
