01-12-2021 | Research article | Issue 1/2021 Open Access

International Journal of Educational Technology in Higher Education 1/2021

Investigating feedback implemented by instructors to support online competency-based learning (CBL): a multiple case study

Authors:
Huanhuan Wang, Ahmed Tlili, James D. Lehman, Hang Lu, Ronghuai Huang
Important notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Highlights

  • Under competency-based learning, the instructors implemented 11 types of feedback to support students in developing learning competencies.
  • Results indicated that instructors used feedback mainly to facilitate the learning process, which is effective; however, much of the feedback they used was at the self-level.
  • Feedback that helps students regulate their learning process and make emotional connections was rarely used by instructors, even though such feedback is potentially very effective in facilitating learning.
  • The patterns of feedback use were relatively consistent across learning tasks of different complexity.

Introduction

With the rapid growth of online learning in higher education (Seaman et al. 2018), concerns such as mixed effects on learning outcomes (Nguyen 2015), poor retention rates (Bawa 2016), and insufficient learner feedback (Sunar et al. 2015) have emerged. To enhance online learning, many universities have adopted competency-based learning (CBL) (Besser and Newby 2019). In CBL, instructors offer learners a variety of supports, including feedback. High-quality feedback, combined with repeated practice, can support competency development (Eppich et al. 2015) and increase interaction between learners and instructors.
However, while feedback is used intensively in online CBL, several studies have reported that learners are not satisfied with the feedback they receive (HEA 2019; Mulliner and Tucker 2017; Williams et al. 2008). Consistently, previous literature has reported that feedback can have differing effects on learning, ranging from strong to weak positive effects (Ivers et al. 2012) to neutral (Fong et al. 2019) or even negative effects (Hu et al. 2016). Feedback is crucial to competency development and is used extensively in online CBL, so the lack of consistently positive effects on learning is a concern. Additionally, feedback in the context of CBL is not well understood (Tekian et al. 2017). Therefore, it is crucial to investigate how feedback is actually used and how the practice of giving feedback may be improved.
Given the rising adoption of online learning, digital feedback data is more accessible than ever before, making it possible to analyze what feedback is implemented and how, and to identify improvement opportunities. This study aimed to examine the practice of instructors implementing feedback on learners' submitted assignments to support online CBL. The findings may help to identify and share useful strategies for feedback implementation, reveal opportunities for improving this practice, and mitigate learners' unsatisfactory perceptions of the feedback they receive.

Literature review

Competency-based learning

Competency-based learning (CBL) is conceptualized as an instructional approach that organizes learning activities in a non-linear manner so that learners can study each component of the instruction to achieve pre-defined competencies (Chang 2006). As one of the central elements of CBL, competency refers to the ability to apply learned knowledge or skills in practical situations (Henri et al. 2017), and each competency has clear learning objectives (Gervais 2016).
Under the competency-based approach, instructors define the competencies to be learned and the associated assessment methods. Learners are required to learn, perform the assigned tasks, and then demonstrate specific knowledge or skills. Instructors then assess learners' completed work to determine whether they have mastered that particular knowledge or those skills (O'Sullivan and Bruce 2014). Grades can be one of the assessment results that reflect the degree to which specific competencies are mastered (Gervais 2016). Throughout the learning process, learners receive various supports from instructors, such as feedback, hints, and prompts. This study specifically focused on the feedback instructors provide to guide learners in working on specific learning tasks and developing specific competencies.

Instructional feedback in CBL

Feedback is conceptualized as information given regarding learning performance, intended to adjust the learner's cognition, motivation, and/or behavior to improve performance (Duijnhouwer et al. 2012). Feedback is one of the most powerful factors affecting learning in a variety of instructional environments (Hattie and Timperley 2007; Hattie and Gan 2011). It can not only help learners become aware of the gaps between their desired and current knowledge, understanding, or competencies, but also support them in acquiring knowledge and competencies (Narciss 2013) and regulating their own learning process (Chou and Zou 2020). Nicol and Macfarlane-Dick (2006) also considered feedback a tool that can motivate learners.
There are two forms of feedback, namely formative and summative. Formative feedback refers to information communicated to learners to adjust their learning, thinking, or behavior (Shute 2007). It can support assessment and instruction, help to close gaps in learners' understanding, and motivate learners (Brookhart 2008). Summative feedback, on the other hand, is usually provided after summative assessments at the end of a learning module (Harrison et al. 2013). It assesses how well a student ultimately completes a learning task for grading purposes (White and Weight 2000). Summative feedback is essential for learners to understand the gaps between their performance and the ultimate learning objectives, and what they need to work on further to address their learning weaknesses (Harrison et al. 2013).
In CBL, feedback is one of the essential elements of mastery learning, as competencies are developed based on the given feedback (Guskey 2007). Learners perceive feedback as important because it intertwines with instruction (Hattie and Timperley 2007) and confirms their understanding and learning. Feedback can also help learners master specific skills, apply what they have learned, develop competencies, extend their thinking, and demonstrate achievements (Besser and Newby 2019). Under CBL, learners receive iterative formative assessments and feedback and are also provided with opportunities to try again when practicing skills (Besser and Newby 2019). Such feedback can reinforce what learners were expected to learn, confirm what they learned well, and point out where they need to spend more effort to learn better (Guskey 2007).

Varying effects of feedback and task complexity

Although feedback is crucial to support competency development, learners are not always satisfied with the feedback they receive, especially at the higher education level (Radloff 2010; Mulliner and Tucker 2017). Previous studies have also reported that feedback can have both positive and negative effects on learning (Hattie and Timperley 2007; Kluger and DeNisi 1996). Some types of feedback are powerful because they provide information about tasks and help learners perform more effectively. In contrast, feedback such as praise and rewards is less effective since it does not provide useful information (Hattie and Timperley 2007).
The impact of feedback can be influenced by task complexity, which refers to the extent to which a task is easy or difficult to perform (van de Ridder et al. 2015). Interacting with feedback, task complexity can affect decision accuracy (Zhai and Gao 2018) and outcomes. For tasks with low complexity, feedback may increase performance and be more effective (Hattie and Timperley 2007; van de Ridder et al. 2015). For complex tasks, feedback tends to deplete the resources needed for task performance (Kluger and DeNisi 1996). Zhai and Gao (2018) further found that for tasks of different complexity, the quantity of feedback affects outcomes in different ways: for complex tasks, giving some but not too much feedback is helpful, whereas for non-complex tasks, giving more feedback is helpful (Mascha and Smedley 2007).

Research gaps and questions

Although several studies have validated the potential positive effects of feedback, the 2019 National Student Survey conducted in the UK indicated that learners are not always satisfied with the feedback they receive. This might be because the feedback is not specific enough to be helpful or is not delivered in a timely manner (OfS 2019), reflecting poor-quality feedback implementation. Therefore, examining the practices of feedback implementation can help to identify solutions that mitigate learners' dissatisfaction. However, few studies have specifically investigated this practice in support of online CBL. In particular, exploration of the types and effectiveness-related features of the feedback implemented in online CBL is still in its infancy. Recently, Besser and Newby (2019, 2020) identified several types of feedback used in online CBL. For example, they mentioned that feedback was used to confirm or deny learners' performance, describe the requirements and criteria of learning tasks, foster interaction between instructors and learners, point out the gaps between performance and goals, help learners self-regulate their learning, and help learners transfer skills. However, that work did not provide an in-depth analysis of the feedback's effectiveness-related features or of how feedback was implemented across different learning tasks, calling for further investigation.
Based on the research gaps described above, this study further examined how instructors implemented feedback in online CBL. Investigating how feedback is implemented is worthwhile given the potentially powerful effects of feedback on learning and the learner-perceived dissatisfaction reported in different studies. Targeting these gaps, this study sought to answer the following research questions.
  • RQ1. What types of feedback are implemented by instructors to support online CBL?
  • RQ2. What are the effectiveness-related features of the feedback implemented by instructors to support online CBL?
  • RQ3. How does feedback implementation vary across learning tasks with different complexity?

Theoretical framework used to examine feedback implementation

Hattie and Timperley (2007) proposed a feedback model that aims to identify the properties and circumstances that make feedback effective for better learning outcomes. This model is appropriate for analyzing the effectiveness of the feedback implemented in CBL, the context of this study, because feedback is used intensively in CBL to promote mastery learning (Besser and Newby 2019), and the effectiveness of feedback is particularly critical for the success of mastery learning (Hattie and Timperley 2007). Therefore, this model was used as the analysis framework of this study. According to the model, feedback at the self-level provides a personal evaluation of the learner and is the least effective, since its content is unrelated to the learning task. Feedback at the regulation level focuses on helping learners develop self-evaluation skills or confidence; it can be effective because it encourages learners to continue learning by affecting their self-efficacy, self-regulatory proficiency, and self-beliefs. Feedback at the process level focuses on the process of completing a task and can be effective by helping learners process information. Feedback at the task level indicates how well a task is being accomplished; it is effective only when the task information is subsequently useful for processing information or enhancing self-regulation, and is therefore conditionally effective.

Methodology

Study design

An exploratory multiple case study approach (Yin 2017) was used in this study. This approach makes it possible to explore the types and features of the implemented feedback by analyzing feedback texts. Case comparisons of the feedback used for different learning tasks can also show how feedback is used across learning tasks of different complexity.

Study context and data source

An undergraduate course titled "Introduction to educational technology," offered by a large R1 university in the U.S. during the 2019 spring semester, was chosen as the study context. Fifty learners participated in this course and were divided into four groups, each guided by one instructor. The course used a blended learning approach, which combines in-person and online instructional activities so that the mode of instruction can be flexible (Boelens et al. 2018). The course included face-to-face lectures, labs, and online modules delivered via a digital badge-based learning management system (Badge LMS). This Badge LMS was a comprehensive digital learning platform with functions such as delivering learning resources (e.g., filmed presentations, video tutorials, textual reading materials, multimedia courseware), submitting assignments, and receiving feedback. Learners could access the different parts of a learning task and then submit their completed assignments. Instructors could view the learners' submissions and provide feedback via the Badge LMS, and the system allowed learners to revise and resubmit their assignments accordingly. Once the submitted assignments met the final learning criteria specified by the instructors, learners received a digital badge.
A competency-based approach was followed in this course. Learners were required to complete a set of learning tasks throughout the semester on the Badge LMS. For each learning task, learners first obtained basic instructions from in-person lectures. They then studied the online tutorials, completed the assigned weekly work, and submitted it via the Badge LMS. The instructor reviewed the submitted work and evaluated it against assessment rubrics. Depending on the quality of the submitted work, the instructor provided individualized feedback to help learners improve their work until they demonstrated mastery of the competencies associated with each learning task. Learners were rewarded with a digital badge once the assessment result reached the specified learning criterion. Four lab instructors posted their feedback to learners via the Badge LMS. The feedback texts were downloaded from the Badge LMS as an MS Excel file and served as the raw data.

Case description and sampling method

The Excel file contained feedback for a set of learning tasks (i.e., challenges) required to develop 11 competencies. These competencies focused on lesson planning and integrating technologies in teaching and learning; the competencies and tasks are detailed in Table 1, and the structure of the dataset is described in Table 2 (a minimal sketch of loading and filtering such a dataset follows Table 2). Tasks in quiz format focused on remembering information, unlike the competency-focused learning tasks, in which learners had to complete the given work and demonstrate specific competencies (see Table 1). Since single- and multiple-choice quizzes were graded automatically by the platform (i.e., limited feedback was provided), they were excluded from this study. As a result, 36 regular tasks remained, each of which was viewed as a case. Due to the large quantity of feedback texts to analyze, we decided to select and analyze specific cases that together represented the full set of learning tasks.
Table 1
The complexity of learning tasks associated with competencies
| Competency | Associated learning tasks (assigned points per syllabus) |
| --- | --- |
| 1. Basic course tool | Blogs as reflective writing tools (3); Screencast type demonstration tools (3); Google Docs as a collaborative writing tool (3); Cloud storage and sharing tools (3) |
| 2. Digital literacy | Skills of the twenty-first century (2); Teaching in the twenty-first century (4); Developing a workshop about digital literacy and twenty-first century skills (4) |
| 3. Writing objectives | Defining performance (3); Defining condition (3); Defining criterion (3); Writing lesson objectives (5) |
| 4. Presentation Tools | Designing your presentation (4); Creating your presentation (5); Value of developing a presentation (3) |
| 5. Assessment and Interaction Tools | Polls, forms and surveys (4); Study and Quiz Tools (3); Assessments as learning tools (3) |
| 6. Mobile Learning Tools | Video production with mobile devices (4); Virtual reality with mobile learning (3); Types of mobile learning apps (3) |
| 7. Website Development Tools | Designing your website (4); Creating your website (5); Value of developing a website (3) |
| 8. Interactive Module | IM background and content planning (10); Planning for technology integration (10); IM drafting and feedback (10); IM final version (30) |
| 9. Video Production Tools | Designing your video (4); Creating your video (7); Value of developing and editing a video (3) |
| 10. Scratch | Introduction to Scratch (null); Scratch as a community (null); Creating your own Scratch project (null) |
| 11. Information Literacy and Research Tools | Accessing journal research articles (3); Using and evaluating the accessed information (5); Copyright and fair use (4); Accessing, organizing, and storing info (5); Reflecting on research tools (5) |
Table 2
The structure of the raw dataset
| Field | Example value |
| --- | --- |
| Challenge (task) Id | 17589 |
| Challenge Name | Creating your video |
| Submission Status | 3 |
| Awarded Value | Null |
| Awarded Points | 0 |
| Maximum Points | 7 |
| Instructor Comments | "WA, [student name], really great start!!! Do you have 2 effects? If you do have, please guide me to find it. For your credits, please give the source information, especially the YouTube video, tell us who the author is and where you get it (URL)" |
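To illustrate how a dataset with the structure of Table 2 can be screened, the following is a minimal sketch in Python using pandas. The file name is an illustrative assumption about the real Badge LMS export, the column labels are taken from Table 2, and dropping rows without instructor comments stands in for the study's exclusion of auto-graded quizzes.

```python
import pandas as pd

# Load the raw Badge LMS export; the file name is illustrative.
df = pd.read_excel("badge_lms_feedback_export.xlsx")

# Auto-graded quizzes carried little or no textual feedback, so keep only
# rows with a non-empty instructor comment (column name as in Table 2).
feedback = df.dropna(subset=["Instructor Comments"])

print(feedback["Challenge Name"].nunique(), "tasks with instructor feedback")
```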
A critical case sampling method was used, which is a process of selecting a small number of critical cases that can provide rich information and have the greatest impact on discovering knowledge (Patton 2015). This method makes it possible to select cases with particular features of a population of interest when resources are limited (Patton 2015). In applying this method, we considered the complexity and nature of the learning tasks, which influence how difficult it is to develop the competencies. Task complexity was operationalized as the maximum points assigned to each task in the syllabus (i.e., 2, 3, 4, 5, 7, 10, or 30 points). Task complexity in terms of assigned points is listed in Table 1.
Based on their complexity, the learning tasks fell into roughly three types. The first type was relatively simple and required learners to spend relatively little effort; tasks assigned 2, 3, or 4 points fell into this category. The second type had an intermediate level of complexity and included tasks assigned 5 or 7 points. The third type was the most complex and required learners to spend considerable effort; it included tasks assigned 10 or 30 points. The core competency in the course was to design an online learning module. In this study, we selected tasks that were crucial steps in the course design process (the course objective) and that generated rich feedback data.
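As a minimal sketch of this operationalization, the following Python function bins a task's maximum syllabus points into the three complexity levels described above; the cut-offs follow the groupings in the text, and the example tasks and points come from Table 1.

```python
def complexity(points: int) -> str:
    """Bin a task's maximum syllabus points into the study's three
    complexity levels: 2-4 low, 5-7 medium, 10-30 high."""
    if points <= 4:
        return "low"
    if points <= 7:
        return "medium"
    return "high"

# The three selected critical cases and their assigned points (Table 1).
for task, pts in [("Defining performance", 3),
                  ("Creating your video", 7),
                  ("IM drafting and feedback", 10)]:
    print(f"{task}: {pts} points -> {complexity(pts)} complexity")
```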
Within the category of low complexity tasks, the one titled "Defining Performance (3 points)" was selected since it is the first task for writing learning objectives. This task was one of the core components in the early design process. To complete it, the learners needed to write a performance component for a learning objective. Within the second category with intermediate complexity, we chose "Creating your video (7 points)" since past teaching experiences indicated that this task was vital for course content development. Learners received lots of feedback and made revisions to complete the work accordingly. Particularly, in this task, the learners needed to use video production tools to produce a video focusing on an instructional theme. Within the third category with high complexity, we selected the task "IM drafting and feedback (10 points)" since this is the last task, which significantly shaped the learners' final product based on the given feedback. This task required the learners to create a lesson plan for an online learning module.
In sum, the three tasks, namely "Defining Performance," "Creating your video," and "IM drafting and feedback" were selected as three cases for analysis. These three learning tasks represented critical cases when considering task complexity and the vital steps in achieving the primary goal of the course. All 1551 pieces of the feedback texts associated with these three tasks (884 for the task "Defining performance," 307 for the task "Creating your video," and 360 for the task "IM drafting and feedback") were coded, which generated 17,266 coded references for analysis.

Data analysis

The sample dataset was split into three files corresponding to the chosen learning tasks. These three documents were then imported into QSR NVivo 12. Coding was conducted at the thematic level, during which each feedback message was broken down into meaningful chunks. Two experienced coders coded the data simultaneously. A constant comparison analysis technique was applied, which includes three consecutive phases. First, the feedback text was coded into small chunks, each of which was given one or more descriptors (codes) indicating the basic feature of the feedback. Second, the generated codes were grouped into 11 clusters based on the similarity between the types of feedback. Third, the generated clusters were further integrated so that the effectiveness-related features of feedback were summarized into five levels (see the coding schema in Appendix 1). Two researchers used the mixed card sorting method (Wood and Wood 2008) to classify the obtained feedback clusters according to Hattie and Timperley's (2007) feedback model. Through discussion, the coders reached full agreement (100%) on the feedback classification.
The quantity of each type of feedback was calculated as the total number of coded references. Qualitative and quantitative comparisons of the implemented feedback across the three tasks were then conducted on the types, features, and quantity of the feedback.
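As a minimal sketch of this quantification step, the snippet below counts coded references per feedback type and task and computes each type's share. The list-of-pairs representation of the NVivo coding output is an assumption for illustration; the paper does not specify the export format.

```python
from collections import Counter

# Illustrative stand-in for the NVivo coding output: one (task, type)
# pair per coded reference (17,266 pairs in the full dataset).
coded_refs = [
    ("Defining performance", "F1"),
    ("Defining performance", "F6"),
    ("Creating your video", "F3"),
    # ...
]

counts = Counter(coded_refs)                      # references per (task, type)
totals = Counter(task for task, _ in coded_refs)  # references per task
for (task, ftype), n in sorted(counts.items()):
    print(f"{task} / {ftype}: {n} ({n / totals[task]:.2%})")
```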

Findings

For each of the three selected learning tasks (Task), Table 3 summarizes the level of task complexity (Complexity), the total number of times learners submitted responses to the task (submission number), the total number of times instructors provided feedback for the task (feedback number), the ratio of the quantity of feedback to the quantity of submissions (Ratio), and the quantity of coded references in the feedback texts (CR). In total, 1551 pieces of feedback, provided in response to 1830 submissions, were coded, generating 17,266 coded references for analysis. Among the coded feedback, 1341 pieces were formative (CR_Form) and 517 pieces were summative (CR_Sum). (A minimal sketch reproducing the Ratio column follows Table 3.)
Table 3
The basic statistics of the feedback texts
| Task name | Complexity | Submissions | Feedback messages | Ratio (%) | Coded references | CR_Form | CR_Sum |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Defining performance | Low | 981 | 884 | 90.11 | 9185 | 676 | 220 |
| Creating your video | Medium | 324 | 307 | 94.75 | 5152 | 304 | 147 |
| IM drafting and feedback | High | 525 | 360 | 68.57 | 2929 | 361 | 150 |
| Summed total | N/A | 1830 | 1551 | 84.75 | 17,266 | 1341 | 517 |
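The Ratio column above is simply the feedback count divided by the submission count; a minimal check in Python, with the values transcribed from Table 3:

```python
# Reproduce the Ratio column of Table 3: feedback messages per submission.
rows = {
    "Defining performance": (981, 884),
    "Creating your video": (324, 307),
    "IM drafting and feedback": (525, 360),
    "Summed total": (1830, 1551),
}
for task, (submissions, feedback) in rows.items():
    print(f"{task}: {feedback / submissions:.2%}")  # e.g. 884/981 -> 90.11%
```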

What types of feedback are implemented by instructors to support online CBL?

The results indicated that 11 types of feedback were implemented (see Fig. 1). Each type of feedback is described below.

Diagnostic feedback (F1)

When learners' submissions did not reach the learning criterion specified by the instructors, diagnostic feedback was used to provide assessment results or diagnosis and illustrate the gaps between the current and the desired performance. For instance, "… right now, the points for the current cards will be 9.5/10." Such gaps were further explained by pointing out problems identified in the submitted work. Examples of these problems are given in Table 4.
Table 4
Types of problems identified in diagnostic feedback
| Type of problem | Quoted feedback example |
| --- | --- |
| Concerns | "I think your content is good, but I'm a bit concerned with the interactivity of your module." |
| Misunderstandings learners had | "Just keep in mind that 'assessment' doesn't necessarily mean 'quiz' or 'test,' so as you move forward if you decide you want to include something else or take out a quiz or whatever, that's completely fine." |
| Erroneous elements | "Remember, objectives focus on what the learner will do, not what the instructor will do…" |
| Missing pieces | "There is one piece missing from your form 'fuzzy and specific…'" |
| Unnecessary parts | "Please just keep the performance part, no need to include a criterion for this challenge." |
| Technical issues that stop evaluation | "[Badge LMS name] doesn't like to display tables well, and so it's blocking me from seeing anything that you've submitted." |
| Parts that need revision | "The A(audience), B(behavior), C(condition) components (of learning objectives) are covert, so if that's what you're submitting, then it needs revision." |
| Work that is confusing | "There is a little back and forth zooming with the first picture that I don't quite understand." |

Feedback for justification (F2)

This type of feedback was used to explain the instructors' judgments, for example, telling learners why a submission was judged erroneous: "this lesson (you designed) sounds really good for a face-to-face class, but this should be purely an online interactive module." Some instructors also explained why a specific requirement was given, such as "can you give me only one theme-related statement? The reason I ask is because some of these statements are good, while others need some work…" Instructors also shared their personal understanding of the submission before giving a suggestion: "I think your performance statement includes two domains, comprehension, and analysis. Am I correct?" Then, this instructor suggested, "… consider breaking your performance statement into two objectives for your learners."

Feedback for improvement (F3)

This type of feedback provided suggestions to help learners improve their work. These suggestions included correct examples demonstrating how the work should be done, negative examples to be avoided, specific steps guiding learners through completing the work, and directions to learning resources. Indirect suggestions were also provided, such as cueing and recommending more effective information searches. Feedback was also used to guide learners to review the task requirements so as to orient their efforts in the right direction. Quoted examples of suggestions are presented in Table 5.
Table 5
Types of suggestions for improving the work
| Type of suggestion | Quoted feedback example |
| --- | --- |
| Correct examples | "You could transform your question into an objective, by saying something like, 'Students will be able to identify (by circling (or whatever you might choose as a method) good practices for a good learning environment.'" |
| Negative examples | "When write performance statement, you should … Nouns like "evaluation, analysis" should not be used." |
| Steps to guide learners to complete the work | "My suggestion is to re-consider these three items highlighted … Then, review the document (entitled [file name]), and re-do for that part." |
| Directions to resources | "Review the document '[file name]' (you can download it from LMS)." |
| Cueing leading to more effective information search | "Think about making your behavior more specific—what is 'fractional use?' Are they solving a problem or applying a concept? Think about this moving forward and keep in mind behavioral objectives." |
| Guide learners to review task requirements | "Make sure you review the final requirements for the module so you can be sure you have everything there." |

Feedback as complementary teaching (F4)

Instructors used this type of feedback to explain key concepts and clarify learning tasks, which can help learners complete the tasks. For example, "For a performance statement, it's what you want your learners to do by the end of the lesson." Some of this feedback also directed learners to apply the learned skills. For example, "this is an excellent video … For the website [an upcoming learning task], if you want to use this, you could clip a part of the video."

Motivational feedback (F5)

This type of feedback was used to motivate learners. Sometimes instructors encouraged learners directly; at other times they motivated learners in indirect ways, for example, by highlighting the value of the learning tasks, providing normative referential information about the assessment results, showing positive expectations toward learners' upcoming work, or clarifying the goals of tasks. Quoted examples of the ways of motivating learners are presented in Table 6.
Table 6
Types of ways to motivate learners
| Way of motivating | Quoted feedback example |
| --- | --- |
| Directly encourage learners | "Keep up the good work!" |
| Highlight the value of the learning tasks | "These kinds of videos are so helpful in math classes because students often need something demonstrated multiple times to make it work." |
| Provide normative referential information | "Most students have needed 3–4 rounds of revisions to get this perfect, so this is pretty good for a second shot." |
| Show positive expectation | "So, I really want you to do well on this. This seems like it's going to be really good!" |
| Clarify the goals of tasks | "Don't worry about being 'denied' at this point. We will give you a chance to resubmit this challenge again once we complete the peer review in class on Wednesday." |

Feedback as praise (F6)

This type of feedback was used as praise, which provides little useful information related to specific learning tasks. Examples include "Good," "perfect," and "well done." Although this type of feedback is not viewed as effective, it was frequently used in this course.

Feedback for enhancing time management (F7)

This type of feedback was used to help learners with time management, for example, by flagging timing issues related to submission: "Because this is so late getting submitted, I can't accept it. If you had gotten these in last week when you said you were going to, I could have made an exception, but there are always going to be deadlines in life, so it's important to try and stick with them as much as possible." Instructors also used this type of feedback to push learners to submit their work as soon as possible ("Make sure you get this badge finished by midnight tonight…") or to remind learners that they still had time to improve their work further ("thanks for the early submission… However, it is not the required thing. We will talk more about this later this semester!").

Connective feedback (F8)

Instructors made diverse connections between different learning tasks and instruction through feedback. For example, they connected current tasks with upcoming tasks, with knowledge and skills learned previously, and with future scenarios where the learned skills can be applied. The connections identified in the feedback are presented in Table 7.
Table 7
Types of connections built using feedback
| Type of connection | Quoted feedback example |
| --- | --- |
| Connect current tasks with upcoming tasks | "It is clear with this verb what you will ask students to do and how you will know if they can do it. We'll talk more about this today in [the] lab." |
| Connect current tasks with the learned skills | "Think about all the tools you've learned and how you can integrate as many of them as possible to help execute your great ideas!" |
| Connect current learning content to future scenarios where the learned skills can be applied | "Remember this [course] eventually will all be online, so you might want to think about revising these now so you can just use them later in your lesson plan." |

Feedback to encourage the use of feedback (F9)

Instructors used this type of feedback to encourage learners to use feedback, for example, by explaining the feedback: "sorry that my earlier feedback was not very clear. What I hoped to emphasize is the potential for video to 'show' learners about something …" This feedback can also encourage learners to make use of feedback and iteratively improve their work, for example, "we can go back and forth (revise, resubmit, evaluate, and give feedback) a couple times until it's perfect!" Finally, feedback was used to direct learners to attached feedback in the form of documents or images, for example, "I indicated this missing part in the attached image."

Feedback to foster communication and help-seeking (F10)

This type of feedback was used to encourage learners to communicate with instructors and actively seek help. For example, "Email me [the instructor's email] directly if you have any questions…" or "…if you want to talk about it more in [the] lab on Wednesday, we can certainly do that. And if you have any other questions or concerns, please let me know." By using these methods, instructors explicitly reminded learners to seek help actively.

Emotional feedback (F11)

Emotional feedback was also used by instructors to express different emotions and thus strengthen the connections between learners and instructors. These emotions included appreciation of what the learners had done, sympathy, and humor to help learners mitigate anxiety; instructors also used emojis to express positive feelings, as shown in Table 8.
Table 8
Ways of expressing emotions in the feedback
| Emotion expressed | Quoted feedback example |
| --- | --- |
| Appreciation of the good work | "My favorite part is the last section that you showed learners how to set a reachable and specific goal." |
| Appreciation of the efforts | "You've put a lot of time into this and come up with some great material." |
| Sympathy | "I'm sorry you experienced the crash when you used Camtasia to create your video." |
| Humor | "Don't worry. Your computer does not hate you. Try and if you finish, upload it." |
| Emotions indicated by emoji | ":)", ":D", ":-)", etc. |

What are the effectiveness-related features of the feedback implemented by instructors to support online CBL?

The effectiveness-related features of the feedback were identified by classifying the types of feedback into different levels, according to Hattie and Timperley's (2007) feedback model. Based on the total number of coded references of each type, approximately 36% of all the implemented feedback, comprising feedback for improvement (F3), complementary teaching (F4), connective feedback (F8), and feedback to encourage the use of feedback (F9), worked at the process level and focused on facilitating the process of completing learning tasks. This largest cluster of feedback was effective in helping learners process task information.
The second-largest cluster, about 29% of all the implemented feedback, included feedback for diagnosing (F1) and justifying (F2) the given evaluation. Working at the task level, this cluster was conditionally effective: its power depends on whether the subsequent task information is useful for improving strategy processing or enhancing self-regulation.
The third-largest group, about 28% of all the implemented feedback, worked at the self-level, mainly feedback as praise (F6). These feedback messages did not provide useful information related to the learning tasks and were thus viewed as the least effective.
About 7% of the implemented feedback focused on enhancing learners' self-regulation, including feedback to enhance motivation (F5), time management (F7), and communication and help-seeking (F10). Finally, the instructors spent minimal effort on emotional feedback (F11), which accounted for less than 1% of all the implemented feedback. Although feedback at the regulation and emotional levels can be very effective, the instructors did not use it much in this course.
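These percentages can be reproduced from the per-task counts in Table 9. The sketch below maps each feedback type to its level per the coding schema in Appendix 1 and aggregates the coded references summed across the three tasks; the counts are transcribed from Table 9, while the dictionary representation is an illustrative assumption.

```python
# Level of each feedback type per the coding schema (Appendix 1).
LEVEL = {"F1": "task", "F2": "task",
         "F3": "process", "F4": "process", "F8": "process", "F9": "process",
         "F5": "regulation", "F7": "regulation", "F10": "regulation",
         "F6": "self", "F11": "emotional"}

# Coded references per type, summed across the three tasks (Table 9).
QCR = {"F1": 989, "F2": 96, "F3": 1119, "F4": 32, "F5": 166, "F6": 1062,
       "F7": 25, "F8": 130, "F9": 84, "F10": 69, "F11": 17}

total = sum(QCR.values())  # 3789 coded references across the three cases
by_level = {}
for ftype, n in QCR.items():
    by_level[LEVEL[ftype]] = by_level.get(LEVEL[ftype], 0) + n

for level, n in sorted(by_level.items(), key=lambda kv: -kv[1]):
    print(f"{level}: {n / total:.0%}")  # process 36%, task 29%, self 28%, ...
```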

How does feedback implementation vary across learning tasks with different levels of complexity?

For each learning task, the quantity of coded references (QCR) for each type of feedback was calculated and organized in Table 9. We also calculated the ratio (type-all-ratio) of the coded references of each type of feedback to the summed coded references of all types of feedback. For each learning task, we marked the three most used types of feedback with "↑" and the three least used types with "↓", as shown in Table 9 (a minimal sketch of this ranking follows the table).
Table 9
The quantity and percentage of each type of feedback grouped by learning tasks
| Type of feedback | Task 1 (Low): QCR (type-all-ratio) | Task 2 (Medium): QCR (type-all-ratio) | Task 3 (High): QCR (type-all-ratio) |
| --- | --- | --- | --- |
| Diagnostic feedback (F1) | 475 (28.89% ↑) | 293 (30.75% ↑) | 221 (18.54% ↑) |
| Feedback for justification (F2) | 60 (3.65%) | 21 (2.20%) | 15 (1.26% ↓) |
| Feedback for improvement (F3) | 477 (29.01% ↑) | 230 (24.13% ↑) | 412 (34.56% ↑) |
| Complementary teaching (F4) | 23 (1.40%) | 2 (0.21% ↓) | 7 (0.59% ↓) |
| Motivational feedback (F5) | 74 (4.50%) | 30 (3.15%) | 62 (5.20%) |
| Praise (F6) | 434 (26.40% ↑) | 321 (33.68% ↑) | 307 (25.76% ↑) |
| Enhancing time management (F7) | 1 (0.06% ↓) | 4 (0.42% ↓) | 20 (1.68%) |
| Connective feedback (F8) | 28 (1.70%) | 31 (3.25%) | 71 (5.96%) |
| Encourage use of feedback (F9) | 51 (3.10%) | 3 (0.31% ↓) | 30 (2.52%) |
| Foster communication and help-seeking (F10) | 12 (0.73% ↓) | 12 (1.26%) | 45 (3.78%) |
| Emotional feedback (F11) | 9 (0.55% ↓) | 6 (0.63%) | 2 (0.17% ↓) |
| Summed total | 1644 (100%) | 953 (100%) | 1192 (100%) |
Note: "QCR" refers to the quantity of coded references. "Type-all-ratio" refers to the ratio of the coded references of each type of feedback to the summed coded references of all types of feedback for that learning task. "↑" marks the three most used and "↓" the three least used types of feedback for each task.
Consistent patterns of feedback use across the learning tasks were observed. Feedback for diagnosis (F1), feedback for improvement (F3), and feedback as praise (F6) were consistently and frequently used across the three learning tasks of low, medium, and high complexity. In contrast, feedback for complementary teaching (F4), time management (F7), and emotional feedback (F11) were used much less across the three learning tasks.

Discussions and recommendations

This study aimed to examine the practice of feedback implementation in online CBL. The results indicated that the instructors implemented 11 types of feedback, showing that feedback can be used to facilitate online CBL along a variety of dimensions and help close learning achievement gaps (Guskey 2007). This finding is consistent with Besser and Newby's (2020) report that instructors used different types of feedback to support online CBL, such as feedback focused on outcomes, motivation, interaction, clarification, extension, closing learning gaps, learning transfer, and regulation. Such diverse feedback is beneficial for mitigating learner-perceived dissatisfaction with the feedback experience, since learners prefer all types of feedback (Besser and Newby 2020).
Some types of feedback rarely mentioned in the previous literature were observed in this study. These included feedback for connecting current learning tasks with other learning tasks, instructional modules, and assessment (F8), and feedback for guiding learners to use feedback (F9). Connective feedback may have been implemented because the dominant instructional strategy in this course was CBL, which required learners to complete a set of interrelated learning tasks. In a typical CBL model, learning is structured horizontally to integrate the learned content across the curriculum and vertically to help learners master the course content in depth (Gervais 2016). Thus, making broad connections among learning tasks, instruction, and assessments via feedback helps learners achieve meaningful learning. Connective feedback can also help learners navigate the blended learning format by connecting current learning tasks with different instructional modules (e.g., the online module and the face-to-face module), helping learners make sense of the assessment results.
Feedback that encourages learners to use feedback may also be a special feature of online CBL. CBL promotes persistence, encouraging learners to keep trying until they succeed (Bloom 1980). Thus, instructors may give several iterations of feedback and supervise learners in making use of prior assessments and feedback to facilitate their learning. The online learning context might be another reason for this type of feedback. The online learning environment is largely self-driven, and if learners are not comfortable with self-directed learning, this kind of environment can be overwhelming (Bawa 2016). Thus, instructors tried to help learners by highlighting the essential parts, such as reminding them to use important feedback.
Feedback at the process level can be effective, and it was indeed used extensively in this course. However, self-level feedback (i.e., praise), which is viewed as the least effective (Hattie and Timperley 2007), was also extensively implemented. Praise can have a mixed pattern of effects on learning. For example, Cimpian et al. (2007) found that praise worded in person terms (e.g., "you are a good drawer") can cause learners to denigrate their skills, feel unhappy, avoid their mistakes, and quit the tasks. In contrast, learners who were told that they did a good job tended to use strategies to correct their mistakes and persist with the task. Therefore, online instructors must be cautious when they use feedback as praise because of these potential risks. Feedback for diagnosis and justification was frequently used, but such feedback is only conditionally effective. To determine its effectiveness, further investigations linking this type of feedback to the task information provided subsequently are needed. Online instructors must be aware that giving such feedback alone is not enough; more effort should be made to connect it with the task information provided subsequently.
Finally, instructors did not give much feedback for enhancing self-regulation, nor much emotional feedback. However, these types of feedback can be very beneficial since they can help mitigate issues associated with the online context, for example, learner-perceived isolation and instructors' limited understanding of learners (Bawa 2016; Hung and Chou 2015). The limited amount of this feedback might have been caused by the intensive workload instructors faced in grading the work and delivering personalized feedback to a large group of learners, or simply because instructors were not aware of the value of such feedback. Considering its potential benefits, it may be helpful to increase instructors' awareness of the value of regulative and emotional feedback so that they apply it more often.
Case comparisons further indicated that there were consistent patterns in the types of feedback used. Feedback for diagnosing (F1), suggestions for improvement (F3), and praise (F6) were consistently and frequently used. This may be because a comprehensive assessment system is an essential feature of CBL (Gervais 2016), and the implementation of diagnostic feedback (F1) and feedback for improvement (F3) aligns well with this feature by providing detailed assessment information. In contrast, feedback for complementary teaching (F4) and emotional feedback (F11) were consistently used less across the different learning tasks. However, previous studies have indicated that feedback has differing effects for tasks with different complexity (Kluger and DeNisi 1996; Zhai and Gao 2018). Thus, feedback should be used in a personalized way according to task complexity, yet we did not observe significant differences in feedback use across tasks. The difference between this study's findings and previous studies might be caused by the competency-based learning approach or by instructors not having enough time to personalize their feedback. We suggest that online instructors consider using feedback in a more customized way for learning tasks of different complexity, since information processing differs across tasks of different complexity (Zandvakili et al. 2018). For example, instructors can provide more regulative feedback to help learners manage the process, and can control the overall quantity of feedback, for complex tasks.

Conclusions, implications, and limitations

This study investigated the practice of feedback implementation in online CBL. We found that instructors used eleven types of feedback to support online learners. The effectiveness-related features and the quantity of the implemented feedback were identified, which indicated the strengths and limitations of the feedback implementation practice and provided suggestions for its improvement. The case comparison further highlighted the most basic feedback needed in online CBL. While regulative feedback and emotional feedback can be very effective, they were rarely used. Feedback should also be used in a customized way, but this was not the case in this study, which calls for more attention.
The findings of this study can enhance competency-based learning (CBL) by providing suggestions and recommendations for instructors about how to present feedback for learning and which types of feedback to provide. This can lead to better learning experiences and outcomes, as well as mitigate learners' unsatisfactory perceptions of the feedback they receive. Specifically, this study suggests that the effectiveness-related features of feedback must be considered when providing feedback. In addition to feedback at the task level, feedback at the regulation and emotional levels should also be used, since it can be very effective. Moreover, the nature and complexity of learning tasks are also critical for feedback implementation. Therefore, instructors should provide customized feedback based on a comprehensive consideration of the learning task attributes and real-time learning performance. For example, for learning tasks with high complexity, it is necessary to control the quantity of feedback delivered in scenarios of low learning performance to avoid potential cognitive overload.
Based on the qualitative coding work, the feedback texts processed and labeled with features can be used for text classification and feature identification, enabling feedback strategies to be mined from large-scale feedback datasets in the future. Such future work may help mitigate the limitation that only a small number of cases were investigated in this study.
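As a minimal sketch of such future work, assuming the manually coded texts are available as (text, level) pairs, a simple baseline classifier could look like the following. scikit-learn with TF-IDF features and logistic regression is one illustrative choice, not something the study prescribes; the three training examples are quotes from the findings above, labeled per the coding schema.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: feedback texts paired with the level
# assigned during manual coding (in practice, thousands of labeled chunks).
texts = [
    "Keep up the good work!",
    "Remember, objectives focus on what the learner will do.",
    "Email me directly if you have any questions.",
]
labels = ["regulation", "task", "regulation"]

# TF-IDF features feeding a logistic-regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Well done!"]))  # predicts a level for unseen feedback
```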

Appendix 1. Coding schema

| Level | Cluster | Initial descriptor | Explanation |
| --- | --- | --- | --- |
| Task level | Diagnostic feedback (F1) | Gaps | Indicate the gaps between current performance and the desired performance |
| | | Confirm_Good | Confirm students' good work |
| | | Concern | Show concerns about the work |
| | | Misunderstanding | Point out the misunderstandings students have about the tasks |
| | | Error | Point out erroneous elements under specific conditions |
| | | Missing | Point out the missing parts or components |
| | | Unnecessary | Point out the unnecessary parts |
| | | Technical_issue | Point out technical issues that impeded evaluation |
| | | Revision | Point out the parts that need revision |
| | | Confusing | Clarify the confusing work |
| | Feedback for justification (F2) | Reasons | Provide reasons for the judgment or requirements |
| | | Explain_Err | Explain why the response is erroneous |
| | | Personal_understanding | Share the instructor's personal understanding of specific learning tasks |
| Process level | Feedback for improvement (F3) | Correct_Err | Correct the erroneous elements |
| | | Suggest_improve | Provide suggestions about how to improve the work |
| | | Tools_improve | Suggest tools for students to improve the work |
| | | Evaluate_suggest | Evaluate suggestions |
| | | Task_requirement | Paraphrase or directly show the task requirements |
| | | Correct_example | Provide correct examples or demonstrations |
| | | Negative_example | Provide a negative example that should be avoided |
| | | Steps | Show students the next steps to finish the work |
| | | Principle | Prompt the principles |
| | | Resources | Direct students to find or use supportive learning resources |
| | | Cueing | Cueing leading to more effective information search |
| | | Example_improvement | Provide examples of how to improve the work further |
| | Feedback as complementary teaching (F4) | Explain | Explain the key terms, key concepts, or critical parts of the learning tasks |
| | | Reinforce | Reinforce the key concepts learned |
| | | Apply | Direct students to apply the learned skills |
| | Feedback to encourage the use of feedback (F9) | Promote_FB | Encourage the use of feedback and revision |
| | | Explain_FB | Explain the feedback |
| | | FB_attach | Provide feedback as an attached document |
| | | FB_image | Provide feedback as an image |
| | | Other_support | Offer other types of support |
| | Connective feedback (F8) | Connect_other_task | Connect current learning tasks to other learning tasks |
| | | Connect_previous | Connect feedback to previously learned content |
| | | Connect_future_work | Connect feedback to future work |
| | | Connect_future_instruct | Connect feedback to future instruction |
| | | Connect_assess | Connect feedback with assessments |
| Regulation level | Motivational feedback (F5) | Encourage_try | Encourage students to try again |
| | | Further_learning | Promote further learning for additional growth |
| | | Keep_up | Encourage students to keep up the good work |
| | | Stimulate_thinking | Stimulate students to think more about the work |
| | | Questioning | Question students' work |
| | | Purpose | Describe the purpose or goal of the learning tasks |
| | | Value | Highlight the value of the task |
| | | Normative_reference | Provide normative reference information |
| | | Prompting | Prompt students to take action to move forward |
| | | Self_efficacy | Enhance self-efficacy |
| | | Expectation | Show expectations of students' progress |
| | Feedback for enhancing time management (F7) | Timing | Prompt issues with submission timing |
| | Feedback to foster communication and help-seeking (F10) | Communication | Encourage communication |
| | | Help_seeking | Promote help-seeking |
| | | Other_support | Offer other types of support |
| Self level | Feedback as praise (F6) | Appreciation | Show appreciation of students' work |
| | | Self_positive | Show positive personal affect toward students |
| Emotional level | Emotional feedback (F11) | Emoji | Use emojis |
| | | Emotional_connect | Make emotional connections |
| | | Tolerance | Show tolerance |
| | | Sympathy | Show sympathy |
| | | Humor | Show humor and help students mitigate anxiety |

Acknowledgements

Not applicable.

Competing interests

The authors declare that they have no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
