Formative assessment involves a process of verification, assessment, and decision-making with the purpose of optimizing the teaching–learning process (Pinger et al.,
2018). Compared with final or summative assessment, it can increase both motivation and involvement of trainees and provide opportunities for the correction of errors (Silvers & Sarvis,
2020). Hence, formative assessment represents a learning experience itself, developing trainees’ responsibility, autonomy, and communication, thereby improving their capacity for self-reflection and academic achievement (Martínez et al.,
Teachers, in turn, tend to mimic and implement these formative assessment practices, drawing on their own experience as trainees (Hamodi et al.,
2017). Moreover, the effective integration of formative assessment has additional potential because it can offer an appropriate structure for sustained meaningful interactions through the development of effective online learning communities (Sorensen & Takle, 2005), which can be of special interest for preservice and inservice teachers.
Certainly, formative assessment can be easily adapted to the asynchronous nature of interactivity in online learning environments. Among other formats, discussion forums are a common formative assessment format, whether synchronous or asynchronous (Xiong & Suen, 2018), and can be useful for promoting trainer feedback, peer assessment, and automated feedback.
The use of discussion forums as a tool for providing formative assessment is well regarded by participants in online learning environments (Gaylard Baleni, 2015; Ogange et al., 2018). However, despite participants’ positive attitudes, research evidence suggests that their use involves a significant workload for trainers, who must monitor the online environment and the ensuing discussions between students (McCarthy, 2017). The difficulty of digesting a large amount of textual data in a relatively short time can seriously jeopardize the quality of the periodic formative assessment provided, despite the potential benefits of discussion forums and similar formats.
Different studies have focused on how the affective domain can influence the learning process and therefore needs to be considered within formative assessment. Participants can share interesting insights into the course topics and provide their impressions and affective states during the course through online forums and debates (Moreno-Marcos et al.,
2019).
The educational environment is a consistent determinant of students’ cognitive and affective outcomes (Fraser et al.,
2021). In particular, the emotional environment, or the quality of social and emotional interactions in the classroom—between and among students and teachers—influences students’ achievement (Reyes et al.,
2012). When an emotional climate is characterized by warm, respectful, and emotionally supportive relationships (a positive climate), students perform better, partly because they are more emotionally engaged in the learning process and enjoy it more (Reyes et al., 2012). In Kashy-Rosenbaum and colleagues’ (2018) experimental study, academic achievement was significantly higher in classrooms characterized by a positive emotional environment and significantly lower in classrooms characterized by a negative one.
Several limitations also arise when measuring emotional climate in online learning environments. Traditional instruments for measuring a group’s emotional climate are based on thoroughly designed questionnaires aimed at retrieving a subjective report from each participant about his/her psychological state. These include the questionnaires developed by Fraser et al. (
2021) for STEM classrooms or by Alonso-Tapia and Nieto (
2019) for high-school students. Other traditional instruments use observational tools to capture the behavioral activity to infer the emotional climate, such as the Classroom Assessment Scoring System of Pianta and colleagues (
2008). More recently, with the generalization of the use of sensors, implicit detection of emotion has involved measuring physiological data, such as heartbeat, facial expressions, etc. (Feidakis,
2016). However, none of these three approaches fits well in online learning environments. Firstly, behavioral observation and the measurement of physiological data are impractical at a distance. Secondly, the asynchronous nature of online learning environments (even though some activities can be carried out synchronously) makes it futile to try to measure a group’s emotional climate at a particular moment in time.
Therefore, to leverage the benefits of formative assessment in trainees’ outcomes, it is necessary to develop automated tools to help trainers to process data, so that they can provide authentic real-time feedback based on the emotional climate of their group of trainees. At this point, textual data mining becomes a necessary tool for facilitating formative assessment in online learning environments.
Using sentiment analysis (SA) to improve formative assessment in online learning environments
Sentiment analysis (SA) is defined as the computational field concerned with the contextual mining of unstructured text documents (containing opinions, sentiments, attitudes, evaluations, or emotions) so that structured and insightful knowledge can be obtained and employed for different tasks (Mite-Baidal et al.,
2018). In education, a recent literature review conducted by Kastrati and colleagues (
2021) identified 92 studies on SA of students’ feedback in online learning environments which, in general, analyzed trainees’ comments concerning various aspects of the trainers’ role. The results highlight the need for standardized solutions and for a stronger focus on emotional expression and detection, as the field is rapidly growing.
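To make the idea of SA over forum text concrete, the following is a minimal, purely illustrative sketch of a lexicon-based polarity scorer. The lexicon and the example posts are invented for illustration; real studies in the reviewed literature rely on validated SA models and lexicons rather than a toy dictionary like this one.

```python
import re

# Toy polarity lexicon (invented for illustration only).
LEXICON = {
    "helpful": 1, "great": 2, "clear": 1, "enjoy": 1,
    "confusing": -1, "frustrated": -2, "unclear": -1, "boring": -1,
}

def sentiment_score(text: str) -> float:
    """Average polarity of lexicon words found in `text`; 0.0 if none match."""
    tokens = re.findall(r"[a-z]+", text.lower())
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# Hypothetical forum posts from trainees.
posts = [
    "The examples were clear and really helpful.",
    "I felt frustrated; the instructions were unclear.",
]
scores = [sentiment_score(p) for p in posts]  # -> [1.0, -1.5]
```

Averaging such per-post scores over a period would yield the kind of coarse, group-level emotional-climate indicator that could feed periodic formative feedback, which is the use case motivating the automated tools discussed above.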
Research on this topic has identified gendered patterns of communication in participants’ uses of VLEs, which differ across specific areas. For example, in Sun and colleagues’ (2020) research on online technology communities, female users expressed positive emotions more frequently, whereas male participants were more likely to express impatience and dissatisfaction during technical learning. Similar patterns were found in the analysis of female and male posts in social media (Çoban et al., 2021). These authors also found that male users posted more positive messages as they grew older, while the reverse was observed for females; therefore, age would also influence the observed gendered patterns. In other areas, such as health communities, female users are more likely to seek emotional support (Liu et al., 2018) and to express more-negative emotions than male users, especially anxiety and sadness.
Females usually interact more in the VLE than their male peers (Oreski & Kadoic, 2018; Van Horne et al., 2018), although they usually participate to a lesser extent in the proposed activities, with contributions that contain fewer mistakes (Kickmeier-Rust et al., 2014). Shapiro et al. (2017) observed that females expressed more-negative views about their progress and self-perceived evaluation. These findings align with previous results in the literature showing that female students usually underestimate their abilities compared with their male peers with similar achievements, especially in mathematics, computing, and the social sciences (Huang, 2013). However, no previous literature was found on the use of SA to explore gender differences in the communication patterns (and sentiment expression) of trainees in higher education, which could jeopardize the use of SA as a generalized formative assessment tool.
In summary, previous studies identify a need to develop more-accessible ways of performing SA as a tool for measuring the emotional climate of a group in online learning environments. Moreover, although other personal factors might also influence how students express themselves (e.g., age, race, and socioeconomic level), there is a need to integrate a gender perspective when developing SA techniques, not only to develop a deeper understanding of participants’ interactions, but also to design more-inclusive tools and implement a better, more-targeted formative assessment.