Unlocking Gameplay Insights with Epistemic (Ordered) Network Analysis: Understanding the Potential of Video Games to Foster Authentic Scientific Practices in STEM Education
Abstract
This study investigates student learning and interest within the context of a single-player, open-world game designed for microbiology inquiry. The game immerses players in the role of investigative scientists tasked with diagnosing a mysterious illness on a remote island. Ordered Network Analysis (ONA) was combined with clustering techniques to analyze in-game actions (i.e., interactions with non-playable characters, exploration, and utilization of in-game educational tools) allowing us to construct student archetypes based on the behavioral patterns of 122 middle schoolers. The analysis identified four distinct clusters of students with varying engagement patterns—two showing apparent patterns of engagement and two showing apparent patterns of disengagement. The study contributes insights into tailoring educational game designs to address disengaged or ineffective behaviors, enhancing the efficacy of game-based learning experiences.
Introduction
Well-designed educational games can serve as virtual environments that simulate socioculturally situated contexts, where students can model the skills, knowledge, identities, values, and epistemologies that often define specific communities of practice (Gee, 2017; Shaffer, 2006). In particular, features of well-designed games seem to be useful for helping to support situational interest (Kiili et al., 2021), which is a necessary condition for students who do not have enough knowledge to have developed individual interest (Hidi & Renninger, 2006) and who may not otherwise have enough self-efficacy to persist in the face of struggle (Britner & Pajares, 2006). Research has highlighted the motivational potential of game-based learning, most notably as springboards into deeper content engagement and subsequent educational success (Laine & Lindberg, 2020; Squire, 2012). To leverage these benefits, educational games have been designed to explore complex scientific practices in STEM in areas such as limnology (Gagnon & Swanson, 2023; Gaydos & Squire, 2010), urban planning (Foster et al., 2018), biomedical engineering (Ruis et al., 2019), ecosystems science (Dede et al., 2017), and microbiology (Cloude et al., 2020; Rowe et al., 2011), among others.
Although motivational and educational benefits have been found for specific games in specific learning contexts (e.g., Kim & Shute, 2015; Kim et al., 2022), summary reviews of game-based learning literature point to a more inconsistent picture of the benefits of games (Staaby, 2021; Young et al., 2012). Further studies of the situated behaviors of students when learning with games are needed to tailor pedagogical practices before, during, and after play in ways that optimally support learning and interest development (Bado, 2022).
Researchers in human–computer interaction have worked for decades to identify patterns of play styles and preferences and categorize learners based on these archetypes (Bartle, 1996; Cömert & Samur, 2023). Understanding play styles has potential to enhance the affordances of educational games by informing tailored experiences that are responsive to individual and group needs (Tondello et al., 2017). Further work is needed to explore and understand patterns of play by groups with greater nuance, particularly as it relates to learning engagement and success. In this paper, we first seek to determine whether or not we can construct student archetypes from their in-game behaviors (RQ1). We then examine the degree to which these behavioral archetypes reflect motivational factors, such as interest and self-efficacy (RQ2). We use Ordered Network Analysis and clustering techniques to identify different archetypes in student behaviors within the context of Crystal Island, a single-player, open-world inquiry game focused on microbiology content. These archetypes are then compared with learning outcomes, motivational measures, and external video-game practices and preferences in order to better understand these patterns.
Relevant Literature
Educational Games Research
Educational games offer students the possibility to engage with realistic settings and science practices that were previously difficult to simulate in the classroom. This includes contexts that are impractical, such as deep-sea ocean ecosystems (Liu et al., 2023b), inaccessible, such as historical events (Dede et al., 2017), or that carry risk or ethical concerns, such as simulated experiences with racial bias (Pribbenow et al., 2021). Games for learning can also serve as interactive tools that support tinkering and engagement with real-world complexity, such as the virtual Environment Land Science, which presents an interactive city model that students can quickly modify and re-zone as urban planners to test thresholds for economic and environmental gains (Foster et al., 2018). These features of games offer researchers access to unprecedented data about students’ complex interactions with these learning environments that can be used to better understand player behaviors and inform interactive game improvements (Gagnon & Swanson, 2023).
To that end, modeling student actions through game data has adopted multiple approaches in prior work. Some research has used strategies for identifying features/codes such as player affect based on patterns of action or interaction in player log data. These approaches typically leverage modeling techniques used with intelligent tutoring systems, refining interaction-based detectors (Baker & Ocumpaugh, 2014) to reliably identify affective states such as confusion, frustration, engaged concentration, boredom, and delight (Kai et al., 2015; Baker et al., 2014). Although these approaches are essential for providing targeted on-demand support for students during play, designing supports for specific play styles might help to improve their efficacy.
For this purpose, other researchers have developed frameworks for categorizing players by play styles or archetypes based on their shared patterns. The foundational player taxonomies framework was offered by Bartle (1996), who categorized players as Achievers versus Explorers in terms of content engagement, and Socializers versus Killers in terms of peer engagement. Frameworks for characterizing player types have since expanded in detail and complexity to group players by goals (e.g., Bateman & Boon, 2005; Busch et al., 2016), preferences (e.g., Nacke et al., 2014), or degree of skill (e.g., Drachen et al., 2009). Cömert and Samur (2023) synthesized existing player typologies into a seven-group taxonomy of players with a focus on how different styles may align with specific educational practices and outcomes. Although valuable as a summative overview, their typology is not grounded in experimental research and may require further validation. Other frameworks offer insights into the ways that students may play and learn with games, but may also have limited generalizability to new games or contexts. For instance, Slater et al. (2017) mapped the results of a latent cluster analysis on logs of player action in Physics Playground to Bartle's four categories. Their work was able to identify Achievers and Explorers but categorized Killers and Socializers into a single disengaged subgroup, likely due to the single-player nature of the game.
Bottom-up approaches that identify player types as they emerge from the data may help to address this issue. Researchers such as Swanson et al. (2022) used k-means clustering to identify player types based on their action clusters (which actions they engage in more or less frequently). The benefit of this approach is that it provided visual comparisons across groups that could be explored and explained based on the data and context. On the other hand, the action cluster models only show which actions are performed more and less often by each play style. Techniques for visual and statistical comparison of sequential data, such as ordered network analysis, have the potential to offer more nuanced insights into play styles, such as which actions/features are more likely to be paired with others, and in what order. While research on games and learning is well-represented in the Quantitative Ethnography community (e.g., Barany & Foster, 2019; Liu et al., 2023a; Paquette et al., 2021; Shaffer, 2006), work that leverages clustering analyses toward the identification of player types represents a valuable methodological and practical innovation.
Interest Development
Research on interest development shows that it is a multi-staged process in which well-developed interest is closely aligned with knowledge development (Schiefele, 1999; Schraw & Lehman, 2001). In fact, Hidi and Renninger’s (2006) four-stage model of interest suggests that well-developed individual interest in a subject develops alongside content knowledge, meaning that, in the early stages of learning a subject, students are characterized by interest in the situational affordances of the learning context.
Hidi and Renninger's work aligns with other theories of interest, which attempt to distinguish between intrinsic and extrinsic motivation. For example, Deci and Ryan (1985, 2012) proposed Self-Determination Theory (SDT), which they used to develop the Intrinsic Motivation Inventory (IMI). This tool identifies six factors related to interest development: interest/enjoyment, perceived competence, effort/importance, pressure/tension, value/usefulness, and relatedness. The IMI has also been broadly used in the literature on game enjoyment (Tyack & Mekler, 2020), as its ability to capture multiple facets of interest development has proven useful.
Different theoretical approaches to interest may suggest different ways to support it. Whereas Hidi and Renninger (2006) characterize interest by affective, cognitive, and value-based dimensions, SDT focuses on students’ needs for autonomy, competence, and social relatedness. Work to reconcile these theories has suggested that needs identified by SDT may be more relevant during the situational interest phase, but research has also shown age-based effects (i.e., younger students having stronger affective and value-based needs, and older students needing more autonomy) and cultural differences (see discussion in Renninger & Hidi, 2011).
The reciprocal relationship between knowledge and interest in the later phases contributes to the ways in which students perceive themselves. In particular, mastery experiences with a topic often contribute more strongly to a student’s sense of self-efficacy—belief in the ability to succeed—than other sources of this belief (Britner & Pajares, 2006). In turn, self-efficacy sometimes contributes more strongly to later achievement than more accurate assessments of the student’s skills (Bandura, 1986; Caprara et al., 2011; Zysberg and Schwabsky, 2021).
Methods
Context and Participants
In this study, we analyzed data from 122 middle school students who engaged with the game Crystal Island as a component of their standard science curriculum. This experience took place over two days in an urban school in the southeastern United States. The gender composition of our participant group was relatively balanced, with 53 students identifying as male, 66 as female, and 5 opting to self-describe in another manner. The demographic distribution of our sample reflected a diversity of backgrounds, with significant representation from groups historically underrepresented in STEM: 46% of the students were Black, 16% Hispanic, 5% Asian, 5% Multiracial, and 1% Native American.
Learning Environment
Crystal Island is a single-player, open-world game designed to foster interest and improve inquiry-based learning in the field of microbiology (Rowe et al., 2011). In this game, players take on the role of an investigative scientist tasked with diagnosing a mysterious illness that has afflicted a research station. Their objective is to identify both the pathogen responsible for the disease and its medium of transmission, all within the confines of a virtually simulated remote island. The game was designed to align with curriculum standards for middle grades science, including the North Carolina Standard Course of Study Essential Standards for Microbiology (8.L.1; North Carolina Department of Public Instruction, 2015). Its problem-solving activities emphasize the nature and practice of scientific inquiry as called for by the Next Generation Science Standards (National Research Council, 2013). Through gameplay, students develop and apply microbiology content knowledge related to disease spread, treatment, and prevention, as well as scientific reasoning skills to understand the symptoms, carriers, and trends in an outbreak.
The gameplay begins in a tutorial area where players are introduced to the fundamental mechanics of the game. Here, they learn essential skills such as interacting with non-playable characters (NPCs) to gather information, picking up objects, collecting readings, and submitting concept matrices with the main ideas of the readings. These concept matrices are requested at the end of every reading. Upon completing the tutorial, players are directed to the infirmary to receive further instructions, although they are free to explore the island as they choose. The game’s locations are arranged in a circular layout, as depicted in Fig. 1, encouraging players to visit each site, engage with NPCs, and collect reading materials scattered throughout all these locations. As players collect information, they are expected to gather objects across the island and scan them in the laboratory, formulating and testing hypotheses regarding the potential virus or bacteria causing the outbreak.
Fig. 1
Overview of Crystal Island with the expert “golden pathway” for game completion as operationalized by Sawyer et al. (2018)
Throughout the game, a worksheet (embedded in the game) is available for players to systematically record their observations, interactions, and experimental results. This tool aids in organizing the data collected from readings, conversations, and laboratory tests. Once they identify the transmission object and the specific virus or bacterium that caused the outbreak, players must return to the infirmary to suggest a treatment. Successfully proposing the correct treatment concludes the game.
Experimental Process
To collect the data analyzed in this work, four members of the research team helped to administer the game during regular science classes. Two researchers acted as observers and did not engage with the students. The other two provided introductions and technical support to students. The study lasted four days across two different science classrooms, with two days dedicated to each classroom. Each class period was about 45 min long, with four class periods per day. Students first logged into the game system with a unique, anonymous ID and completed a pre-survey, pre-test, and introductory video, taking about 15 min. Students then played the game for about 60 min split over the two days. Finally, they completed a post-survey and post-test in the remaining 50 min. Students’ survey responses and gameplay were automatically logged through the game system.
Learning and Motivational Measures
At the outset of the study, students began by taking a set of established, validated surveys, including a demographic questionnaire, a science content pre-test, scaled from 0 to 17, and surveys for Self-Efficacy (as outlined by Britner & Pajares, 2006), Situational Interest (following the model of Linnenbrink-Garcia et al., 2010), and game literacy on a scale from 0 to 3, where 0 indicated no video game play per week, and 3 represented more than 10 h of play per week. These surveys were selected to capture students’ initial knowledge, interest, self-efficacy, and game literacy—factors we hypothesized would influence their interaction with the game.
After completing the surveys and pre-test, the students engaged with Crystal Island during two class periods, spanning two days. Upon completing the game, they were asked to fill out a series of follow-up surveys. These post-game surveys included a science content post-test, which was identical to the pre-test to measure knowledge gains, as well as evaluations of various motivational constructs. Specifically, these constructs included the "catch" and "hold" subscales of situational interest (according to Knogler et al., 2015) and five subscales of the Intrinsic Motivation Inventory (IMI; developed by Deci & Ryan, 2007): interest-enjoyment (IE), perceived competence (PC), effort-importance (EI), pressure-tension (PT), and value-utility (VU). Unlike the earlier self-efficacy and situational interest surveys, these subscales assessed students' perceptions of their experience with the game and were therefore only administered at the end of the learning experience. We deliberately avoided repeating the self-efficacy and situational interest surveys to minimize the risk of survey fatigue or negative feelings from an overload of questions (Porter et al., 2004). All subscales were re-validated on our dataset using Cronbach's alpha. A summary of these measures and their corresponding Cronbach's alpha values is shown in Table 1. All subscales show a Cronbach's alpha above 0.6, with most exceeding 0.85, indicating acceptable internal consistency.
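The reliability check can be illustrated with a minimal sketch of Cronbach's alpha; the Likert-style item responses below are hypothetical placeholders, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 students x 4 items)
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Values above roughly 0.6 are often treated as acceptable and above 0.85 as strong, which is the interpretation applied to Table 1.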
The coding framework for this study draws on work by Zambrano et al. (2023), who analyzed a different cohort of students interacting with an earlier version of Crystal Island. Their approach utilized in-game actions and locations as the constructs for conducting an ONA. Building upon these initial codes, we aimed to refine and expand them to capture a broader spectrum of student interactions within the game. Our goal was to delineate patterns of inquiry and exploration, as well as identify potential indicators of challenges faced by students.
We initially identified 16 distinct codes representing the full range of possible actions within Crystal Island. However, this extensive set of codes resulted in a high number of potential transitions between each pair of constructs. Consequently, the number of features used for the cluster analysis (see the "Clustering" section) exceeded our participant count. This high-dimensional feature set produced a low silhouette score (SS = 0.11), indicating weak clustering validity. To mitigate the "curse of dimensionality"—a term describing the challenges that arise with increasing data dimensions (see Verleysen et al., 2005)—we removed four codes that were uncommon across all the student clusters. This reduction to 12 codes improved the silhouette score (SS = 0.15).
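The effect of pruning sparse codes on clustering validity can be sketched as follows. The synthetic feature matrix, the noise columns standing in for rare codes, and the choice of k = 4 are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the transition-frequency matrix: 122 students,
# 30 informative features forming 4 behavioral groups, plus 10 sparse,
# mostly-zero noise columns playing the role of the rare codes.
X_info, _ = make_blobs(n_samples=122, n_features=30, centers=4, random_state=0)
noise = rng.binomial(1, 0.03, size=(122, 10)) * rng.normal(scale=5.0, size=(122, 10))
X_full = np.hstack([X_info, noise])

def sil(X, k=4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return silhouette_score(X, labels)

# Pruning near-empty columns tends to raise the silhouette score
print(f"40 features: {sil(X_full):.2f}  pruned to 30: {sil(X_info):.2f}")
```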
The 12 resulting codes and their corresponding umbrella categories are detailed in Table 2. Readers should note that although ENA and ONA typically report interrater reliability (IRR) between coders, the extraction process required for producing the base codes from students’ log files does not require that type of validation. Each code represents an event that occurred during gameplay. When an event occurs, it is recorded as a new utterance and coded according to the corresponding behavior and umbrella category. For instance, if a student opens and closes an article in less than 10 s, the event qualifies as Rushed Reading, and a new utterance is coded with Rushed Reading under the umbrella category of Rushing. For this reason, there is no ambiguity in the initial coding scheme (students’ behavior was either present or absent as defined).
Table 2
Codebook: the 12 final codes, grouped by umbrella category

Rushing
Rushed Reading: less than 10 s of reading for each article
Rushed Conversation: less than 5 s interacting with an NPC (repeated conversations are not considered), or less than 1 SD below the median time spent speaking with that specific NPC

No Reflection
No Reflection after Scanning: 3 consecutive hypothesis tests without adding data to the worksheet, reading again, or conversing with an NPC, with less than 20 s between each scan
No Reflection after Reading: 3 consecutive concept matrix submissions in less than 10 s

Worksheet Entries
The student adds an element to the worksheet

Long Conversation
More than the median time spent speaking with the specific NPC, or more than 20 consecutive seconds

Scan After Exploration
Scan After Reading: the student read for at least 5 min before testing a hypothesis
Scan After Conversation: the student had at least 3 min of conversations before testing a hypothesis

Long Time Outside
Excessive Time Tutorial: more than 10 min in the tutorial
Returning to Tutorial: returning to the tutorial after visiting any other location
Excessive Time Outside: 2 consecutive minutes outside

Inside Not Actions
2 consecutive minutes inside without conducting any additional action except for reading (which can take more than 2 min)
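The deterministic, rule-based extraction described above can be sketched for a single code. The log-record format and field names below are assumptions for illustration (the real Crystal Island logs differ); only the 10 s Rushed Reading threshold comes from the codebook.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical log-record format; the actual game logs are structured differently.
@dataclass
class ReadingEvent:
    student_id: str
    opened_at: float   # seconds since session start
    closed_at: float

RUSHED_READING_THRESHOLD = 10.0  # seconds, as defined in the codebook

def code_reading(event: ReadingEvent) -> Optional[Tuple[str, str]]:
    """Return (code, umbrella_category) for one reading event, or None."""
    duration = event.closed_at - event.opened_at
    if duration < RUSHED_READING_THRESHOLD:
        return ("Rushed Reading", "Rushing")
    return None  # reading at a normal pace triggers no code in this sketch
```

Because each rule is a fixed threshold over logged timestamps, a behavior is either present or absent, which is why no interrater reliability check is needed for this step.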
We grouped these 12 codes into 7 umbrella categories to enhance the interpretability of the ONA models while reducing the number of dimensions considered in the clustering analysis. To define these 7 umbrella categories, we first generated an initial round of ordered networks and performed a clustering analysis to examine those networks for thematic similarity and similar network patterns. For example, we noticed that students who rushed their reading practices also rushed their conversations with NPCs. These nodes were not only more common among certain clusters of students, but they were also semantically similar. We therefore collapsed these two codes under one construct labeled Rushing. We applied the same approach to define the constructs No Reflection, Scan After Exploration, and Long Time Outside. This final reduction to 7 umbrella codes simplified the analysis while enhancing its coherence and interpretability; it also improved the silhouette score (SS = 0.23), indicating a better goodness of fit.
Ordered Network Analysis (ONA)
The purpose of this research is to categorize student archetypes and explore the relationships between their prior knowledge, motivational levels, learning outcomes, and specific actions taken within the game. To accomplish this, we utilize Ordered Network Analysis (ONA), as introduced by Tan et al. (2022), a method previously applied to analyze log data from Crystal Island and recognize differences in the in-game actions between high and low-learning students (Zambrano et al., 2023). Similar to Epistemic Network Analysis (ENA; Bowman et al., 2021; Shaffer et al., 2016), ONA uses a moving window to identify connections between the constructs in the lines within that window. However, ONA accounts for the order in which the studied constructs appeared in the data. ONA distinguishes between the strength of the connection when action A is followed by action B (e.g., the student talks to an NPC and then enters an item in the worksheet), and the strength of the connection when action B is followed by action A (e.g., the student enters an item in the worksheet and then talks to an NPC). ONA also calculates connections that involve self-transitions (e.g., repeatedly talking to an NPC). This analysis allows a more detailed depiction of student behavior by illustrating both the direction and strength of actions (represented by bi-directional edges between nodes) and the frequency of construct repetition (indicated by node sizes) in the game, offering a deeper understanding of how students interact with the game, and revealing nuanced patterns of behavior that correlate with their learning processes and motivational aspects.
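The ordered accumulation can be illustrated with a simplified stand-in: directed transition counts within a moving window. The full ONA model of Tan et al. (2022) involves additional normalization and projection steps, and the code sequence here is hypothetical.

```python
from collections import Counter

def ordered_counts(codes, window: int = 4) -> Counter:
    """Count directed transitions (earlier -> current) within a moving window.

    The code on each line is connected to the codes on the previous
    window - 1 lines; the pair (A, B) means A preceded B. Self-transitions
    (A, A) are tallied when a code recurs inside the window.
    """
    counts = Counter()
    for i, current in enumerate(codes):
        for prev in codes[max(0, i - (window - 1)):i]:
            counts[(prev, current)] += 1
    return counts

# Hypothetical action sequence from one student's play session
session = ["Conversation", "Worksheet", "Conversation", "Conversation", "Scan"]
counts = ordered_counts(session)
# ("Conversation", "Worksheet") and ("Worksheet", "Conversation") are
# tallied separately, preserving the order of the two actions.
```

Because (A, B) and (B, A) are distinct keys, the resulting network can weight the two directions of an edge independently, which is the property that distinguishes ONA from undirected ENA.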
In this study, the grouping variables used for Ordered Network Analysis (ONA) were the identified clusters (see the “Clustering” section). Both the units and the conversation variable—which serves as a boundary for connections calculated by the previously described moving window—corresponded to each individual student’s play session. This ensured that data from one student was not linked with data from another. Although previous research analyzing gameplay behaviors with non-linguistic data has used variables like game levels to segment data (Karumbaiah et al., 2019), we avoided breaking the data into more detailed stanzas. This decision was based on the short duration of Crystal Island gameplay (Avg = 41.6 min, SD = 15.6) and the absence of natural break points.
To define the length of the moving window (which determines how far back codes from one line are associated with codes from previous lines), we used the standard length of 4, commonly employed in many studies that use ENA and ONA (Siebert-Evenstone et al., 2017). We tested various window lengths, ranging from 2 to 10, but observed no substantial differences in the results. We chose the moving window approach over the infinite stanza because players’ initial actions were similar across the board, and recent actions are typically more relevant to current actions than earlier ones.
Clustering
Clustering is an unsupervised machine learning technique used to group sets of objects so that objects in the same group (called a cluster) are more similar (in some sense) to each other than those in other groups. It is a common technique for data analysis employed to discover the inherent groupings in the data (Jain et al., 1999). Mathematically, clustering algorithms typically measure similarity or distance between data points using metrics such as Euclidean distance (the straight-line distance between two points in multidimensional space), Manhattan distance (sum of the absolute differences of their Cartesian coordinates), or more complex measures tailored to specific data types. For example, the k-means clustering algorithm (MacQueen, 1967; Arthur & Vassilvitskii, 2007) assigns each point to the cluster with the nearest mean. This process iteratively updates cluster centroids (the mean point of all the points in the cluster) and reassigns points to the cluster whose centroid is closest, minimizing within-cluster variances and aiming to produce compact, homogeneous clusters.
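The mechanics described above (Euclidean distances, assignment to the nearest centroid, centroid updates) can be shown with a minimal from-scratch k-means on toy data; this is an illustration, not the implementation used in the study.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: assign each point to its nearest centroid, recompute
    centroids as cluster means, and repeat until assignments stabilize."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Euclidean distance from every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated toy groups of two points each
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels, centroids = kmeans(X, k=2)
```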
To categorize students into archetypes based on similar in-game behavioral patterns, we utilized clustering analysis, specifically k-means clustering. This clustering was applied considering the frequency of directed transitions between different activities (e.g., from Conversations to Worksheets) and the frequency of repeated activities (e.g., from Conversation to Conversation) for each student as features. We employed the rENA package to quantify the frequency of these transitions (Marquart et al., 2019). The frequency of these transitions was normalized using the total number of transitions conducted by each student. This unitization technique mirrors the normalization employed in traditional ENA (Bowman et al., 2021). Transitions that no student made (i.e., those with 0 occurrences and a normalized weight of 0 for all students) were excluded. All transitions made by at least one student were included.
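The normalization step (performed in the study with the rENA package in R) can be sketched as a NumPy equivalent; the raw transition counts below are hypothetical.

```python
import numpy as np

# Hypothetical raw directed-transition counts for three students
# (columns: Conversation->Worksheet, Worksheet->Conversation, Scan->Scan).
raw_counts = np.array([
    [8, 2, 10],
    [1, 1, 2],
    [0, 5, 5],
], dtype=float)

# Divide each student's row by their total number of transitions so that
# students who simply acted more often do not dominate the feature space.
features = raw_counts / raw_counts.sum(axis=1, keepdims=True)

# Transitions no student ever made (all-zero columns) would be dropped
# here; in this toy matrix every column is kept.
kept = features[:, raw_counts.sum(axis=0) > 0]
```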
The variance explained by the first two principal components of our model (which generate the ONA visualization) was 28.1% and 15.6%, respectively. This is typical for variance explained in ENA and ONA visualizations. To mitigate the loss of explained variance inherent in dimensional reduction (in this case, more than 50%), we conducted clustering analysis in the high-dimensional space prior to implementing the dimensional reduction (SVD) required for ENA’s visualizations. We employed silhouette analysis (Kaufman & Rousseeuw, 2009), using the Sci-Kit Learn library in Python (Pedregosa et al., 2011), to determine the optimal number of clusters. Silhouette values, which vary between − 1 and 1, measure the degree to which an object is similar to its own cluster (cohesion) as opposed to different clusters (separation). We calculated silhouette values for a range of 2 to 20 clusters, ultimately selecting the cluster count (N = 4) that produced the highest silhouette score (which is the average across all the individual silhouette values of each unit of analysis). After determining the cluster for each student, these clusters were then utilized as grouping variables to construct networks using the WebENA tool (Marquart et al., 2021).
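The selection of the cluster count can be sketched with scikit-learn. The synthetic matrix below stands in for the high-dimensional transition features, so the resulting scores are illustrative only; the scan over 2 to 20 clusters mirrors the procedure described above.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the high-dimensional transition features
X, _ = make_blobs(n_samples=122, n_features=20, centers=4, random_state=1)

scores = {}
for k in range(2, 21):  # candidate cluster counts, as in the study
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # mean over all units of analysis

best_k = max(scores, key=scores.get)  # cluster count with the highest score
```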
Results
Clustering of ONA Results
Silhouette analysis revealed that four clusters optimally balance the cohesiveness within clusters and separation between them. The distinct patterns of in-game actions of students within these four clusters are shown in Fig. 2, which (for visual clarity) excludes the smallest line weights (LW < 0.1). For simplicity, these four clusters will be referred to as the Roamers (Fig. 2a), the Conversers (Fig. 2b), the Worksheet Users (Fig. 2c), and the Scanners (Fig. 2d). Additionally, Table 3 presents all line weights for each cluster, highlighting in gray the group most likely to perform a given transition, in order to give an overview of how these clustering patterns emerged. The most common self-transition (node) for each cluster is marked with an asterisk, and line weights large enough to display in Fig. 2 (LW ≥ 0.1) are given in bold.
Fig. 2
Four groups identified by the cluster analysis
Table 3
Weighted associations table. The maximum value for each row (i.e., transition) is highlighted in gray; LW ≥ 0.1 are given in bold. Asterisks mark the most common node (i.e., self-transition) for that cluster
In ONA, models are visualized using directed network graphs. Edge thickness represents the relative frequency of co-occurrence and the direction of transitions between two codes. For instance, in Fig. 2a, the connection between the codes long time outside and long conversation (LW = 0.46) is stronger than the connection between long conversation and worksheet (LW = 0.10). The chevron on each connection indicates the predominant direction of the transition. In the same figure, for the connection between long time outside and long conversation, the chevron points toward long time outside, indicating that the transition from long conversation to long time outside (LW = 0.46) is more frequent than the reverse transition (LW = 0.20). Additionally, the size of the nodes represents the frequency of self-references or self-transitions (when the same code is repeated within the moving window). In Fig. 2a, for example, students more frequently repeat the action long time outside (LW = 0.67) than the long conversations with NPCs (LW = 0.14). These line weights are directly comparable with the line weights of other edges in the graph. Each plotted point in the models represents the summary pattern for a single student's data, which can be interpreted in relation to the ordered networks. For example, a student in the lower right quadrant is more likely to have transitions including the construct long time outside (Fig. 2a). Finally, the four unit means with confidence intervals represent the summary pattern of the student points in each cluster.
Our results show that the nodes (or self-transitions) with the largest line weights for each cluster are closely related to its other strong transitions. The Roamers are best characterized by spending a long time outside, as demonstrated by that action’s node (LW = 0.67) and its presence in the three transitions with the next highest line weights (LW = 0.15 to LW = 0.46). The Conversers also show higher line weights for long time outside but are best characterized by long conversations (node LW = 0.28), which appear in six of their strongest transitions (LW = 0.12 to LW = 0.29). Notably, the actions associated with the Roamers and Conversers seem to indicate disengagement.
In contrast, the Worksheet Users and Scanners are characterized by more productive actions. The Worksheet Users’ largest node involves worksheet entries (LW = 0.57), as do three of their other top transitions (LW = 0.12 to LW = 0.32). The Scanners’ largest node is scan after exploration (LW = 0.81), which is present in four of their other top transitions (LW = 0.13 to LW = 0.15). Unlike the first two clusters, the Worksheet Users and Scanners show more transitions that seem to indicate productive engagement with the game.
Learning Measures by Cluster
Table 4 presents the number of students who completed the pre- and post-tests and who solved the game, along with the scores and learning gains achieved on those tests. In general, there was high attrition between the pre- and post-tests, which classroom teachers attributed to a post-COVID lack of engagement that year. There are trends in the learning measures, but these should be interpreted cautiously given the high standard deviations.
Table 4
Learning measures. The maximum value for each row is highlighted in gray. Variables where a Mann–Whitney U test indicated a statistically significant difference between the low-engagement groups (Roamers and Conversers) and the high-engagement groups (Worksheet Users and Scanners) are shown in bold
In general, the two groups that showed the least behavioral engagement (the Roamers and Conversers) performed worse on these measures, while the groups that showed higher behavioral engagement also had higher learning measures. A Mann–Whitney U test (W = 382, p = 0.050) revealed marginally significant differences in post-test scores between the low behavioral engagement groups (Roamers and Conversers) and the high engagement groups (Worksheet Users and Scanners). No significant differences were observed in pre-test scores across the groups (W = 1920.5, p = 0.31). Although the Scanners had the highest pre-test score (6.58), they did not show any gains on the post-test. The Worksheet Users did not start with the highest pre-test score but were the only group to show a positive learning gain. Both of these groups also showed substantially higher rates of solving the game.
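For readers less familiar with the Mann–Whitney U test used throughout this section, the statistic is computed from rank sums over the pooled samples. The pure-Python sketch below is a simplified illustration (average ranks for ties, no p-value; in practice the p-value comes from a statistics library such as scipy.stats.mannwhitneyu):

```python
def mann_whitney_u(a, b):
    """Mann–Whitney U statistic for two independent samples.

    Ranks the pooled data (ties receive average ranks), then compares
    each group's rank sum against the minimum it could have achieved.
    """
    pooled = sorted((value, index) for index, value in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        average_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = average_rank
        i = j + 1
    rank_sum_a = sum(ranks[:len(a)])
    u_a = rank_sum_a - len(a) * (len(a) + 1) / 2
    u_b = len(a) * len(b) - u_a
    return min(u_a, u_b)
```

Because the test compares ranks rather than raw scores, it is robust to the high standard deviations noted above, which is one reason it suits these group comparisons.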
Motivational Measures by Cluster
Table 5 presents the nine motivational measures for each cluster. Overall, motivational scores are lower for the Roamers and Conversers—whose behavior appeared more disengaged in Fig. 2—than for the Worksheet Users and Scanners. After applying the Benjamini–Hochberg correction (Benjamini & Hochberg, 1995), Mann–Whitney U tests revealed significant differences in initial situational interest (W = 2382.5, p = 0.001) and marginal differences in initial self-efficacy (W = 2167.5, p = 0.034) between the low behavioral engagement groups (Roamers and Conversers) and the high engagement groups (Worksheet Users and Scanners). The Roamers, in particular, have the lowest scores on all nine measures. Conversely, the Worksheet Users have the highest motivational scores on both Linnenbrink-Garcia et al.’s (2010) Situational Interest (SI) scale and Britner and Pajares’ (2006) Self-Efficacy (SE) scale.
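The Benjamini–Hochberg step-up procedure used for this correction is simple to state: sort the p-values, find the largest rank k with p(k) ≤ (k/m)·q, and reject the k smallest. A pure-Python sketch follows; the p-values in the example are hypothetical, and in practice libraries such as statsmodels implement the same correction.

```python
def benjamini_hochberg(p_values, q=0.05):
    """Step-up FDR control: True means the hypothesis is rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Largest 1-based rank whose sorted p-value clears (rank / m) * q.
    cutoff = 0
    for rank, index in enumerate(order, start=1):
        if p_values[index] <= rank / m * q:
            cutoff = rank
    # Reject every hypothesis at or below the cutoff rank.
    reject = [False] * m
    for rank, index in enumerate(order, start=1):
        reject[index] = rank <= cutoff
    return reject

# Hypothetical p-values, for illustration only.
flags = benjamini_hochberg([0.001, 0.005, 0.2, 0.008, 0.015], q=0.05)
```

Unlike a Bonferroni correction, which divides the threshold equally across all m tests, the step-up rule lets smaller p-values in a family rescue borderline ones, controlling the expected proportion of false discoveries rather than the chance of any single one.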
Table 5
Motivational measures. The maximum value for each row is highlighted in gray. Variables where a Mann–Whitney U test indicated a statistically significant difference between the low-engagement groups (Roamers and Conversers) and the high-engagement groups (Worksheet Users and Scanners) are highlighted in bold
Within the IMI scores, other trends are also noticeable. Significant differences were found between the low and high behavioral engagement groups for perceived interest-enjoyment (W = 732.5, p = 0.003) and perceived competence (W = 692.5, p = 0.015). Both interest-enjoyment and perceived competence were particularly low for the Roamers, a trend that persisted across all IMI subscales, even those that did not show statistically significant differences. In contrast, the Scanners had the highest scores across all IMI subscales, as well as the Catch and Hold subscales of Knogler et al.’s (2015) situational interest measure. Notably, the highest IMI subscale for the Roamers (IMI range = 2.57 to 3.20) was pressure-tension—a measure of external motivation that was the lowest-valued IMI subscale for both the Worksheet Users and Scanners.
Video Game Practices and Preferences by Cluster
Table 6 presents the results of a survey designed to understand students’ game-playing practices and preferences outside of school. The Worksheet Users and Scanners, who have the highest levels of motivation across various measures, also spend more time playing video games each week. This difference is mainly observed in the proportion of students in each group who game for more than 5 h per week: 60.0% of Worksheet Users, 41.4% of Scanners, 33.4% of Roamers, and 30.0% of Conversers. Game preferences among the Conversers, Worksheet Users, and Scanners showed notable similarities: action and adventure games emerged as the top choices across these groups, while puzzle and sports games were the least favored. This contrasts with the Roamers, who prefer simulation and sports games over adventure games. These results suggest that students’ weekly hours of play and their preference (or lack of preference) for adventure games might be associated with in-game actions, game completion, and external measures.
Table 6
Game literacy/preference measures. The maximum N for each row is highlighted in gray
Discussion and Conclusion
In the exploration of student learning and interest within the microbiology inquiry game Crystal Island, the integration of Ordered Network Analysis (ONA) and clustering techniques provided nuanced insights into middle school students’ engagement patterns. The identified clusters, marked by distinct in-game behavioral tendencies, illuminate the interplay between gameplay and educational outcomes.
In examining the clusters and their behavioral engagement, two distinct patterns emerged. The Roamers and Conversers exhibited signs of disengagement, characterized by extended periods spent outside any location where students could acquire the content or hints needed to solve the game’s mystery, or by prolonged conversations followed by rushing or inactivity. These clusters demonstrated lower levels of active participation, reflected in fewer worksheet entries, fewer objects scanned (hypotheses tested), and limited exploration actions.
The learning outcomes mirrored the observed behavioral patterns. Clusters with lower engagement, namely Roamers and Conversers, tended to display less favorable learning measures, including lower pre-test scores, post-test scores, and learning gains. When comparing the groups with positive outcomes, the Worksheet Users showed higher post-test scores despite starting with a slightly lower pre-test score than the Scanners. Like the two disengaged groups, the Scanners did not exhibit significant learning gains.
In general, motivational measures aligned with the observed behavioral patterns, providing further context. The Roamers and Conversers showed lower situational interest, self-efficacy, IMI scores, and Catch and Hold scores, indicating diminished engagement. In contrast, the Worksheet Users and Scanners showed higher situational interest, suggesting they were at least interested in the game experience, if not the domain content. Notably, the Scanners exhibited higher scores across most motivational measures, which aligns with their higher pre-test scores. However, the Worksheet Users’ pre-test scores were very close to those of the Scanners, and they were the only group to show positive learning gains.
Additional context was provided by a survey on students’ game-playing practices and preferences outside of school. The more engaged groups (Worksheet Users and Scanners) reported spending more time playing video games outside of school. Action and adventure games were favored across the engaged groups, indicating a potential link between game preference and in-game actions, though this preference was clearly not sufficient, as many Conversers (a disengaged group) also enjoyed these games.
The findings reflect known relationships between learning and motivational measures—notably, that students need some prior knowledge in order to show higher levels of interest in the learning domain. In this study, these trends emerge even in situational interest scores, which are designed to capture earlier phases of interest development, before knowledge effects are more likely to appear. These knowledge and motivational trends also correspond with differences visible in the ONA clustering analysis of student behaviors.
This is a small study (N = 122) that showed higher attrition in post-test measures than previous research on Crystal Island, although there was relatively good representation of racial and gender differences at this school. Still, larger samples might reveal nuanced differences in behaviors, including some that were collapsed in this analysis (e.g., those that became part of umbrella codes). For example, differences between rushing a conversation and rushing a reading activity might emerge as more important in a larger sample, and they might require different kinds of interventions to best support students. We might also consider additional codes or operationalizations.
Still, this study offers a new way of investigating students’ behaviors and their relationships to learning and motivational measures. In particular, our findings identify broad differences between the behaviors of disengaged students (Roamers and Conversers) and engaged students (Worksheet Users and Scanners). Our results also show behavioral differences between students who demonstrated improved learning (Worksheet Users) and those who did not (Scanners), even though both groups started with relatively high knowledge and appeared relatively engaged. Specifically, our results suggest that within the engaged groups, students who do not consistently use the worksheet to track their results may not be reflecting enough on their testing to learn the material. This, along with the behavioral trends in the disengaged groups, could help us develop in-game nudges that assist students with important learning activities.
More generally, these results underscore the need to tailor educational game designs to address disengaged or ineffective behaviors. The novel combination of ONA and clustering that we present here offers an opportunity to produce insights into student archetypes. This research could help to equip game developers and educators with the knowledge to enhance the affordances of educational games, ensuring responsiveness to individual and group needs. Future research should delve deeper into the dynamics of such clusters, exploring both the persistence of these behavior patterns over time and the response that these different groups of students might have to potential interventions, such as adjusting gameplay dynamics, incorporating personalized challenges, or providing additional support for disengaged students.
Declarations
Ethical Approval
All study procedures were approved by the Institutional Review Boards of the University of Pennsylvania and North Carolina State University.
Consent to Participate
Written informed consent was obtained from all individual participants and their parents or legal guardians included in the study.
Consent for Publication
The participants and their parents or legal caregivers have consented to the submission of any results that do not include personal identifying information. In this paper, we only present aggregated results that do not share any personal identifying information of the participants.
Competing Interests
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Authors
Andres Felipe Zambrano
Amanda Barany
Jaclyn Ocumpaugh
Nidhi Nasiar
Jessica Vandenberg
Alex Goslen
Jordan Esiason
Jonathan Rowe
Stephen Hutt
References
Arthur, D., & Vassilvitskii, S. (2007). k-means++: The advantages of careful seeding. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, Society for Industrial and Applied Mathematics, 7, 1027–1035.
Bado, N. (2022). Game-based learning pedagogy: A review of the literature. Interactive Learning Environments, 30(5), 936–948.
Baker, R. S., & Ocumpaugh, J. (2014). Interaction-based affect detection in educational software. The Oxford handbook of affective computing, 233–245.
Baker, R. S., Ocumpaugh, J., Gowda, S. M., Kamarainen, A. M., & Metcalf, S. J. (2014). Extending log-based affect detection to a multi-user virtual environment for science. In User Modeling, Adaptation, and Personalization: 22nd International Conference, UMAP 2014, Aalborg, Denmark, July 7–11, 2014. Proceedings 22 (pp. 290–300). Springer International Publishing.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Prentice Hall.
Barany, A., & Foster, A. (2019). Examining identity exploration in a video game participatory culture. In Advances in Quantitative Ethnography: First International Conference, ICQE 2019, Madison, WI, USA, October 20–22, 2019, Proceedings 1 (pp. 3–13). Springer International Publishing.
Bartle, R. (1996). Hearts, clubs, diamonds, spades: Players who suit MUDs. Journal of MUD Research, 1(1), 19.
Bateman, C., & Boon, R. (2005). 21st Century Game Design (game development series). Charles River Media, Inc.
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 289–300.
Bowman, D., Swiecki, Z., Cai, Z., Wang, Y., Eagan, B., Linderoth, J., & Shaffer, D. W. (2021). The mathematical foundations of epistemic network analysis. In Advances in Quantitative Ethnography: Second International Conference, ICQE 2020, Malibu, CA, USA, February 1–3, 2021, Proceedings 2 (pp. 91–105). Springer International Publishing.
Britner, S. L., & Pajares, F. (2006). Sources of science self-efficacy beliefs of middle school students. Journal of Research in Science Teaching: The Official Journal of the National Association for Research in Science Teaching, 43(5), 485–499.
Busch, M., Mattheiss, E. E., Hochleitner, W., Hochleitner, C., Lankes, M., Fröhlich, P., ... & Tscheligi, M. (2016). Using player type models for personalized game design: An empirical investigation. IxD&A, 28, 145–163.
Caprara, G. V., Vecchione, M., Alessandri, G., Gerbino, M., & Barbaranelli, C. (2011). The contribution of personality traits and self-efficacy beliefs to academic achievement: A longitudinal study. British Journal of Educational Psychology, 81(1), 78–96.
Cloude, E. B., Dever, D. A., Wiedbusch, M. D., & Azevedo, R. (2020, November). Quantifying scientific thinking using multichannel data with Crystal Island: Implications for individualized game-learning analytics. In Frontiers in Education (Vol. 5, p. 572546). Frontiers Media SA.
Cömert, Z., & Samur, Y. (2023). A comprehensive player types model: Player head. Interactive Learning Environments, 31(5), 2930–2946.
Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum.
Deci, E. L., & Ryan, R. M. (Eds.) (2002). Handbook of self-determination research. Rochester: University of Rochester.
Deci, E. L., & Ryan, R. M. (2003). Intrinsic motivation inventory. Self-determination theory, 267.
Deci, E. L., & Ryan, R. M. (2007). SDT: Questionnaires: Intrinsic Motivation Inventory (IMI). Retrieved October 27, 2007, from http://www.psych.rochester.edu/SDT/measures/intrins.html
Deci, E. L., & Ryan, R. M. (2012). Self-determination theory. Handbook of Theories of Social Psychology, 1(20), 416–436.
Dede, C., Grotzer, T. A., Kamarainen, A., & Metcalf, S. (2017). EcoXPT: Designing for deeper learning through experimentation in an immersive virtual ecosystem. Journal of Educational Technology & Society, 20(4), 166–178.
Drachen, A., Canossa, A., & Yannakakis, G. N. (2009, September). Player modeling using self-organization in Tomb Raider: Underworld. In 2009 IEEE symposium on computational intelligence and games (pp. 1–8). IEEE.
Foster, A., Shah, M., Barany, A., Petrovich, M. E., Cellitti, J., Duka, M., Siebert-Evenstone, A., Kinley, H., Quigley, P. & Shaffer, D. W. (2018). Virtual learning environments for promoting self transformation: Iterative design and implementation of Philadelphia land science. In Immersive Learning Research Network: 4th International Conference, iLRN 2018, Missoula, MT, USA, June 24–29, 2018, Proceedings 4 (pp. 3–22). Springer International Publishing.
Gagnon, D. J., & Swanson, L. (2023). Open Game Data: A technical infrastructure for open science with educational games. Joint International Conference on Serious Games (pp. 3–19). Cham.
Gaydos, M., & Squire, K. (2010). Citizen science: Designing a game for the 21st century. In Interdisciplinary models and tools for serious games: Emerging concepts and future directions (pp. 289–305). IGI Global.
Gee, J. P. (2017). Affinity spaces and 21st century learning. Educational Technology, 27–31.
Hidi, S., & Renninger, K. A. (2006). The four-phase model of interest development. Educational Psychologist, 41(2), 111–127.
Jain, A. K., Murty, M. N., & Flynn, P. J. (1999). Data clustering: A review. ACM Computing Surveys (CSUR), 31(3), 264–323.
Karumbaiah, S., Baker, R. S., Barany, A., & Shute, V. (2019). Using epistemic networks with automated codes to understand why players quit levels in a learning game. In Advances in Quantitative Ethnography: First International Conference, ICQE 2019, Madison, WI, USA, October 20–22, 2019, Proceedings 1 (pp. 106–116). Springer International Publishing.
Kaufman, L., & Rousseeuw, P. J. (2009). Finding groups in data: An introduction to cluster analysis. John Wiley & Sons.
Kiili, K., Lindstedt, A., Koskinen, A., Halme, H., Ninaus, M., & McMullen, J. (2021). Flow experience and situational interest in game-based learning: Cousins or identical twins. International Journal of Serious Games, 8(3), 93–114.
Kim, Y. J., Metcalf, S. J., Scianna, J., Perez, G., & Gagnon, D. (2022). Aqualab: Establishing validity of an adventure game for middle school science. In Iyer, S. et al. (Eds.), Proceedings of the 30th International Conference on Computers in Education. Asia-Pacific Society for Computers in Education.
Kim, Y. J., & Shute, V. J. (2015). The interplay of game elements with psychometric qualities, learning, and enjoyment in game-based assessment. Computers & Education, 87, 340–356.
Knogler, M., Harackiewicz, J. M., Gegenfurtner, A., & Lewalter, D. (2015). How situational is situational interest? Investigating the longitudinal structure of situational interest. Contemporary Educational Psychology, 43, 39–50.
Laine, T. H., & Lindberg, R. S. (2020). Designing engaging games for education: A systematic literature review on game motivators and design principles. IEEE Transactions on Learning Technologies, 13(4), 804–821.
Lester, J. C., Rowe, J. P., & Mott, B. W. (2012). Narrative-centered learning environments: A story-centric approach to educational games. Emerging technologies for the classroom: A learning sciences perspective (pp. 223–237). New York, NY.
Linnenbrink-Garcia, L., Durik, A. M., Conley, A. M., Barron, K. E., Tauer, J. M., Karabenick, S. A., & Harackiewicz, J. M. (2010). Measuring situational interest in academic domains. Educational and Psychological Measurement, 70(4), 647–671.
Liu, X., Hussein, B., Barany, A., Baker, R. S., & Chen, B. (2023a, October). Decoding player behavior: Analyzing reasons for player quitting using log data from puzzle game Baba is You. In International Conference on Quantitative Ethnography (pp. 34–48). Cham: Springer Nature Switzerland.
Liu, X., Slater, S., Andres, J. M. A. L., Swanson, L., Scianna, J., Gagnon, D., & Baker, R. S. (2023b, October). Struggling to detect struggle in students playing a science exploration game. In Companion Proceedings of the Annual Symposium on Computer-Human Interaction in Play (pp. 83–88).
MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability,1(14), 281–297.
Marquart, C. L., Swiecki, Z., Collier, W., Eagan, B., Woodward, R., & Shaffer, D. W. (2019). rENA: Epistemic network analysis. https://CRAN.R-project.org/package=rENA
Marquart, C. L., Hinojosa, C., Swiecki, Z., Eagan, B., & Shaffer, D. W. (2021). Epistemic network analysis (Version 1.7.0) [Software]. Available from http://app.epistemicnetwork.org
Nacke, L. E., Bateman, C., & Mandryk, R. L. (2014). BrainHex: A neurobiological gamer typology survey. Entertainment Computing, 5(1), 55–62.
National Research Council. (2013). Next Generation Science Standards: For States, By States. The National Academies Press. https://doi.org/10.17226/18290
Paquette, L., Grant, T., Zhang, Y., Biswas, G., & Baker, R. (2021). Using epistemic networks to analyze self-regulated learning in an open-ended problem-solving environment. In Advances in Quantitative Ethnography: Second International Conference, ICQE 2020, Malibu, CA, USA, February 1–3, 2021, Proceedings 2 (pp. 185–201). Springer International Publishing.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... & Duchesnay, É. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.
Porter, S. R., Whitcomb, M. E., & Weitzer, W. H. (2004). Multiple surveys of students and survey fatigue. New Directions for Institutional Research, 2004(121), 63–73.
Pribbenow, C. M., Caldwell, K. E. H., Dantzler, D. D., Brown, P., Jr., & Carnes, M. (2021). Decreasing racial bias through a facilitated game and workshop: The case of fair play. Simulation & Gaming, 52(3), 386–402.
Renninger, K. A., & Hidi, S. (2011). Revisiting the conceptualization, measurement, and generation of interest. Educational Psychologist, 46(3), 168–184.
Rowe, J. P., Shores, L. R., Mott, B. W., & Lester, J. C. (2011). Integrating learning, problem solving, and engagement in narrative-centered learning environments. International Journal of Artificial Intelligence in Education, 21(1–2), 115–133.
Ruis, A., Siebert-Evenstone, A., Pozen, R., Eagan, B., & Shaffer, D. W. (2019). Finding common ground: A method for measuring recent temporal context in analyses of complex, collaborative thinking.
Sawyer, R., Rowe, J., Azevedo, R., & Lester, J. (2018). Filtered time series analyses of student problem-solving behaviors in game-based learning. International Educational Data Mining Society.
Schiefele, U. (1999). Interest and learning from text. Scientific Studies of Reading, 3(3), 257–279.
Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions for future research. Educational Psychology Review, 13, 23–52.
Shaffer, D. W. (2006). Epistemic frames for epistemic games. Computers & Education, 46(3), 223–234.
Shaffer, D. W., Collier, W., & Ruis, A. R. (2016). A tutorial on epistemic network analysis: Analyzing the structure of connections in cognitive, social, and interaction data. Journal of Learning Analytics, 3(3), 9–45.
Siebert-Evenstone, A. L., Irgens, G. A., Collier, W., Swiecki, Z., Ruis, A. R., & Shaffer, D. W. (2017). In search of conversational grain size: Modeling semantic structure using moving stanza windows. Journal of Learning Analytics, 4(3), 123–139.
Slater, S., Bowers, A. J., Kai, S., & Shute, V. (2017). A typology of players in the game physics playground. In DiGRA Conference.
Squire, K. (2012). Designed cultures. Games, learning, and society: Learning and meaning in the digital age, 10–31.
Staaby, T. (2021). Still in another castle. Asking New Questions about Games, Teaching and Learning. Gamevironments, (15).
Swanson, L., Gagnon, D., Scianna, J., Mccloskey, J., Spevecek, N., Slater, S., & Harpsted, E. (2022). Leveraging cluster analysis to understand educational game player styles and support design. In GLS 13.0 Conference Proceedings.
Tan, Y., Ruis, A. R., Marquart, C., Cai, Z., Knowles, M., & Shaffer, D. W. (2022). Ordered network analysis. In C. Damşa & A. Barany (Eds.), Advances in Quantitative Ethnography: Fourth International Conference, ICQE 2022. Springer International Publishing. https://doi.org/10.1007/978-3-031-31726-2_8
Tondello, G. F., Wehbe, R. R., Orji, R., Ribeiro, G., & Nacke, L. E. (2017, October). A framework and taxonomy of videogame playing preferences. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (pp. 329–340).
Tyack, A., & Mekler, E. D. (2020, April). Self-determination theory in HCI games research: Current uses and open questions. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1–22).
Verleysen, M., & François, D. (2005, June). The curse of dimensionality in data mining and time series prediction. In International work-conference on artificial neural networks (pp. 758–770). Berlin, Heidelberg: Springer Berlin Heidelberg.
Young, M. F., Slota, S., Cutter, A. B., Jalette, G., Mullin, G., Lai, B., ... & Yukhymenko, M. (2012). Our princess is in another castle: A review of trends in serious gaming for education. Review of educational research, 82(1), 61–89.
Zambrano, A. F., Barany, A., Ocumpaugh, J., Nasiar, N., Hutt, S., Goslen, A., ... & Mott, B. (2023). Cracking the code of learning gains: Using ordered network analysis to understand the influence of prior knowledge. In International Conference on Quantitative Ethnography (pp. 18–33). Cham: Springer Nature Switzerland.
Zysberg, L., & Schwabsky, N. (2021). School climate, academic self-efficacy and student achievement. Educational Psychology, 41(4), 467–482.