Community knowledge assessment in a knowledge building environment
Introduction
According to Stahl (2006), group cognition requires a theory of collaboration that takes the group, rather than the individual, as the unit of analysis. ‘Learning by groups’ is not the same as ‘learning in groups’ or individual learning through social processes. This focus on the group reflects a larger societal interest in knowledge creation and knowledge building. Knowledge Building is a group phenomenon, even when contributions come from identifiable individuals: it is the production of public knowledge of value to a community (Scardamalia & Bereiter, 2003). The community may be a research or design group, the world at large, or a group of learners in a class; in the latter case it is important to distinguish individual learning from the group's knowledge-building accomplishments. Neither can be reliably inferred from the other, and the interaction between the two is vital. Knowledge Building is defined by a set of knowledge-building principles that represent design challenges, ideals, and objects of continual improvement (Scardamalia & Bereiter, 2006). The principle of “community knowledge, collective responsibility” places value on contributions to the shared, top-level goals of the community, which are prized and rewarded over individual achievements (Scardamalia, 2004; Zhang et al., 2009). Despite the importance of community knowledge and collaborative learning, the question of “How is community knowledge measured?” has remained largely unexplored (Hong, Teplovs, & Chai, 2007).
To understand the nature of group or community knowledge, it is important to distinguish between a psychological concept of knowledge as something within an individual mind and a social concept of knowledge as conceptual artifacts that have a public life (Bereiter, 2002; Bereiter & Scardamalia, 1996; Hyman, 1999; Popper, 1972). Community knowledge is public knowledge: ideas made accessible to all community members through contributions to collective knowledge spaces. Ideas thus have a life beyond the individual mind and can be continually accessed and improved, much as is the case with publication in journals. Community knowledge involves a dynamic process of interaction between ideas and “people knowledge” (i.e., knowing people's expertise), with participants monitoring who is working on what ideas or problems and advancing knowledge in the community (Hong & Lin-Siegler, 2012). To clarify community knowledge, Scardamalia and Bereiter (2003) distinguish between the process through which the cultural capital of society is internalized or learned by individuals and the deliberate effort to improve community knowledge, the cultural or intellectual capital of the community itself.
A common way to assess group or community knowledge, despite its time-consuming nature, is to assess group products (Bourner et al., 2001; Cheng & Warren, 2000; Lee et al., 2006; Lejk & Wyvill, 2001). These products can be a project report, a business plan, a technology tool, a design artifact, and so forth. While such outputs represent the most common way of assessing group work, there are several problems: not all group learning results in a group product, and not all group products signify contributions and understanding by all members. In many online learning environments especially, the main learning activity tends to be online discussion; the “group product” is the online activity log and the content of individual contributions to discussions. In an effort to analyze group learning, researchers have employed peer assessment of community knowledge (Cheng & Warren, 2000; Lejk & Wyvill, 2001). For example, in a study by Lee et al. (2006), students were asked to identify important collaborative knowledge contributions to a community by collaboratively assessing one another's online notes in the discourse. A systematic review of empirical studies of peer assessment by Topping (1998) suggested that peer assessment is a reliable and valid approach. However, like the assessment of group products, peer assessment can be very time-consuming and reflects only one facet of community knowledge. Assessment tools that could detect changes in group achievements and support users in advancing those achievements would be more convenient and would advance our understanding of community knowledge.
One important advance in the technological design of online learning environments has been automated assessment. Most online environments (e.g., Blackboard) provide automated measures such as the average number of notes created, notes read, notes revised, words per note, and so forth. These measures can be extremely useful in capturing the dynamics of online activity in a community and can readily be used on demand by teachers and learners. They are mainly designed, however, to capture individuals' behavioral patterns (e.g., reading or writing patterns) and/or group interaction patterns, not the content knowledge the community is working with. To address this issue, content analysis is used to analyze notes and the related content knowledge embedded in the database. This complements behavioral indicators but brings with it problems of its own.
A commonly used unit of content analysis has been the theme. Themes can be either pre-specified according to an existing protocol or emergent from an open-coding process (Strauss & Corbin, 1990). First, either way there is an issue of subjectivity in theme selection. Second, content analysis has been used more often by researchers as a research tool than by users as an interaction or learning tool for assessing the content of a database. Particularly for teachers (and for students as well), it is a difficult and time-consuming technique to apply, which limits its use and its usefulness for advancing the state of community knowledge. For a detailed account of content analysis, including its lack of coherence in theoretical base and choice of unit of analysis, see the review by De Wever, Schellens, Valcke, and Van Keer (2006). A third issue is that it remains unclear what constitutes effective content analysis for assessing community knowledge. In part, this is because community knowledge is a complex construct represented not simply in individual members' personal contributions but also in their interactions and collective understanding.
Use of key-term measures may address a number of the issues raised above and open up easier forms of assessment with greater potential for providing helpful feedback to users. First, as a fundamental unit of content analysis, key terms (or words) represent a fairly objective unit of analysis. Second, with current technology it is possible to extract key terms from a database easily (by comparing terms in the database against those in a pre-selected dictionary or in curricular materials) and to create related measures for automated assessment. Third, key terms may be used to capture and visually represent the content knowledge recorded in a community knowledge space and to show relationships between community knowledge and individual contributions. Key terms have been widely used for subject indexing in books, for idea search (e.g., in academic papers or on the Internet), and for visual knowledge representation (e.g., semantic or propositional networks, see Anderson, 2000; knowledge or concept maps, see Novak, 1998; and tag clouds, see Hassan-Montero & Herrero-Solana, 2006). Moreover, by examining the extent to which key terms are shared between individual members, it is possible to distinguish more frequently used concepts and ideas from those not shared by community members.
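The shared-key-term idea can be sketched in a few lines of Python. The member names and term sets below are hypothetical illustrations, not data from the study; a real system would first extract terms from notes (e.g., by matching against a dictionary or curriculum vocabulary):

```python
# Sketch of a shared-key-term measure: a term counts as "shared" when it
# appears in the notes of at least two community members.
# Hypothetical data; a real system would extract terms from the database.
from collections import Counter

member_terms = {
    "ann":   {"photosynthesis", "chlorophyll", "energy", "light"},
    "ben":   {"photosynthesis", "energy", "glucose"},
    "chris": {"photosynthesis", "light", "glucose", "stomata"},
}

# Count in how many members' term sets each term appears.
counts = Counter(t for terms in member_terms.values() for t in terms)
shared = {t for t, c in counts.items() if c >= 2}
shared_pct = 100 * len(shared) / len(counts)

print(sorted(shared))        # ['energy', 'glucose', 'light', 'photosynthesis']
print(round(shared_pct, 1))  # 66.7
```

The percentage of shared terms, and how often shared terms are used, can then be tracked over time as a simple indicator of growing common ground in the community.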
It is worth noting that measurement of community knowledge does not mean that assessment of individual knowledge is no longer important. On the contrary, in a knowledge building community individual contributions are essential for advancing group understanding. Also, beliefs underlying assessment, as well as assessment methods, can have a tremendous impact on students' learning experiences and subsequently on what and how they contribute. If assessment is based only on a group outcome or product, it is difficult, if not impossible, to determine what individual students have learned (van Aalst, 2012). Or, if assessment is only focused on a group product, without engaging all students in individual as well as collective knowledge advances, the potential of some students to contribute may be lost, with group knowledge as a whole suffering in consequence. Evaluating both individual contributions and group knowledge can help overcome problems associated with awarding the same grade to all members of a group for collaborative work, while ignoring the unique contribution made by each individual member. Teachers and students should both benefit from more useful assessments of individual and collaborative knowledge advances, especially designs that make it possible to provide feedback as work proceeds rather than simply after the fact.
The present study aimed to improve a key-term-based assessment tool in Knowledge Forum, as part of a sustained effort to advance this platform as an effective, user-friendly knowledge building environment. Knowledge building requires sustained creative work with ideas; accordingly, Knowledge Forum is designed to support high-level knowledge processes and discourse that leads to generation and improvement of ideas. This is in contrast to learning platforms that support simpler forms of discourse and accumulation and organization of conceptual and procedural knowledge rather than knowledge creation. Knowledge Forum supports members of an online community in contributing their ideas in the form of notes that are entered into “views” (online problem-solving spaces for collaborative knowledge work among community members). Knowledge Forum allows users to build on and annotate the notes of other members, as well as co-author notes, generate problems, add keywords, and create higher level rise-above notes to integrate different notes with relevant ideas or concepts. Online knowledge-building activities (e.g., note contributing, note reading, or note revising) are recorded automatically into a database, and can be computed statistically by means of a built-in tool called the Analytic Toolkit (Burtis, 2002).
Design-based research, with technology advances integral to classroom knowledge practices, has characterized Knowledge Forum development (Scardamalia, 2004; Scardamalia & Bereiter, 2010). Building on this effort, the present study aims to advance existing tools by investigating key-term online measures for assessing community knowledge. The key-term tool currently in Knowledge Forum was developed by Teplovs (Teplovs, 2007, 2008) and uses the “Term Extraction” application provided by Yahoo (for details regarding term extraction see http://www.programmableweb.com/api/yahoo-term-extraction). The key-term tool can be used to compare keywords/key terms extracted from different sets of notes in a database, and to identify terms shared or overlapping with any benchmark (e.g., a dictionary or curriculum document) that researchers or community members select. However, research is required to determine how various indicators of change over time relate to personal knowledge growth and community knowledge advances. The behavioral measures available (e.g., number of notes posted and read) are more suitable for assessing interaction patterns than idea improvement, yet idea improvement is essential for knowledge building. The purpose of this study is therefore to explore the possibilities of using key-term measures: (1) to help assess and visually represent both personal and community knowledge, and (2) to complement (not replace) traditional online behavioral measures, in order to better capture knowledge building dynamics.
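The benchmark comparison the key-term tool performs can be illustrated with a minimal sketch. All term sets here are hypothetical (the actual tool extracts terms from Knowledge Forum notes via the Yahoo Term Extraction service):

```python
# Sketch: overlap between key terms found in a community database and a
# benchmark vocabulary (e.g., a curriculum document). Hypothetical data.

def benchmark_overlap(database_terms, benchmark):
    """Fraction of benchmark terms that appear in the database."""
    return len(database_terms & benchmark) / len(benchmark) if benchmark else 0.0

curriculum_vocab = {"force", "mass", "acceleration", "friction", "gravity"}
phase1_terms = {"force", "mass", "speed"}
phase2_terms = {"force", "mass", "acceleration", "friction", "speed"}

print(benchmark_overlap(phase1_terms, curriculum_vocab))  # 0.4
print(benchmark_overlap(phase2_terms, curriculum_vocab))  # 0.8
```

A rising overlap score across phases would suggest that community discourse is increasingly taking up the benchmark concepts, which is one way an indicator of change over time could be read.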
Design and data source
Fig. 1 shows the research design of this study. First, both the conventional measures and the proposed key-term measures were employed to compare two pre-selected databases (details below). Then similarities and differences between proposed key-term measures and conventional online measures were explored to see if the proposed measures capture aspects of knowledge-building not captured with conventional online measures.
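A comparison of conventional behavioral measures between two phases can be sketched as follows. The per-student note counts are hypothetical stand-ins (the study's own figures appear in the results), and only the paired t statistic is computed, using the standard formula:

```python
# Sketch: paired t-test statistic for per-student note counts in two phases.
# The counts below are hypothetical, not the study's data.
import math
from statistics import mean, stdev

phase1 = [12, 9, 14, 11, 10, 13, 8, 15]  # notes per student, phase 1
phase2 = [10, 11, 12, 12, 9, 11, 9, 14]  # same students, phase 2

diffs = [a - b for a, b in zip(phase1, phase2)]
t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
df = len(diffs) - 1

print(f"t = {t:.2f}, df = {df}")  # t = 0.88, df = 7
```

The two-tailed p-value would then come from the t distribution with df degrees of freedom (e.g., via scipy.stats.ttest_rel).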
The main data source was discourse among students in a Knowledge Forum
Comparisons using conventional online behavioral measures
A number of traditional online measures were used to compare the two datasets for this study. A baseline for subsequent work was first established: a t-test showed no significant difference between the two phases or datasets in the total number of notes contributed (t = 1.89, df = 21, p > .05, with N = 254, M = 11.55, SD = 4.39 in phase 1, and N = 212, M = 9.64, SD = 3.92 in phase 2). More detailed analyses are shown in Table 2; t-tests indicate that there were significantly more
Summary and conclusion
In summary, findings from the first comparison, using conventional behavioral measures, suggested a change in interaction patterns from more superficial inquiry (in phase 1) to more reflective, deeper inquiry (in phase 2). Findings from the second comparison, using key-term measures, further showed higher percentages of shared key terms and more frequent use of shared key terms in the community knowledge space in phase 2 as compared with phase 1. This change
Acknowledgments
Support for writing this article was provided, in part, by a Social Sciences and Humanities Research Council of Canada grant (512-2002-1016) and a grant from Taiwan's National Science Council (NSC # 101-2628-S-004-001-MY3).
References

- De Wever, Schellens, Valcke, & Van Keer (2006). Content analysis schemes to analyze transcripts of online asynchronous discussion groups: A review. Computers & Education.
- van Aalst (2012). Assessment in collaborative learning.
- Anderson (2000). Cognitive psychology and its implications.
- Bereiter (2002). Education and mind in the knowledge age.
- Bereiter & Scardamalia (1996). Rethinking learning.
- Bourner et al. (2001). First-year undergraduate experiences of group project work. Assessment & Evaluation in Higher Education.
- Burtis (2002). Analytic Toolkit for Knowledge Forum (Version 4.0).
- Cheng & Warren (2000). Making a difference: Using peers to assess individual students' contributions to a group project. Teaching in Higher Education.
- Students' intuitive understanding of promisingness and promisingness judgments to facilitate knowledge advancement.
- (1998). Assessing the similarity of distributions: Finite sample performance of the empirical Mallows distance. Journal of Statistical Computation and Simulation.
- Hassan-Montero & Herrero-Solana (2006). Improving tag-clouds as visual information retrieval interfaces.
- Beyond group collaboration: Facilitating an idea-centered view of collaboration through knowledge building in a science class of fifth-graders. The Asia-Pacific Education Researcher.
- Teacher-education students' views about knowledge building theory and practice. Instructional Science.
- Hong & Lin-Siegler (2012). How learning about scientists' struggles influences students' interest and learning in physics. Journal of Educational Psychology.
- Principle-based design to foster adaptive use of technology for building community knowledge.
- On community knowledge.
- Hyman (1999). How knowledge works. The Philosophical Quarterly.
- Lee et al. (2006). Students assessing their own collaborative knowledge building. International Journal of Computer-Supported Collaborative Learning.