Article

Developing and Comparing Indices to Evaluate Community Knowledge Building in an Educational Research Course

by
Calixto Gutiérrez-Braojos
1,*,
Linda Daniela
2,
Jesús Montejo-Gámez
3 and
Francisco Aliaga
4
1
Department of Educational Research Methods, Assessment and Evaluation, University of Granada, 18071 Granada, Spain
2
Faculty of Education, Psychology and Art, University of Latvia, LV1083 Riga, Latvia
3
Department of Didactics of the Mathematics, University of Granada, 18071 Granada, Spain
4
Department of Educational Research Methods, Assessment and Evaluation, University of Valencia, 46010 Valencia, Spain
*
Author to whom correspondence should be addressed.
Sustainability 2022, 14(17), 10603; https://0-doi-org.brum.beds.ac.uk/10.3390/su141710603
Submission received: 2 August 2022 / Revised: 16 August 2022 / Accepted: 20 August 2022 / Published: 25 August 2022
(This article belongs to the Special Issue Digital Technologies for Sustainable Education)

Abstract:
This paper implements a novel approach to analyzing the degree of Collective Cognitive Responsibility (CCR) in a Knowledge Building community, based on econometric and scientometric measures. After engaging in Knowledge Forum (KF) discussions for one semester, 36 students identified impactful ideas in their portfolios, which were then used to derive impact scores. These scores were then transformed and summarized through the Lorenz curve and the Gini coefficient to visualize the degree of equidistribution of recognition in the community and, by extension, the degree of collective responsibility shared by its members. Additionally, students were classified into member roles based on the impact of their contributions, and we explored the flow of member roles across several discussion topics, drawing on Price’s model of scientific production. Our results show convergence between peers’ and teacher’s ratings of impactful contributions, both of which point to medium levels of collective responsibility in the community. In short, on the one hand, this procedure proves sensitive enough to detect communities that fail to comply with the CCR principle. On the other hand, we discuss the necessity of reflective evaluation to address the pedagogical challenge of fostering collective responsibility for knowledge advancement and empowering novice students to take charge of their knowledge work at the highest levels.

1. Introduction

Knowledge creation, utilization, and dissemination are the pillars of modern knowledge societies (Drucker, 1993; Nonaka and Takeuchi, 1995) [1,2]. Thus, the capacity of the citizenry to engage in creative knowledge work is both a societal and an educational priority (OECD, 2018; UNESCO, 2015) [3,4]. In recent years, there has been growing recognition among educational experts and learning scientists that education needs to prepare learners of all ages to be not only acquirers of knowledge, but also creators of knowledge (e.g., Bereiter and Scardamalia, 2018; Hargreaves, 1999; Jugembayeva, 2022; Santos-Rego, et al., 2020; Tan, So and Teo, 2014) [5,6,7,8,9]. Thus, education for collective knowledge creation is a priority for knowledge societies (Gutiérrez-Braojos et al., 2019; Lee and Jin, 2019) [10,11,12]. However, many university courses still implement lecture-style teaching approaches (with teachers being the holders of expert knowledge and students the acquirers of that knowledge). Hakkinen and Hammalainen (2012) [13], based on Wegerif’s (2006) [14] ideas, claim that some of the assumptions underlying current pedagogies are inherited from the industrial age and exclusively pursue the development of individual skills. As a result, students are not exposed to knowledge creation until they enter working life. Because there is an increasing demand for workers who are able to work with knowledge, students should be provided with educational experiences of knowledge creation and innovation, in order to prepare them to be active contributors to the knowledge society.

1.1. Bringing Knowledge Building into University Classrooms

Knowledge Building (KB) is an educational approach that involves design thinking (Bereiter and Scardamalia, 2003; 2018) [5,15], i.e., the creation of knowledge. In fact, this ability to create has come to be considered the highest level of the cognitive domain in the main models of instructional design, e.g., the revision of Bloom’s taxonomy (Bloom et al., 1956) [16] by Anderson and Krathwohl (2001) [17]. In this context, creation of knowledge entails the design and improvement of intellectual artifacts such as theories, explanations, and proofs. Ideas are considered intellectual artifacts of the community, as they reside in the community’s discourse rather than in people’s minds (Van Aalst, 2012) [18]. In this way, Knowledge Building pedagogy moves ideas into the center of the educational process through learning tasks designed to discuss real knowledge problems (Scardamalia and Bereiter, 2006) [19]. Simply defined, Knowledge Building is the deliberate process of creating knowledge that is valuable to the community. Knowledge Building research has been conducted at all educational levels and across disciplines (e.g., Chen and Hong, 2016; Bereiter and Scardamalia, 2010; Ellis, et al., 2011; Hong and Scardamalia, 2014; Lax, et al., 2006) [20,21,22,23].
Although it is not a necessary condition for carrying out Knowledge Building pedagogy, KB practitioners often use a technological artefact to provide a community space, which expands opportunities for collaboration among members beyond the classroom and without time constraints. The platform most widely used by KB practitioners is Knowledge Forum, a multimedia community knowledge space designed by Scardamalia (2004) [24] to facilitate the shared construction of knowledge. In KF, participants share notes, perspectives, theories, evidence, or resources that form part of the constructive discourse of knowledge. The software provides space to upload these contributions and access them at any time. In addition, the software makes it possible to visualize the way contributions are linked to each other, showing the development of the constructive discourse. KF also provides a tool for selecting promising ideas, as well as several tools to evaluate participation in the construction of knowledge, identifying the number of notes built by members, the number of readings per member, progress in their vocabulary throughout the course, build-ons among members, and readings between members. Learning supported by such platforms has proven to be valuable for students (Daniela, Rūdolfa, and Rubene, 2021; Daniela and Rūdolfa, 2019) [25,26].
During the implementation of Knowledge Building, students establish the frontiers of their collective understanding through collaborative, constructive discourse supported by the KF platform (Bereiter and Scardamalia, 2018; Hong et al., 2016) [5,27]. They formulate problems and pertinent knowledge objectives, investigate these problems, generate, critique, and improve shared ideas, and evaluate their progress as a knowledge community (Cacciamani, et al., 2012; Yucel et al., 2016) [28,29]. Thus, students are not simply taught to assimilate the expert knowledge conveyed by the teacher and reproduce it on a test, as in lecture-style classrooms; instead, they are engaged in authentic knowledge work with their own ideas and those of their peers, relative to authoritative sources (Hong et al., 2011; Yang et al., 2016) [30,31]. As a result, the role of the teacher in the Knowledge Building classroom shifts toward fostering community norms of engagement that promote collective responsibility for the creation and improvement of knowledge, i.e., the responsibility for generating better versions of knowledge is distributed among all members (Scardamalia, 2002) [32].
Many KB studies recognize the importance of the Collective Cognitive Responsibility principle in KB communities (e.g., Cacciamani, et al., 2021; Gutiérrez-Braojos and Salmeron, 2015; Gutiérrez-Braojos et al., 2019a; Ma et al., 2016; Yang et al., 2021) [10,33,34,35,36]. Analyzing CCR is therefore important in KB; indeed, CCR can be understood as a desired horizon for KB communities. Some researchers have found highly centralized student networks in Knowledge Building communities across different educational contexts and academic disciplines (Lax et al., 2016; Mylläri, et al., 2010) [37]. In these cases, a group of more knowledgeable students makes up the core of the community, leaving less knowledgeable students few opportunities to lead the class discussion. It is possible that some students new to KB present maladaptive learning strategies (for example, procrastination; see Monroe and González-Geraldo, 2022) [38] that become evident in active pedagogies such as KB. These students would delegate the cognitive load to others, generating a centralized community (Gutiérrez-Braojos et al., 2018; Gutiérrez-Braojos et al., 2019a) [10,39].

1.2. Assessment and Measures of Collective Cognitive Responsibility in Knowledge Building Communities

Evaluating Collective Cognitive Responsibility is particularly important during educational implementations of KB, since CCR is one of its key pedagogical principles (see Scardamalia, 2002) [32]. KB aims to generate a knowledge product that is distributed as equitably as possible among all members, in the sense that all students, and not just the most advantaged, participate in, build, and master knowledge. Thus, it is defensible to say that this pedagogy is a boost for educational quality at any educational level.
Therefore, in order to know how well KB is being implemented, as well as to make it easier for students to become aware of their gaps and progress through reflective evaluations (Herman, 1992; Xie and Sharma, 2008; Yang et al., 2016) [31,40,41], it is necessary to explore how to measure Collective Cognitive Responsibility and how to make this information accessible to students (Cacciamani, et al., 2021; Changchong, et al., 2020; Oshima, et al., 2018) [33,42,43]. Offering students opportunities for reflective assessment allows them to reorient their strategies towards collective and individual achievement. Recent work in educational contexts similar to this study has found that students demand reflective assessments (Diez-Gutiérrez and Gajardo, 2021) [44].
Some authors have addressed how to measure Collective Cognitive Responsibility. Ma, Matsuzawa, and Scardamalia (2016) [35] proposed a diachronic approach to evaluate CCR in virtual environments, supported by the KBDeX tool and Social Network Analysis (SNA) (see Oshima, et al., 2012; Oshima et al., 2021) [45,46]. Based on Collaborative Innovation Networks (Gloor, 2005, 2006) [47,48], the authors assume that collective cognitive responsibility is connected to a rotation of leadership over time. In this approach, leadership is associated with discussions initiated by a member who introduces a new idea or connects several ideas of other members, which, in turn, facilitates the advancement of community knowledge. To analyze this rotating leadership, Ma et al. (2016) [35] established a sequential analysis. In the first stage, the units of analysis are the keywords that are relevant to the knowledge domain. A community member thus acquires leadership whenever he/she makes a contribution containing a combination of keywords that generates subsequent discussions among the rest of the members. In a second stage, an analysis of the content of the contributions is carried out before and after the leadership emerges. The authors found leadership diversity, suggesting that students can become leaders during KB. Along similar lines, other researchers have drawn on SNA to understand CCR (e.g., Yamada, et al., 2019) [49].
We take advantage of the “promising ideas tool” (Chen et al., 2015) [50] to analyze Collective Cognitive Responsibility. Chen et al. (2015) [50] developed this tool within the KF platform to help the community identify the promising ideas generated in virtual communities. To do so, each student selects the ideas that seem promising according to criteria agreed upon in class. The authors applied a content analysis to understand the value of promising ideas from the community perspective, and they found that promising ideas are connected to the advancement of knowledge in a community. In other words, members who contributed promising ideas achieved significantly greater knowledge advances than members who did not.
As commented above, Chen et al. (2015) [50] used the promising idea construct to identify those who achieved knowledge advances in the community. Likewise, Ma et al. (2016) [35] identified leadership, and thus Collective Cognitive Responsibility, by measuring the recognition received by members in the community. Gutiérrez-Braojos et al. (2018, 2019a) [10,39] brought these ideas together to measure members’ capacity to generate promising ideas through the recognition they receive from their peers, and to use the measurements obtained to assess CCR. This can be put into practice by exploiting the analogy between a Knowledge Building community and a scientific community, where the contributions in the KF platform play the role of research papers. Under this analogy, an individual’s capacity to generate promising ideas according to the community is quantified by adapting the notion of impact from Scientometrics. In this scope, the “impact” of a contribution can be understood as the level of consensus within the community about the potential value of that contribution for achieving knowledge advances, whereas the impact of each individual aggregates the impact of all his/her contributions. Based on the assumption that the more equitably impact is distributed among members, the higher the level of CCR in the community, Gutiérrez-Braojos and colleagues borrowed the ideas of Lorenz (1905) [51] and Gini (1912) [52] from economics to analyze and quantify how recognition is distributed, and thus to estimate CCR across different topics of discussion (over time). Likewise, relying on ideas from Scientometrics (Price, 1986) [53], they observed how recognition flowed through the discussions in the community.
Based on these analyses, they explored each individual’s responsibility in the creation of knowledge, the collective cognitive responsibility, and the value of the more recognized contributions for collective or community knowledge.
This study builds on this last proposal to analyze Collective Cognitive Responsibility. Despite its potential to evaluate a KB community according to its members’ own criteria, this approach raises two questions that pose a challenge for assessing Knowledge Building communities. The first concerns the quantification of impact. The number of citations received by an individual does not, by itself, offer a clear interpretation of that individual’s impact (are 10 citations needed to be impactful? Is it enough to receive just 9, or 5? Why?). It also depends on the topic of discussion, because the most popular topics are expected to generate more citations for all members, so the number of citations needed to be impactful should also be higher for these topics. Hence, a measure of impact that carries an interpretation and enables comparison between topics is needed. The second question concerns the value of the impact measure for achieving reflective assessment of a KB community. When estimating CCR through a measure of impact within the community, it is necessary to take into account some external measure of the quality of the contributions that is comparable with this impact measure. In this way, the external assessment provides feedback to the community that enables testing the agreement between the internal and external points of view, and then supports the development of a reflective assessment.

1.3. Study Objectives

The purpose of this study is to address this assessment challenge by developing a new way to explore the dynamics of community knowledge advancement in higher education, paying special attention to evaluating Collective Cognitive Responsibility. Given that Knowledge Building and knowledge creation are synonymous (Bereiter and Scardamalia, 2018) [5], we look to existing assessments used in real-world knowledge-creating contexts, such as scholarly and scientific communities, as inspiration for our new set of indices to assess collective responsibility for knowledge advancement. Whereas classroom interventions have been developed to help students identify and work with promising ideas during Knowledge Building (Chen, et al., 2015; Lee, et al., 2016) [50,54], follow-up work is interested in identifying impactful builders based on the quality and quantity of contributions recognized by the community. By extension, using econometric measures, we determine the degree of collective responsibility assumed by students in the community knowledge, taking each discussion topic over time as the unit of analysis. Additionally, we identify roles of members in the community and explore the flow of member roles through several discussion topics over the course of a semester. Therefore, we aim to:
Describe indices that quantify the members’ commitment according to internal evaluation (by peers), and the quality of their contributions according to external evaluation (by experts).
Estimate the Collective Cognitive Responsibility according to internal evaluation, and the distribution of the quality of contributions according to external evaluation.
Additionally, it is necessary to identify members’ roles in the community based on their contributions’ impact and explore the flow of member roles across several discussion topics over the course of a semester. Specifically, we aim to:
Explore roles of members in the community based on both their commitment to CCR and the quality of their contributions.
Analyze the temporal evolution of the measurements of commitment and quality of contributions across several discussion topics.

2. Materials and Methods

2.1. Participants and Educational Context

Thirty-six undergraduate students (2 males) enrolled in an educational research course at the University of Granada (Spain) participated in this study. The students were new to KB and KF. The students and teacher divided the course into three main topics of interest, or big challenges:
1st topic: Paradigms of Educational Research (PER). What are the relevant paradigms for students training for a social education degree? Which approach should be adopted to conduct social and educational research?
2nd topic: Processes of Educational Research I (PrI). How should social and educational research be carried out? What study planning, information search, and data gathering techniques will be used?
3rd topic: Processes of Educational Research II (PrII). How should social and educational research be carried out? This topic covered analytic strategies and the communication of results.
Participants worked for 16 weeks through 3 modes of productive activity: small-group collaboration in face-to-face mode, community knowledge work in online mode, and individual learning and reflection. In the face-to-face mode, students worked collaboratively in two sessions each week to solve a set of knowledge problems related to an educational research discussion topic. In the online mode, students used the Knowledge Forum (KF) to discuss ideas in an expansive, collectively organized space that visualized the edge of the community knowledge. The KF platform included several tools to facilitate collaborative Knowledge Building: (i) interaction scaffolds (e.g., “I need to understand”, “A better theory”, “Putting our knowledge together”) in contribution windows to support idea improvement; (ii) navigation tools to browse contributions by scaffold use, authorship, etc., and to create new discussion spaces; and (iii) a menu to analyze and assess individual and group activity within the platform (e.g., reading, writing, and editing behaviors). Finally, in the individual mode, students were encouraged to reflect on their ideas. In particular, they were asked to select those contributions in the KF platform that they considered to contain promising ideas, that is, contributions that were relevant for improving the community knowledge. This selection constituted a specific task in the course, in order to focus students’ reflections on which contributions were genuinely valuable. Students recorded their selection in their portfolios, which were compiled by the teacher and provided to the research group.

2.2. Measurements and Development of Indices for Impactful Members

In this study, we developed indices to identify the impactful members of a KB community according to peer evaluation (i.e., students identifying impactful contributions in the Knowledge Forum discussion) and the quality of their contributions according to expert evaluation (i.e., the teacher identifying impactful ideas in the Knowledge Forum discussion), using the note in the KF platform as the unit of analysis. Below, we first present the index based on peer evaluation, and then the index based on expert evaluation.
Peer evaluation: an index for contribution to the advancement of ideas valued by the community (CAC)
Students’ portfolios were used to identify the total number of impactful contributions in the community. As commented above, each portfolio included a set of Knowledge Forum notes that the student who made the portfolio identified as relevant for improving the community knowledge. Students had to justify the selection of each contribution. A total of 337 contributions were collected through the portfolios, which incorporated the content of the contributions, their authorship, the interaction scaffolds used, and the justification given by peers. Thus, each student was given an impact score based on the overall peer recognition of their contributions. We called this index Contribution to the Advancement of Ideas valued by the Community (CAC):
CAC = 100 × (RMC/TRM),
where RMC represents the sum of mentions received by an individual in the community, and TRM refers to the total mentions received in the community. The CAC value for one student should be interpreted as the percentage of peer recognition in the community devoted to that student. The values of this index are between 0 and 100. It is important to realize that CAC allows us to compare members belonging to different communities, and it also makes it possible to contrast any empirical data with an ideal situation of equidistribution of impact, where the index value per individual should be equal to (100/N), where N stands for the number of members in the community (Gutiérrez-Braojos et al., 2019) [10,11].
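For concreteness, the CAC computation above can be sketched in a few lines of code. This is a minimal illustration under our own naming, not the authors’ actual analysis scripts; the member identifiers and mention counts are invented.

```python
def cac_scores(mentions):
    """Compute the CAC index for each member of a community.

    `mentions` maps a member to the number of times his/her contributions
    were selected as impactful in peers' portfolios (RMC).
    """
    trm = sum(mentions.values())  # TRM: total mentions received in the community
    if trm == 0:
        return {member: 0.0 for member in mentions}
    return {member: 100 * rmc / trm for member, rmc in mentions.items()}

# Four hypothetical members; under perfect equidistribution each CAC would be 100/4 = 25.
scores = cac_scores({"s1": 6, "s2": 3, "s3": 1, "s4": 0})
# s1 accumulates 60% of the community's recognition, far above the ideal 25%.
```

By construction, the CAC values always sum to 100, which is what makes communities (or topics) of different sizes directly comparable.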

2.3. Expert Evaluation: An Index for Contribution to the Advancement of Ideas Valued by Experts (CAE)

Teacher evaluation of the quality of students’ contributions was used as an assessment external to the community. The teacher rated all the contributions in the Knowledge Forum with the Structure of Observed Learning Outcomes taxonomy (SOLO; Biggs and Collis, 1982) [55], which has been widely used to evaluate student learning on virtual platforms, specifically to analyze the accuracy, structural complexity, and originality of the knowledge reflected in the contributions (e.g., Brown, et al., 2006; Holmes, 2005; Schrire, 2005) [56,57,58]. The SOLO taxonomy has five levels of complexity, meta-categorized into two levels. On the one hand, superficial contributions refer to overly simplistic and/or disconnected ideas that do little to advance the community knowledge (pre-structural level). On the other hand, deep-level contributions include relevant ideas that coherently integrate essential aspects of the task requirements (relational level) and contributions that involve generalizations, knowledge transference, and novelty (extended abstract level). The teacher rated 193 of the 377 contributions made in the Knowledge Forum as deep contributions. Thus, each student was given a score based on the overall expert recognition of their contributions. We called this index Contribution to the Advancement of ideas valued by Experts (CAE):
CAE = 100 × DCE/TQC,
where DCE is the number of deep contributions of an individual, as rated by the expert, and TQC represents the total number of deep contributions rated by the expert (i.e., the sum of the DCE values for all the members of the community). This index is constructed by analogy with CAC to facilitate the comparison of peer ratings and expert ratings of impactful contributions in the community. In fact, CAE is interpreted as the proportion of quality contributions the individual has made in a community, according to the criteria of the expert. The limited 0–100 range of this index has advantages analogous to those of the CAC index: it enables not only interpreting the value as a percentage but also comparing two topics, or one topic against the ideal situation of equidistribution of deep contributions (Gutiérrez-Braojos et al., 2019) [10,11].
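Mirroring the CAC sketch, CAE can be computed from the expert’s per-note SOLO ratings. The string labels for the SOLO levels below are illustrative placeholders, not the authors’ coding scheme; only contributions in the deep meta-category count.

```python
# SOLO levels counted as "deep" per the meta-categorization described above;
# the string labels are illustrative, not the authors' actual coding scheme.
DEEP_LEVELS = {"relational", "extended abstract"}

def cae_scores(ratings):
    """Compute the CAE index from the expert's SOLO rating of each KF note.

    `ratings` is a list of (author, solo_level) pairs, one pair per note.
    """
    dce = {}  # deep contributions per author (DCE)
    for author, level in ratings:
        dce.setdefault(author, 0)
        if level in DEEP_LEVELS:
            dce[author] += 1
    tqc = sum(dce.values())  # TQC: total deep contributions in the community
    if tqc == 0:
        return {author: 0.0 for author in dce}
    return {author: 100 * count / tqc for author, count in dce.items()}
```

A member who wrote many notes but none at the relational or extended abstract level receives a CAE of 0, regardless of how often peers cited him/her.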

2.4. Plan of Analysis for Indices of Impactful Members

After developing impact scores for each student based on peer and expert evaluations, we transformed the aforementioned indices to estimate the degree of collective responsibility and explore the flow of member roles within the community over time. Below, we present the plan of analysis: (1) descriptive statistics, (2) calculation of the distribution of impact, (3) role classification based on impact scores, and, finally, (4) temporal dynamics of members’ roles.

Descriptive Statistics

The mean, median, and mode, as well as minimum and maximum values, were calculated to summarize and locate the information offered by the calculated indices. As for dispersion, the coefficient of variation (CV) gives a measure of the relationship between the size of the mean and the variability; through the CV, we compared the variability of the indices. Fisher’s coefficient of asymmetry (g1) was also calculated in order to characterize the shape of the distribution of values for each index.
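The two dispersion and shape statistics mentioned here can be computed directly from the index values; a minimal sketch using only the Python standard library (the sample data in the comments are invented):

```python
import statistics

def cv(values):
    """Coefficient of variation: sample standard deviation divided by the mean."""
    return statistics.stdev(values) / statistics.fmean(values)

def fisher_g1(values):
    """Fisher's moment coefficient of asymmetry, g1 = m3 / m2**1.5,
    where m2 and m3 are the second and third central moments."""
    n = len(values)
    mean = statistics.fmean(values)
    m2 = sum((x - mean) ** 2 for x in values) / n
    m3 = sum((x - mean) ** 3 for x in values) / n
    return m3 / m2 ** 1.5

# A right-skewed sample (many low scores, one high score) yields g1 > 0,
# matching the interpretation of positive asymmetry used in the Results.
```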

2.5. Calculation of Distribution of Impact within a Community

The Lorenz curve, Gini coefficient (G), and medial cumulative value (Ml) were used to quantify the degree of inequality among students regarding the distribution of their scores on the above-mentioned indices. On the one hand, the Lorenz curve is a graphical representation that shows how promising ideas are distributed among the members of the community. This graph plots the percentage of impact accumulated by a group of members in the community (y-axis) against the accumulated percentage of members of the community that said group represents (x-axis). We used this curve to visualize the theoretical ideal of collective responsibility for knowledge advancement, where the impact is equally distributed within the community. Therefore, the closer the Lorenz curve is to a straight line, the closer the community is to the optimal state of Knowledge Building. On the other hand, the Gini coefficient (Gini, 1912, 1921) [52,59] and the medial cumulative value quantify the difference between the Lorenz curve and that straight line. In other words, the Gini value is proportional to the area between the two lines; therefore, lower Gini coefficient values associated with a given index indicate greater equality in the impact distribution. Likewise, the medial cumulative value marks the index value below which 50% of the total impact is accumulated; the closer the value of Ml is to the median, the more equitable the impact distribution. It should be noted that the Lorenz curve and Gini coefficient are usually used to measure and visualize unequal distributions of wealth within a population. Given the tendency toward unequal distributions of impactful contributions between more knowledgeable and less knowledgeable students, we adopted these measures to explore not only the mere presence or absence of collective responsibility, but also the degree of collective responsibility within a Knowledge Building community.
Further explanations concerning the value of these measures for estimating collective responsibility for knowledge advancement can be found in Gutiérrez-Braojos et al. (2019a) [10,11].
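A minimal sketch of how the Lorenz curve points and the Gini coefficient can be obtained from a list of index scores, using a trapezoid approximation of the area under the curve (the function names are ours, not the authors’):

```python
def lorenz_points(scores):
    """Return (cumulative share of members, cumulative share of impact) pairs,
    with members sorted from lowest to highest score."""
    values = sorted(scores)
    total = sum(values)
    n = len(values)
    points = [(0.0, 0.0)]
    cumulative = 0.0
    for i, v in enumerate(values, start=1):
        cumulative += v
        points.append((i / n, cumulative / total))
    return points

def gini(scores):
    """Gini coefficient: twice the area between the equality line and the
    Lorenz curve, computed with the trapezoid rule. 0 means perfect equality."""
    pts = lorenz_points(scores)
    area_under_curve = sum(
        (x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(pts, pts[1:])
    )
    return 1 - 2 * area_under_curve
```

For instance, a four-member community where a single member concentrates all the impact, `gini([0, 0, 0, 1])`, yields 0.75, while equal scores yield 0.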

2.5.1. Role Classification Based on Impact Scores

Members’ roles were identified based on grouped values of the indices. Students were classified into five roles based on peer evaluation (high-impact builders, medium-high-impact builders, medium-impact builders, medium-low-impact builders, and low-impact builders), and into five other roles based on expert evaluation of the cognitive complexity of their contributions: core builders, high-persistence builders, medium-persistence builders, low-persistence builders, and non-persistent builders. This terminology, inspired by Price’s (1986) [53] analysis of publishing flows over a period of time, refers to individuals’ persistence in producing high-quality contributions according to the external evaluation. Table 1 shows the classification criteria for members’ roles. Students whose scores were higher than those of 80% of the community were ranked as “high-impact builders” (according to peers’ recognition) or “core builders” (according to the expert’s evaluation). Students whose scores were equal to 0 were ranked as “low-impact builders” (respectively, “non-persistent builders”). The remaining members were considered “medium-impact builders” (respectively, “persistent builders”). In order to provide greater detail about the distribution of roles in the community, three sub-groups of “medium-impact builders” and “persistent builders” were created.
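The classification rule described above (top role above the 80th percentile, bottom role at zero, and the remainder split into three subgroups) can be sketched as follows. Table 1 gives the exact subgroup criteria; the rank-based tercile split of the middle group used below is our own assumption for illustration.

```python
import statistics

ROLE_LABELS = ("low", "medium-low", "medium", "medium-high", "high")  # impact builders

def classify_roles(scores):
    """Assign each member a role from his/her impact score (CAC or CAE)."""
    p80 = statistics.quantiles(list(scores.values()), n=5)[-1]  # 80th percentile cut
    middle = sorted(v for v in scores.values() if 0 < v <= p80)
    roles = {}
    for member, value in scores.items():
        if value == 0:
            roles[member] = ROLE_LABELS[0]   # no recognized contributions
        elif value > p80:
            roles[member] = ROLE_LABELS[4]   # above 80% of the community
        else:
            # Split the remaining members into three subgroups by rank (assumed).
            rank = middle.index(value) / max(len(middle) - 1, 1)
            roles[member] = (ROLE_LABELS[1] if rank < 1 / 3
                             else ROLE_LABELS[2] if rank < 2 / 3
                             else ROLE_LABELS[3])
    return roles
```

The same function applies to the expert-based index by relabeling the roles (core, high-/medium-/low-persistence, non-persistent).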

2.5.2. Temporal Dynamics of Members’ Roles

Price’s (1986) [53] model of scientific production was used to understand the temporal dynamics of members’ roles across different topics in the community. In examining the authorship patterns of research papers over time, Price proposed two types of authors: continuant authors, who publish every year, and transient authors, who publish only once or a few isolated times (Gutiérrez-Braojos et al., 2019) [10,11]. Typically, the continuant authors constitute the core of a research field. We adopted Price’s model to identify the core group of impactful builders in our KB community. In the sections below, we refer to students who are rated as impactful in every topic of interest in the community as “core impact builders”. Likewise, we refer to students who are impactful in two topics (or several of them, but not all) as “continuant impact builders”. By contrast, “transient impact builders” are those students who are impactful in only one topic. This grouping allows us to analyze the flow of each member’s role within each topic of discussion and to identify whether there were students with consistently high impact levels over time.
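Under this adaptation of Price’s model, the three roles depend only on the number of topics in which a member was rated impactful; a minimal sketch (the member identifiers are invented, and the topic names follow the course described above):

```python
def price_roles(impactful_by_topic):
    """Label members as core / continuant / transient impact builders.

    `impactful_by_topic` maps each discussion topic to the set of members
    rated as impactful in that topic.
    """
    n_topics = len(impactful_by_topic)
    counts = {}  # number of topics in which each member was impactful
    for members in impactful_by_topic.values():
        for member in members:
            counts[member] = counts.get(member, 0) + 1
    return {
        member: "core" if c == n_topics else "continuant" if c > 1 else "transient"
        for member, c in counts.items()
    }

# Three topics as in this course: PER, PrI, and PrII.
roles = price_roles({"PER": {"a", "b"}, "PrI": {"a", "c"}, "PrII": {"a", "b"}})
# "a" is impactful in every topic (core), "b" in two (continuant), "c" in one (transient).
```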

2.5.3. Comparison of Peer and Expert Evaluations

In order to validate peer ratings against expert ratings, we compared the proportion of the community rated as high-impact builders, medium-impact builders, and low-impact builders with the proportions of core, persistent, and non-persistent builders. In other words, percentages of agreement between the peer-based index (CAC) and the expert-based index (CAE) were calculated for each role.
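The agreement calculation can be sketched as follows. The role labels and the peer-to-expert mapping passed in are illustrative assumptions; the paper maps high-impact to core, medium-impact to persistent, and low-impact to non-persistent builders.

```python
def percent_agreement(peer_roles, expert_roles, mapping):
    """Percentage of agreement between peer-based and expert-based roles.

    peer_roles, expert_roles: dicts mapping member -> role label.
    mapping: dict mapping each peer role to the expert role considered
    equivalent (hypothetical correspondence, e.g. "low" -> "non-persistent").
    Returns, per peer role, the share (%) of members whose expert role
    matches, or None if no member holds that peer role.
    """
    out = {}
    for role, counterpart in mapping.items():
        members = [m for m, r in peer_roles.items() if r == role]
        if not members:
            out[role] = None
            continue
        agree = sum(expert_roles[m] == counterpart for m in members)
        out[role] = 100.0 * agree / len(members)
    return out
```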

3. Results

3.1. Descriptive Statistics Associated with CAC and CAE

Table 2 shows the mean, median, and mode, as well as the minimum and maximum values, for the CAC (peer evaluation) and CAE (expert evaluation) indices across the main topics of discussion: (i) Paradigms of Educational Research (PER), (ii) Processes of Educational Research I (PrI), and (iii) Processes of Educational Research II (PrII). For the CAC index (on the left), the means are equal across the three topics of discussion, regardless of the index, but the standard deviations differ. The median increases slightly as the course progresses, while the maximum values decrease slightly. The coefficient of variation (CV) indicates that the data are most homogeneous in topic PrII and least homogeneous in topic PER. The coefficient of asymmetry (g1) is positive in all three topics, so the distribution is asymmetric to the right (i.e., the CAC index has a greater concentration of low values, although this asymmetry diminishes as the course progresses). As for the CAE index (on the right), the means are likewise consistent across the three topics of discussion. However, the medians differ: topic PrI has the highest median value, and topic PER has the lowest. The coefficient of variation is high in all the discussion topics, which may indicate that the scores are not homogeneous, especially in the PER topic. Like the CAC index, the CAE index shows a right-skewed distribution for all three topics, with the skew decreasing as the course progresses.
It is not surprising that the mean values for both indices are equal and do not change with the topic of discussion. Indeed, by their definitions, CAC and CAE are proportion-based, so the mean of either index is exactly 100/N, where N is the number of individuals. Therefore, the mean values of the analyzed indices depend only on the number of individuals in the community; they do not depend on the specific index, nor do they change with time or topic. This property is especially suitable for our interest in investigating collective responsibility, for two reasons. First, it reveals the robustness of proportion-based indices against fluctuations in topics or in the opinions of peers/experts. Second, this mean serves as a reference value for CAC in the ideal case of maximum collective responsibility (i.e., a scenario in which all members receive the same recognition). Consequently, the closer the CAC values are to 100/36 ≈ 2.778, the more collective responsibility is assumed by all the members of the community.
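The 100/N property is easy to verify numerically. The sketch below normalizes raw recognition counts to percentages of the community total, which is how a proportion-based index of this kind behaves; the variable names are illustrative, not the authors'.

```python
import numpy as np

def proportion_index(raw_counts):
    """Normalize raw recognition counts to percentages of the community
    total (a sketch of a proportion-based index such as CAC or CAE)."""
    raw = np.asarray(raw_counts, dtype=float)
    return 100.0 * raw / raw.sum()

# The mean is 100/N no matter how recognition is distributed:
uniform = proportion_index(np.ones(36))       # everyone equally recognized
skewed = proportion_index(np.arange(1, 37))   # highly unequal recognition
```

Both `uniform.mean()` and `skewed.mean()` equal 100/36 ≈ 2.778, which is why the mean alone cannot distinguish an equitable community from an inequitable one, and why the distributional analysis in the next section is needed.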

3.2. Estimation of Collective Responsibility Based on the Calculation of Distribution of Impact within a Community

Figure 1 shows that the peer-based index (CAC) produced similar Lorenz curves for every topic of discussion. The Gini index shows that equidistribution increases slightly as the course progresses. These results reveal that the top 50% of impactful contributions are concentrated in 20% of the students; conversely, the remaining 50% is spread across the other 80% of the students. Although these latter contributions were not considered as impactful, their potential to improve the knowledge in the community should not be ruled out. The expert-based index (CAE) yielded a more unequal distribution in topic PER, followed by topic PrI. Although 50% of the contributions are concentrated in 80% of the students, according to the teacher’s coding based on the SOLO taxonomy, a high percentage of students contributed only superficially.
Table 3 shows the Gini coefficient and the medial cumulative value for both indices across the three topics of discussion. Comparing the CAC and CAE values shows that, once again, CAE produced a slightly less equal distribution of values in every topic. In particular, the largest gap between the two indices was found in the discussions on topic PrII (Processes of Educational Research II), suggesting that this topic was more challenging than the other two.
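The Lorenz curve and Gini coefficient used here follow the standard definitions (Lorenz, 1905 [51]; Gini, 1912 [52]); the snippet below is a generic implementation of those formulas, not the authors' code. The Gini coefficient is computed as one minus twice the (trapezoidal) area under the Lorenz curve, so 0 indicates perfect equidistribution and values near 1 indicate concentration in a few members.

```python
import numpy as np

def lorenz_gini(scores):
    """Return the Lorenz curve points and the Gini coefficient of a
    score vector, using standard formulas.

    The Lorenz curve maps the poorest fraction k/n of members to the
    cumulative share of total score they hold.
    """
    x = np.sort(np.asarray(scores, dtype=float))
    n = x.size
    lorenz = np.insert(np.cumsum(x) / x.sum(), 0, 0.0)
    # trapezoidal area under the Lorenz curve, with uniform step 1/n
    area = (lorenz[:-1] + lorenz[1:]).sum() / (2 * n)
    gini = 1.0 - 2.0 * area
    return lorenz, gini
```

For a perfectly equal community the function returns a Gini of 0; when a single member holds all the recognition it approaches (n − 1)/n, mirroring the inequality the CAE index revealed for topic PER.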

3.3. Exploration of Role Classification Based on Impact Scores

Figure 2 and Figure 3 show the classification of students into roles based on their CAC and CAE scores. With regard to the CAC index, the results show that, in every discussion topic, the biggest group consisted of medium-impact builders, followed by low-impact builders (Figure 2). The percentages of high-impact and medium-high-impact builders were low. These results suggest that, even in the same learning environment, discussing the same topics and working on the same assignments, there are differences in students’ production of impactful contributions.
Likewise, the CAE index shows a high percentage of non-persistent builders, followed by low-persistence builders (Figure 3). By contrast, low percentages of core builders and of high- and medium-persistence builders were found. The comparison of the peer and expert evaluations indicates that the latter yields a higher number of non-persistent builders than the former yields low-impact builders, which is consistent with the teacher’s coding of a high percentage of superficial contributions in the section above. The differences between the two indices (expert and peer evaluation) are less clear for the remaining categories. It is also interesting to note that there were more high-impact builders in the first two topics than in the third.
Figure 4 illustrates the behavior of the different profiles within a discussion on the Knowledge Forum platform. After the students had commented on the characteristics of three research paradigms, they discussed which was the most appropriate for their professional scope. Student 1 (low impact) included information from an authoritative source, but when she gave her opinion about which paradigm was best, she offered an explanation with low cognitive complexity. Student 2 (also low impact) repeated previous information about the paradigms and did not offer her opinion. Student 3 (medium-high impact) clearly summarized previous information about the single paradigm that was, in her opinion, the most appropriate for social educators. Student 4 (high impact) agreed with Student 3 but raised the possibility of using quantitative or qualitative techniques without combining them. Student 5 (high impact) positioned herself in favor of the sociocritical theory and remarked that the action-research paradigm admits several kinds of investigation, some of which could be quantitative and others qualitative. Finally, this learner developed the idea of merging quantitative and qualitative methods according to the needs of the action-research phases in order to answer the research objective. This example illustrates that students did not recognize as impactful those peers who merely repeated information; that is, those who did not contribute to the improvement of previous ideas were no longer of interest to the community. By contrast, students who argued for their opinions, qualified knowledge according to context, or merged ideas in some way emerged as high-impact students.

3.4. Exploration of Temporal Dynamics of Members’ Roles

Figure 5 shows the transition flows of students’ roles across the three discussion topics for the CAC index. It reveals that there were small numbers of continuant high-impact builders (2.8% in the first transition and no continuance in the second transition) and low-impact builders (8.3% or less in both cases). However, there was a high percentage of continuant medium-impact builders (more than 50% in each topic).
Similarly, the CAE index (Figure 6) reveals that core builders made up a minority: there were no continuant core builders in the first transition and a low percentage (2.8%) of core builders from topic PrI to PrII. Again, the highest percentage of continuance was found in the persistent builders group, mainly in the second transition. Nevertheless, percentages of continuant low-impact builders were considerably higher than on the CAC index in every transition. Taken together, these results suggest that the majority of students in this Knowledge Building community who started as medium-impact builders remained in this role.
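The transition flows in Figures 5 and 6 amount to counting role-to-role moves between consecutive topics. A minimal sketch, assuming roles are stored as member-to-role dictionaries (a hypothetical data structure, not the authors' implementation):

```python
from collections import Counter

def transition_counts(roles_before, roles_after):
    """Count role-to-role transitions between two consecutive topics.

    roles_before, roles_after: dicts mapping member -> role label in each
    topic. Only members present in both topics are counted. The result is
    a Counter keyed by (role_before, role_after) pairs, i.e. the raw
    numbers behind a Sankey-style flow diagram.
    """
    return Counter(
        (roles_before[m], roles_after[m])
        for m in roles_before
        if m in roles_after
    )
```

Diagonal entries such as `("medium", "medium")` measure continuance in a role, which is how the percentages of continuant builders reported above would be obtained.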

3.5. Comparison of Peer and Expert Evaluations

Table 4 shows the number of high-impact builders, medium-impact builders, and low-impact builders, based on peer evaluations (grey column), and core, persistent, and non-persistent builders, according to expert evaluations (white column). Students who were identified as low-impact builders by their peers were also considered non-persistent builders by the teacher in 100% of the cases for the PER and PrII discussion topics, whereas agreement was 71% in the case of PrI. However, there was greater disagreement between medium-impact builders and persistent ones. Finally, there were no cases in which the high-impact builders identified by students were also considered core builders by the teacher. This indicates potential discrepancies between the peer and expert evaluation processes. For example, the criteria for rating impactful contributions may not be as similar to the criteria for rating deep versus shallow ones (i.e., the SOLO taxonomy) as we had anticipated.

4. Conclusions and Discussion

In this study, we developed a new approach to examine collective responsibility for knowledge advancement in a Knowledge Building community. Using peer evaluation and expert evaluation, we explored the community dynamics and patterns of students’ engagement across various discussion topics in a university-level education course. More specifically, we created the index of Contribution to the Advancement of ideas valued by the Community (CAC) for peer evaluation, and the index of Contribution to Advancement of ideas valued by Experts (CAE) for expert evaluation. CAC proved to be useful for identifying the rating of a student’s impactful contributions according to his/her peers, whereas CAE was useful for identifying the rating of a student’s high-quality contributions according to his/her teacher. CAC and CAE not only allow the comparison of various discussion topics within a community, but also against the ideal situation of Collective Cognitive Responsibility, thus providing a global perspective of the extent to which a member is engaged in collective responsibility for knowledge advancement in multiple Knowledge Building communities.
CAC and CAE measure different constructs and are complementary indices. In this paper, the added value of CAE is that it quantifies the number of high-quality contributions on a scale comparable with CAC, which enables not only the equidistribution analysis performed with CAC, but also the incorporation of the teacher’s evaluation of what is impactful in the community (Table 4). Indeed, the purpose of CAE is to provide formative assessment (external to the community) that supports the development of reflective assessment, the type of assessment that is specific to Knowledge Building communities. The more similar the results of the CAC- and CAE-related analyses, the more agreement can be assumed between community assessment and teacher assessment, and thus the greater the reflective assessment capacity of the community. This is especially useful in novel KB communities, where students are not yet used to carrying out reflective assessment autonomously.
Additionally, the Lorenz curve and the Gini coefficient were used to estimate the equidistribution of impactful contributions within the community. Based on the two indices used, our results demonstrated a relatively low level of collective responsibility (i.e., high inequality in the distribution of impactful contributions) across all three discussion topics in our course, which reinforces the pedagogical challenge of engaging students in authentic knowledge work in university-level courses, as stated in the literature review. Regarding the patterns of engagement observed in this Knowledge Building community, we classified students into three major roles based on their CAC (high-impact builders, medium-impact builders, and low-impact builders) and CAE (core builders, persistent builders, and non-persistent builders) scores. With this information, we carried out three analytical approaches to Collective Cognitive Responsibility.
First, we carried out an analysis for each discussion topic. The results show that there were very few high-impact builders and core builders, and that the majority of the community comprised medium-impact and persistent builders. Our findings show that high peer recognition is concentrated in a few members in each discussion topic. In fact, other Knowledge Building studies in university courses (e.g., Lax et al., 2006; Mylläri et al., 2010) [23,37] found similar results. These results also appear to replicate the laws of Lotka (1926) [60] and Pareto (1896) [61], which show that a small, prolific group of authors publishes a considerable proportion of the scientific production in a discipline, whereas the majority of authors publish much less.
Interestingly, we found that the discussion topic seemed to have an effect on the production and search for impactful contributions. The third discussion topic, research processes II, had the fewest impactful contributions identified by students. We recognize that this was the most technical topic because it involved data analysis and reporting and writing up results. On the other hand, the first discussion topic, research paradigms, had the fewest number of complex contributions identified by the teacher. We acknowledge that this was the most abstract topic because it involved comparing different research design epistemologies. One possible reason for this discrepancy between the peer and expert ratings is that they had different expectations about the level of mastery needed for each discussion topic. Nevertheless, more technical and/or abstract topics require more coordinated efforts, not only to generate impactful contributions, but also to improve the shared knowledge in the community. Teachers should consider giving students more time and/or additional supports to collaboratively work on difficult problems.
Second, an analysis across the three discussion topics was also carried out. Unlike in Price’s (1986) [53] model of scientific production within a research community, no core authors were found across all the topics. In other words, we found low levels of continuance among high-impact builders and core builders, which means that a few different students led different discussion topics, while the majority of members’ roles remained stable over time. These results match the idea of rotating leadership (Ma et al., 2016) [35] and reveal that rotation of responsibility for the advancement of knowledge can sometimes conceal low Collective Cognitive Responsibility among the rest of the members.
Third, a comparison of peer evaluations and teacher evaluations showed no agreement about which members were the “good” (high impact/core) builders. However, peers’ and experts’ evaluations do agree when classifying the “medium” (medium-impact/persistent) builders and the “worst” (low-impact/non-persistent) builders. By comparing our two indices more closely, we noticed that students tended to grant recognition more generously than the quality ratings assigned by the teachers, suggesting that the teachers had stricter criteria for differentiating between superficial and deep-level contributions. As a result, the peer evaluation index generated a higher number of medium-impact builders than the number of persistent builders obtained with the expert evaluation index. However, differences between the two indices are less apparent in the highest and lowest categories. It should also be noted that, although the contributions of the majority of the students were not considered impactful, their potential to contain promising ideas should not be dismissed. Past research (Chen et al., 2015) [50] indicates that different criteria are used for judging promising ideas and correct ideas, and that students should be equally fluent in applying different criteria to assess each other’s ideas for various purposes during Knowledge Building. In this regard, additional qualitative analyses are needed to understand how students and teachers conceptualize and categorize impactful contributions. Moreover, the subjective experience of being a member of the community can influence students’ recognition, since it is tied to their social relationships. More work is needed to uncover adult students’ perceptions and experiences when engaging in Knowledge Building in university-level courses.
In summary, these results match those of Gutiérrez-Braojos et al. (2018) and Gutiérrez-Braojos et al. (2019) [10,11,39], and the findings seem to indicate that teachers need to implement effective strategies to facilitate collective responsibility for knowledge advancement. There could be several reasons for this finding. Recent studies in similar contexts have found that reflective evaluation drives Collective Cognitive Responsibility and the advancement of collective ideas. Therefore, offering reflective assessment sessions in which students receive information from a CCR analytics tool based on these indices could help develop a more cohesive idea-building community. It is also possible that the students are not familiar with purely constructivist pedagogies, leading them to hold a fragmented conception of knowledge building and to use surface approaches (Tsai et al., 2017) [62]. Therefore, more time might be needed to assimilate Collective Cognitive Responsibility (Zhang et al., 2009) [63]. It is also possible that the topics were too easy or too difficult for the students, thus affecting their level of interest and engagement. Another possibility is that the task was not authentic enough to engage students in sustained discussions over time. Additional reasons for students’ adoption of a passive stance in collaborative online discussions (such as “ghosts”, “lurkers”, and “free riders”) are elaborated in Strijbos and De Laat’s (2010) [64] review of participation roles. The pedagogical challenge is to empower students to engage in meaningful, constructive interactions online without pigeonholing them into one fixed role. Teachers should work with students to identify complex knowledge problems and design challenges to solve together.

5. Study Limitations and Future Directions

Fostering collective responsibility for knowledge advancement over the span of several weeks is a great challenge for teachers. Nonetheless, we find it noteworthy that our findings parallel those obtained in a previous study that employed a similar methodology with another sample of university students (Gutiérrez-Braojos et al., 2019) [10,11]. These studies point to the potential usefulness of the indices within a Knowledge Building community. Thus, the Knowledge Building challenge remains: idea generation is easy, whereas idea improvement is hard (Scardamalia and Bereiter, 2006) [19]. Teachers need to provide scaffolding for students to remain on a trajectory of continual idea improvement. We recommend that future studies apply our indices of collective responsibility for knowledge advancement in other Knowledge Building contexts to test the power of our methodology.
To conclude, we propose that student portfolios and peer ratings are one way to identify impactful contributions to community knowledge and, by extension, to estimate the degree of collective responsibility within a Knowledge Building community. To truly test our proposal, we are working toward developing a new Knowledge Forum analytic tool that provides embedded, transformative assessment for students and teachers in order to make their Knowledge Building processes visible, including the productive engagement patterns that lead to an equitable distribution of recognition within the community. In preparing students to become future knowledge workers, teachers need to empower students to take charge of their own learning and that of other community members at the highest levels (Scardamalia and Bereiter, 1994) [65]. Making peer feedback and peer recognition an integral part of the Knowledge Building process can only catapult students toward assuming collective responsibility for knowledge advancement.

Author Contributions

All authors participated in every part of the research. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Innovation-State Research Agency under Grant number PID 2020-116872-RA-100.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of the University of Granada (no. 2900/CEIH/2022; 9 June 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Drucker, P. The Post-Capitalism Society; Harper and Row Publishers: New York, NY, USA, 1993. [Google Scholar]
  2. Nonaka, I.; Takeuchi, H. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation; Oxford University Press: New York, NY, USA, 1995. [Google Scholar]
  3. OECD. The Future of Education and Skills. Education 2030. The Future We Want. 2018. Available online: https://www.oecd.org/education/2030/E2030%20Position%20Paper%20(05.04.2018).pdf (accessed on 1 August 2022).
  4. UNESCO. Repensar la Educación: ¿Hacia un Bien Común Mundial? 2015. Available online: http://unesdoc.unesco.org/images/0023/002325/232555e.pdf (accessed on 1 August 2022).
  5. Bereiter, C.; Scardamalia, M. Fixing Humpty-Dumpty: Putting higher-order skills and knowledge together again. In Theory of Teaching Thinking: International Perspectives; Kerslake, L., Wegerif, R., Eds.; Routledge: London, UK, 2018; pp. 72–87. [Google Scholar]
  6. Hargreaves, D.H. The Knowledge-Creating School. Br. J. Educ. Stud. 1999, 47, 122–144. [Google Scholar] [CrossRef]
  7. Jugembayeva, B.; Murzagaliyeva, A.; Revalde, G. Pedagogical Model for Raising Students’ Readiness for the Transition to University 4.0. Sustainability 2022, 14, 8970. [Google Scholar] [CrossRef]
  8. Santos Rego, M.Á.; Lorenzo Moledo, M.; Godás Otero, A.; Sotelino Losada, A. Aprendizaje cooperativo, autoimagen y percepción del ambiente de aprendizaje en educación secundaria. Bordón. Rev. Pedagog. 2020, 72, 117–132. [Google Scholar] [CrossRef]
  9. Tan, S.C.; So, H.J.; Yeo, J. Knowledge Creation in Education; Springer: Singapore, 2014. [Google Scholar]
  10. Gutiérrez-Braojos, C.; Montejo-Gámez, J.; Ma, L.; Chen, B.; Muñoz de Escalona, M.; Scardamalia, M.; Bereiter, C. Exploring Collective Cognitive Responsibility through the Emergence and Flow of Forms of Engagement in a Knowledge Building Community: Smart Pedagogy for Technology Enhanced Learning. In Didactics of Smart Pedagogies; Daniela, L., Ed.; AG Springer International Publishing: Cham, Switzerland, 2019; pp. 213–232. [Google Scholar]
 11. Gutiérrez-Braojos, C.; Montejo-Gámez, J.; Marín-Jiménez, A.; Campaña, J. Hybrid learning environment: Collaborative or competitive learning? Virtual Real. 2019, 23, 411–423. [Google Scholar] [CrossRef]
  12. Lee, J.Y.; Jin, C.H. How collective intelligence fosters incremental innovation. J. Open Innov. Technol. Mark. Complex. 2019, 5, 53. [Google Scholar] [CrossRef]
  13. Häkkinen, P.; Hämäläinen, R. Shared and personal learning spaces: Challenges for pedagogical design. Internet High. Educ. 2012, 15, 231–236. [Google Scholar] [CrossRef]
  14. Wegerif, R. A dialogic understanding of the relationship between CSCL and teaching thinking skills. Int. J. Comput. Supported Collab. Learn. 2006, 1, 143–157. [Google Scholar] [CrossRef]
  15. Bereiter, C.; Scardamalia, M. Learning to work creatively with knowledge. In Powerful Learning Environments: Unravelling Basic Components and Dimensions; Corte, E.D., Verschaffel, L., Entwistle, N., Merriënboer, J.V., Eds.; Elsevier Science: Oxford, UK, 2003; pp. 73–78. [Google Scholar]
  16. Bloom, B.S.; Engelhart, M.D.; Furst, E.J.; Hill, W.H.; Krathwohl, D.R. Taxonomy of educational objectives: The classification of educational goals. In Handbook 1: Cognitive Domain; Bloom, B.S., Ed.; David McKay: New York, NY, USA, 1956. [Google Scholar]
  17. Anderson, L.W.; Krathwohl, D.R. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives; Longman: Harlow, UK, 2001. [Google Scholar]
  18. van Aalst, J. Knowledge building: Rationale, examples, design and assessment. Comput. New Zealand Sch. Learn. Teach. Technol. 2012, 24, 220–238. [Google Scholar]
  19. Scardamalia, M.; Bereiter, C. Knowledge building: Theory, pedagogy, and technology. In Cambridge Handbook of the Learning Sciences; Sawyer, K., Ed.; Cambridge University Press: New York, NY, USA, 2006; pp. 97–118. [Google Scholar]
  20. Chen, B.; Hong, H.-Y. Schools as Knowledge-Building Organizations: Thirty Years of Design Research. Educ. Psychol. 2016, 51, 266–288. [Google Scholar] [CrossRef]
  21. Ellis, G.W.; Rudnitsky, A.N.; Moriarty, M.A.; Mikic, B. Applying knowledge building in an engineering class: A pilot study. Int. J. Eng. Educ. 2011, 27, 945–957. [Google Scholar]
 22. Hong, H.Y.; Scardamalia, M. Community knowledge assessment in a knowledge building environment. Comput. Educ. 2014, 71, 279–288. [Google Scholar] [CrossRef]
  23. Lax, L.; Singh, A.; Scardamalia, M.; Librach, L. Self-assessment for knowledge building in health care. QWERTY Interdiscip. J. Technol. Cult. Educ. 2006, 2, 19–37. [Google Scholar]
  24. Scardamalia, M. CSILE/Knowledge Forum®. In Education and Technology: An Encyclopedia; ABC-CLIO: Santa Barbara, CA, USA, 2004; pp. 183–192. [Google Scholar]
  25. Daniela, L.; Rūdolfa, A. Learning platforms—How to make the right choice. In Didactics of Smart Pedagogy: Smart Pedagogy for Technology Enhanced Learning; Daniela, L., Ed.; Springer: Berlin/Heidelberg, Germany, 2019; pp. 191–212. ISBN 978-3-030-01550-3. [Google Scholar]
 26. Daniela, L.; Rūdolfa, A.; Rubene, Z. Results of the evaluation of learning platforms and digital learning materials. In Remote Learning in Times of Pandemic. Issues, Implications and Best Practice; Daniela, L., Visvizi, A., Eds.; Taylor & Francis: Abingdon, UK, 2021; pp. 196–210. ISBN 978-0-367-76570-5. [Google Scholar]
  27. Hong, H.Y.; Chen, B.; Chai, C.S. Exploring the development of college students’ epistemic views during their knowledge building activities. Comput. Educ. 2016, 98, 1–13. [Google Scholar] [CrossRef]
  28. Cacciamani, S.; Cesareni, D.; Martini, F.; Ferrini, T.; Fujita, N. Influence of participation, facilitator styles, and metacognitive reflection on knowledge building in online university courses. Comput. Educ. 2012, 58, 874–884. [Google Scholar] [CrossRef]
  29. Yucel, U.A.; Usluel, Y.K. Knowledge building and the quantity, content and quality of the interaction and participation of students in an online collaborative learning environment. Comput. Educ. 2016, 97, 31–48. [Google Scholar] [CrossRef]
  30. Hong, H.Y.; Chen, F.C.; Chai, C.; Chan, W.C. Teacher-education student’ views about knowledge building theory and practice. Instr. Sci. 2011, 39, 467–482. [Google Scholar] [CrossRef]
 31. Yang, Y.; van Aalst, J.; Chan, C.C.; Tian, W. Reflective assessment in knowledge building by students with low academic achievement. Int. J. Comput.-Supported Collab. Learn. 2016, 11, 281–311. [Google Scholar] [CrossRef]
  32. Scardamalia, M. Collective cognitive responsibility for the advancement of knowledge. Lib. Educ. A Knowl. Soc. 2002, 97, 67–98. [Google Scholar]
 33. Cacciamani, S.; Perrucci, V.; Fujita, N. Promoting Students’ Collective Cognitive Responsibility through Concurrent, Embedded and Transformative Assessment in Blended Higher Education Courses. Technol. Knowl. Learn. 2021, 26, 1169–1194. [Google Scholar] [CrossRef]
  34. Gutiérrez-Braojos, C.; Salmerón-Pérez, H. Exploring collective cognitive responsibility and its effects on students’ impact in a knowledge building community. Infanc. Aprendiz. 2015, 38, 327–367. [Google Scholar] [CrossRef]
  35. Ma, L.; Matsuzawa, Y.; Scardamalia, M. Rotating leadership and collective responsibility in a grade 4 Knowledge Building classroom. Int. J. Organ. Des. Eng. 2016, 4, 54–84. [Google Scholar] [CrossRef]
  36. Yang, Y.; van Aalst, J.; Chan, C. Examining Online Discourse Using the Knowledge Connection Analyzer Framework and Collaborative Tools in Knowledge Building. Sustainability 2021, 13, 8045. [Google Scholar] [CrossRef]
  37. Mylläri, J.; Åhlberg, M.; Dillon, P. The dynamics of an online knowledge building community. A 5-year longitudinal study. Br. J. Educ. Technol. 2010, 41, 365–387. [Google Scholar] [CrossRef]
  38. Monroy, F.; González-Geraldo, J.L. Diseño de una escala de procrastinación en español y medición de los niveles de procrastinación de estudiantes de educación. Bordón. Rev. Pedagog. 2022, 74, 63–76. [Google Scholar] [CrossRef]
39. Gutiérrez-Braojos, C.; Ma, L.; Montejo-Gámez, J.; Chen, B. That’s an impactful idea: Using peer citation to explore collective responsibility for knowledge advancement. In Knowledge Building: A Place for Everyone in a Knowledge Society, Proceedings of the 22nd Annual Knowledge Building Summer Institute; Knowledge Building International: Toronto, ON, Canada, 2018; p. 227.
40. Herman, J.; Aschbacher, P.; Winters, L. A Practical Guide to Alternative Assessment; Association for Supervision and Curriculum Development: Alexandria, VA, USA, 1992.
41. Xie, Y.; Ke, F.; Sharma, P. The effect of peer feedback for blogging on college students’ reflective learning processes. Internet High. Educ. 2008, 11, 18–25.
42. Yin, C.; Zhang, Y.; Yin, X. Collective Cognitive Responsibility in Knowledge Building Community: Theoretical Model Construction and Application Research. Paper presented at KBSI 2020. Available online: https://ikit.org/summerinstitute2020/wp-content/uploads/2021/03/147-Yin-Zhang-Yin-Collective.pdf (accessed on 1 August 2022).
43. Oshima, J.; Oshima, R.; Fujita, W. A mixed-methods approach to analyze shared epistemic agency in jigsaw instruction at multiple scales of temporality. J. Learn. Anal. 2018, 5, 10–24.
44. Diez-Gutierrez, E.; Gajardo Espinoza, K. Online assessment in higher education in times of coronavirus: What do students think? Bordon J. Pedagog. 2021, 73, 39–57.
45. Oshima, J.; Oshima, R.; Matsuzawa, Y. Knowledge Building Discourse Explorer: A social network analysis application for knowledge building discourse. Educ. Technol. Res. Dev. 2012, 60, 903–921.
46. Oshima, J.; Yamashita, S.; Oshima, R. Discourse patterns and collective cognitive responsibility in collaborative problem-solving. In Proceedings of the 15th International Conference of the Learning Sciences (ICLS 2021); de Vries, E., Hod, Y., Ahn, J., Eds.; International Society of the Learning Sciences: Bochum, Germany, 2021; pp. 517–520.
47. Gloor, P.A. Capturing team dynamics through temporal social surfaces. In Proceedings of the 9th International Conference on Information Visualisation (IV’05), London, UK, 6–8 July 2005; IEEE: London, UK, 2005; pp. 6–8.
48. Gloor, P.A. Swarm Creativity: Competitive Advantage through Collaborative Innovation Networks; Oxford University Press: Oxford, UK, 2006.
49. Yamada, M.; Tohyama, S.; Kondo, H.; Ohsaki, A. A case study of multidimensional analysis for student-staff collective cognitive responsibility in active learning classrooms. Int. J. Educ. Media Technol. 2019, 13, 115–124. Available online: https://ijemt.org/index.php/journal/article/view/193 (accessed on 1 August 2022).
50. Chen, B.; Scardamalia, M.; Bereiter, C. Advancing knowledge-building discourse through judgments of promising ideas. Int. J. Comput.-Supported Collab. Learn. 2015, 10, 345–366.
51. Lorenz, M.O. Methods of measuring the concentration of wealth. Publ. Am. Stat. Assoc. 1905, 9, 209–219.
52. Gini, C. Variabilità e Mutabilità: Contributo allo Studio delle Distribuzioni e delle Relazioni Statistiche; C. Cupini: Bologna, Italy, 1912.
53. Price, D.d.S. Little Science, Big Science… and Beyond; Columbia University Press: New York, NY, USA, 1986.
54. Lee, V.Y.; Tan, S.C.; Chee, J.K. Idea Identification and Analysis (I2A): A search for sustainable promising ideas within knowledge-building discourse. In Transforming Learning, Empowering Learners: The International Conference of the Learning Sciences (ICLS) 2016, Volume 1; Looi, C.K., Polman, J.L., Cress, U., Reimann, P., Eds.; International Society of the Learning Sciences: Singapore, 2016.
55. Biggs, J.B.; Collis, K.F. Evaluating the Quality of Learning: The SOLO Taxonomy; Academic Press: New York, NY, USA, 1982.
56. Brown, N.; Smyth, K.; Mainka, C. Looking for evidence of deep learning in constructively aligned online discussions. In Proceedings of the Fifth International Conference on Networked Learning; Banks, S., Hodgson, V., Jones, C., Kemp, B., McConnell, D., Smith, C., Eds.; Lancaster University: Lancaster, UK, 2006; pp. 315–322.
57. Holmes, K. Analysis of asynchronous online discussion using the SOLO taxonomy. Aust. J. Educ. Dev. Psychol. 2005, 5, 117–127.
58. Schrire, S. Knowledge building in asynchronous discussion groups: Going beyond quantitative analysis. Comput. Educ. 2006, 46, 49–70.
59. Gini, C. Measurement of inequality of incomes. Econ. J. 1921, 31, 124–126.
60. Lotka, A. The frequency distribution of scientific productivity. J. Wash. Acad. Sci. 1926, 16, 317–323.
61. Pareto, V. Cours d’Économie Politique; F. Rouge: Lausanne, Switzerland, 1896.
62. Tsai, P.S.; Chai, C.S.; Hong, H.Y.; Koh, H.L. Students’ conceptions of and approaches to knowledge building and its relationship to learning outcomes. Interact. Learn. Environ. 2017, 25, 749–761.
63. Zhang, J.; Scardamalia, M.; Reeve, R.; Messina, R. Designs for collective cognitive responsibility in knowledge building communities. J. Learn. Sci. 2009, 18, 7–44.
64. Strijbos, J.W.; De Laat, M.F. Developing the role concept for computer-supported collaborative learning: An explorative synthesis. Comput. Hum. Behav. 2010, 26, 495–505.
65. Scardamalia, M.; Bereiter, C. Computer support for knowledge-building communities. J. Learn. Sci. 1994, 3, 265–283.
Figure 1. Lorenz curves associated with CAC and CAE indices across the three topics.
Figure 2. Distributions of roles according to the CAC index across the different discussion topics.
Figure 3. Distributions of roles according to the CAE index across the different discussion topics.
Figure 4. Examples of contributions to the Knowledge Forum from students with different profiles.
Figure 5. Temporal evolution of the roles according to the CAC index. Note: each circle represents the community in one topic; the same sector always represents the same member across topics.
Figure 6. Temporal evolution of the roles according to the CAE index. Note: each circle represents the community in one topic; the same sector always represents the same member across topics.
Table 1. Members’ role classification.
Peer Evaluation              | Condition      | Expert Evaluation
High-impact builders         | X > 80         | Core builders
Medium-impact builders (hi)  | 80 > X > 50    | Persistent builders (hi)
Medium-impact builders (mid) | 50 > X > 20    | Persistent builders (mid)
Medium-impact builders (lo)  | 20 > X, X ≠ 0  | Persistent builders (lo)
Low-impact builders          | X = 0          | Non-persistent builders
Note: X = score of the index, and > = “greater than”. Hence, “P > X > p” indicates that the member’s score is greater than the scores of p% of the community and less than those of P%.
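The classification in Table 1 reads the threshold X as a percentile position within the community. A minimal sketch of this rule in Python (the function names and the strict-inequality handling of ties are our illustrative assumptions, not taken from the paper):

```python
def percentile_rank(scores, x):
    """Percentage of community members whose score is strictly below x."""
    return 100 * sum(1 for s in scores if s < x) / len(scores)

def classify(scores, x):
    """Map one member's index score to a peer-evaluation role (Table 1)."""
    if x == 0:
        return "Low-impact builder"
    p = percentile_rank(scores, x)
    if p > 80:
        return "High-impact builder"
    if p > 50:
        return "Medium-impact builder (hi)"
    if p > 20:
        return "Medium-impact builder (mid)"
    return "Medium-impact builder (lo)"
```

Per Table 1, the expert-evaluation labels (Core/Persistent/Non-persistent) follow the same cut-offs applied to the expert-based index.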
Table 2. Descriptive statistics associated with CAC and CAE.
      CAC                                    CAE
      1st Topic   2nd Topic   3rd Topic     1st Topic   2nd Topic   3rd Topic
      (PER)       (PrI)       (PrII)        (PER)       (PrI)       (PrII)
Mean  2.778       2.778       2.778         2.778       2.778       2.778
Me    2.36        2.44        2.54          1.92        2.31        1.32
Min   0           0           0             0           0           0
Max   11.811      10.569      7.627         13.426      12.308      10.526
Sd    2.769       2.522       2.306         3.574       2.915       3.11
CV    0.997       0.908       0.830         1.287       1.049       1.120
g1    1.521       1.094       0.467         1.176       1.277       0.931
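Assuming the row labels of Table 2 follow the usual conventions (Me = median, Sd = sample standard deviation, CV = Sd/Mean, g1 = Fisher–Pearson skewness), the two derived rows can be reproduced from the raw index scores. A sketch using only the standard library; whether the paper uses the sample or population standard deviation is our assumption:

```python
import statistics

def derived_stats(scores):
    """Coefficient of variation (Sd/Mean) and skewness g1 for a list of scores."""
    n = len(scores)
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation (n - 1 denominator)
    m2 = sum((x - mean) ** 2 for x in scores) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in scores) / n  # third central moment
    return sd / mean, m3 / m2 ** 1.5
```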
Table 3. Inequality measures associated with CAC and CAE.
      CAC                     CAE
      PER     PrI     PrII    PER     PrI     PrII
G     0.492   0.478   0.456   0.648   0.538   0.583
Ml    4.724   4.065   5.085   7.481   4.615   7.895
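G in Table 3 is the Gini coefficient over the members’ index scores, and the associated Lorenz curves (Figure 1) plot cumulative score shares against cumulative population shares. A minimal sketch of both computations, using the mean-absolute-difference form of the Gini coefficient (function names are ours; the paper’s exact estimator may differ):

```python
def gini(scores):
    """Gini coefficient: mean absolute difference between all pairs of
    scores, divided by twice the mean (0 = perfect equality)."""
    n = len(scores)
    mean = sum(scores) / n
    mad = sum(abs(a - b) for a in scores for b in scores) / (n * n)
    return mad / (2 * mean)

def lorenz(scores):
    """Lorenz curve ordinates: cumulative share of the total score held
    by the k lowest-scoring members, for k = 0..n."""
    ordered = sorted(scores)
    total = sum(ordered)
    points, cum = [0.0], 0
    for s in ordered:
        cum += s
        points.append(cum / total)
    return points
```

Perfect equidistribution of recognition gives G = 0 and a Lorenz curve on the diagonal; concentration of recognition in a few members pushes G toward 1.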
Table 4. Percentages of agreement about groups according to the CAE index and the CAC index across the discussion topics.
Topic PER
  High-impact builders (N = 3):     33.33% Core,  66.67% Persistent,  0% Non-persistent
  Medium-impact builders (N = 26):  7.69% Core,   53.85% Persistent,  38.46% Non-persistent
  Low-impact builders (N = 7):      0% Core,      0% Persistent,      100% Non-persistent
Topic PrI
  High-impact builders (N = 3):     66.67% Core,  33.33% Persistent,  0% Non-persistent
  Medium-impact builders (N = 26):  0% Core,      76.92% Persistent,  23.08% Non-persistent
  Low-impact builders (N = 7):      14.29% Core,  14.29% Persistent,  71.43% Non-persistent
Topic PrII
  High-impact builders (N = 4):     0% Core,      100% Persistent,    0% Non-persistent
  Medium-impact builders (N = 23):  4.35% Core,   82.61% Persistent,  13.04% Non-persistent
  Low-impact builders (N = 9):      0% Core,      0% Persistent,      100% Non-persistent
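Table 4 is a row-normalized cross-tabulation: within each peer-evaluation (CAC) group, the percentage of members falling into each expert-evaluation (CAE) group. A sketch of that computation; the parallel-list input format and function name are our assumptions:

```python
from collections import Counter

def agreement_table(peer_roles, expert_roles):
    """For each peer-assigned role, compute the percentage of its members
    holding each expert-assigned role. Inputs are parallel lists of labels."""
    groups = {}
    for p, e in zip(peer_roles, expert_roles):
        groups.setdefault(p, Counter())[e] += 1
    return {p: {e: 100 * c / sum(cnt.values()) for e, c in cnt.items()}
            for p, cnt in groups.items()}
```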