Erratum published on 9 March 2021, see Sustainability 2021, 13(5), 2975.
Article

A Platform for AI-Enabled Real-Time Feedback to Promote Digital Collaboration

Beth Porter and Francesca Grippa
1 Riff Analytics, Newton, MA 02459, USA
2 College of Professional Studies, Northeastern University, Boston, MA 02115, USA
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(24), 10243; https://doi.org/10.3390/su122410243
Submission received: 1 November 2020 / Revised: 4 December 2020 / Accepted: 6 December 2020 / Published: 8 December 2020
(This article belongs to the Special Issue Digital Technologies for Collaborative Knowledge Networks)

Abstract

This paper explores the effect of AI-enabled real-time feedback on group dynamics and individual behavior. While feedback interventions have been employed for decades to trigger behavioral change, the lack of instantaneous feedback and the infrastructure such interventions require have limited their widespread use. The methodology we describe offers immediate pointers to participants through the Meeting Mediator (MM), an online intervention tool that shows the conversational balance of participants and provides immediate feedback to team members with limited intermediation by the researchers. Both the experimental groups—exposed to the MM—and the control group completed two moral reasoning tasks, each of which required making a series of complex decisions as a group. Results confirmed that exposure to the MM increased self-assessed dominance: participants exposed during both tasks experienced approximately twice as large an increase over the control group as those exposed only once, indicating that the effect becomes more pronounced with repeated exposure. When participants were exposed to the MM in either the first task or the second task, their performance increased, though we found no positive impact on performance when groups were exposed to it in both tasks. Overall, this experiment demonstrates the benefits of using AI-enabled tools to promote effective collaboration and sustainable growth in corporate settings and online education environments, both of which require the development of critical thinking and self-reflection skills.

1. Introduction

In order to build more sustainable organizations, leaders need to leverage individual and collective knowledge by offering various stakeholders access to collaborative technologies that allow multiple perspectives to be included in the decision making process [1]. Additionally, to ensure a sustainable competitive advantage, leaders need to foster methods that support real-time dialogue and lead to mutual understanding on the basis of trust and transparency [2]. The method we describe in this paper allows for more transparency of the collaborative process facilitated by technologies. It helps support a broader path to sustainability based on the view that technological innovation should go beyond a simple fix to environmental and social problems. Digital transformation has impacted the way people and organizations interact with each other, promoting a more transparent exchange of information that can potentially result in real sustainable growth, where organizations incorporate multiple perspectives that will change the way businesses interact with society and the environment.
Today we live in what has been called the world’s largest experiment in working from home (WFH). Data from McKinsey [1] suggest that the productivity of remote teams has exceeded expectations, with 80% of people enjoying WFH and 41% feeling more productive than before. While some companies like Facebook are letting their employees work from home, the majority of workers report that adapting to remote work has been a challenge and has impacted their sense of community [3]. In order to better engage remote teams, leaders will have to embrace collaborative technologies that help employees become better digital collaborators. AI and other digital technologies have the potential to support more sustainable economic and social growth by creating transparent and collaborative processes. In this paper we present a methodology and an AI-powered tool that support the development of the skills required to build a more inclusive and sustainable future for all.
Promoting effective collaboration both in corporate settings and in online environments requires the development of skills such as critical thinking, self-reflection, and co-construction of knowledge. The challenge that many digital transformation initiatives have encountered is an excessive focus on the specific technology to implement, rather than undertaking the hard work of upskilling workers, broadening their mindset and helping them become better digital collaborators.
Today an increasing number of teams collaborate online and interact by leveraging multiple communication media, though adapting to remote work has been a challenge as reported by several surveys. What is still missing is the opportunity for individuals to focus on and become aware of their own contribution to the collective effort. Most of the methods used in the past are time consuming and require heavy involvement of researchers or facilitators. Our method offers an innovative solution to promote self-reflection during real collaboration opportunities, thanks to AI-generated real-time feedback indicators.
Several empirical studies on group dynamics have demonstrated that real time feedback and visualization of individual and team communication styles positively impact the effectiveness of digital collaboration [4,5]. Feedback interventions have been used for almost a century to impact performance and improve outcomes [6], though many of the experiments had inconsistent results and were biased by poor methodology, insufficient sampling, or inaccurate operationalization. Over the past two decades, feedback sessions have seen a surge of interest among industrial and social psychologists, who see them as effective methods to promote individual and organizational change [6,7]. The mediating factors most frequently reported are the processes of self-reflection and social awareness that occur when individuals are provided the opportunity to think about the impact of their actions on others [8,9].
In this paper, we describe the results of an experiment that explores the relationship between directly observed video/text chat behaviors, group outcomes, and the perception of dominant behaviors. Our goal is to offer empirical evidence to support a positive association between increasing awareness of individual communication behaviors and team performance. To measure the impact of real time interventions, we used the Meeting Mediator, an online intervention tool that shows the conversational balance of participants [10].
The structure of the paper is the following: we first provide an overview of the literature on group dynamics and the benefits of visualizing individual and team communication behavior. After discussing and providing support for the hypotheses, we describe the experiment and present the tools and functionalities of the Riff platform and the Meeting Mediator tool. We then present and discuss our results and offer insights on theoretical and managerial implications.

2. Literature Review

Traditional feedback sessions are useful processes to accelerate behavior change and improve organizational performance [6,7]. Researchers have used methods such as surveys and email-based social network analysis to help organizational members reflect on their individual or team communication behavior. A risk of traditional network interventions is that some participants may not feel comfortable having their position in the network visible to others, for fear of being labeled by leadership as “too peripheral” or “too central”, or of being seen as trying to improve their network position to gain political power [11].
Recently, social network interventions have leveraged the power of network data to improve learning, increase efficiency in organizational processes, and trigger organizational change [9]. As Cross and colleagues noted after conducting several social network interventions, simply asking participants to spend a few minutes—individually or in teams—identifying what they observe in a social network map has positive implications for team performance [12] (p. 11).
While these experiments demonstrate how crucial it is to provide an opportunity for self-reflection in order to trigger individual and organizational change, they often do not provide participants with live feedback that could immediately start a process of self-awareness. In many cases, the change triggered by the intervention occurs days or weeks after the observations [13]. The method we use in this paper overcomes this limitation by offering real time feedback, providing an immediate “mirroring” opportunity for learning and improvement.
Other methods, such as speaker diarization approaches, are less attractive, as they require a significant infrastructure investment in multiple microphones as well as specific training with voice samples. The method we use in this study emphasizes the individual nature of behavior change, building on the assumption that real-time reflection on your own communication can trigger individual change and influence how you interact with others and how dominant or collaborative you are. The results are visible only to participants of the meeting session, and are not immediately visible to supervisors or other stakeholders. De Montjoye et al. [14] found that only the strongest ties in both internal and external networks have a direct effect on performance when team members are involved in complex decisions. By visualizing these ties, individuals and groups become aware of these dynamics and develop the level of trust and mutual understanding that has been associated with increased team performance.
Several studies in the field of computational social science have explored the impact of “mirroring” and self-reflection on individual and group behaviors [5,15]. Using wearable electronic sensors called sociometric badges, Pentland and his team at MIT were able to measure how people communicate in real time and to computationally describe the characteristics of the most effective teams [4,14]. They found that great teams share some key coordination mechanisms: they communicate frequently; they have members who talk and listen in equal measure; they engage in frequent informal communication; and they explore ideas and information outside the group [16,17]. By observing human conversations in both co-located and remote contexts, Pentland and team found that measuring the energy, timing, and variability of human interactions offers insights on unconsciously processed indicators they refer to as “honest signals”, which offer a reliable signal of our behavioral tendencies [18] (p. 4). Some of these signals include influence, mimicry, activity, and consistency. Influence is measured by the extent to which one person causes the other person’s pattern of speaking to match their own pattern. Mimicry is a spontaneous copying of one person by another during a conversation, leading to an unconscious back and forth of smiles, head nodding, as well as interruptions. The activity signal is measured by observing increased movements that derive from higher interest or excitement during an interaction. Finally, consistency is measured by observing even or irregular movements, which may indicate the presence of different thoughts and emotions in the speaker. Higher consistency is a signal of mental focus, while greater variability may indicate openness to influence from others.
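To make these measurement ideas concrete, the sketch below computes simple proxies for two of these signals—activity (share of speaking time) and consistency (variability of speech segment lengths)—from a list of timestamped speech segments. This is an illustrative approximation only, not the sociometric-badge implementation described by Pentland and colleagues; the segment format and the formulas are assumptions made for the example.

```python
from collections import defaultdict
from statistics import pstdev

# Each segment is (speaker_id, start_seconds, end_seconds); the format is assumed for illustration.
segments = [
    ("alice", 0.0, 6.5), ("bob", 6.5, 8.0), ("alice", 8.0, 15.0),
    ("carol", 15.0, 21.0), ("bob", 21.0, 22.5), ("alice", 22.5, 30.0),
]

# Collect speaking-segment durations per speaker.
durations = defaultdict(list)
for speaker, start, end in segments:
    durations[speaker].append(end - start)

total_time = sum(end - start for _, start, end in segments)

for speaker, segs in durations.items():
    activity = sum(segs) / total_time         # proxy for the "activity" signal: share of speaking time
    consistency = 1.0 / (1.0 + pstdev(segs))  # lower variability of segment length is read as higher "consistency"
    print(f"{speaker}: activity={activity:.2f}, consistency={consistency:.2f}")
```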

3. Hypotheses Development

Our first hypothesis relies on the assumption that providing individuals and groups with the opportunity to self-reflect on their communication behavior can generate a change in their communication style and improve group outcomes. This is generally supported by the well-known Hawthorne (or observer) effect [19], which suggests that environmental factors, such as having an audience, being observed or recorded, or observing yourself in a mirror, have the potential to encourage self-awareness and improve performance. As Wicklund and Duval [20] demonstrated in a multiexperiment study, overall performance improved for the groups that were induced to self-reflect by using a mirror during the study. A recent longitudinal study at a large global service company used email-based social network analysis and a method called virtual mirroring to assess the impact of self-reflection on customer satisfaction [5]. The authors demonstrated that allowing employees to learn about their own online communication behavior was associated with increased performance. More recently, Porter and Bozkaya [21] explored the effect of introducing live feedback on learner performance in the context of an online course, looking at student persistence in the program and election to complete supplemental readings and assignments. The findings from their experiments show statistically significant positive correlations between live interactions and all performance measures studied. Gesell, Barkin, and Valente [13] conducted a social network study in which group leaders received a network map and specific data-driven recommendations on how to increase group connectivity. After four sessions, the group leaders were instructed not to alter their teaching methods. These empirically guided program activities resulted in increased group cohesion.
In addition to a positive impact of self-reflection on performance, we argue that repeated feedback opportunities generate a positive impact on collective outcomes. Recent studies in the field of cognitive neuroscience applied to behavioral change have demonstrated that as individuals participate in new activities, they train their brains to create new neural pathways, and these pathways become stronger with repetition until the behavior becomes the new normal [22,23,24,25]. Self-awareness might not be sufficient to trigger the desired change when an individual lacks fundamental skills [26]. Therefore, it is important to encourage readiness to change through repeated exposure to the tools that can lead to change. These studies confirm that behavioral change strongly depends on how often individuals perform an action, with positive feedback reinforcing the desired action. Based on the empirical evidence discussed above, we hypothesize that:
Hypothesis (H1).
Repeated opportunities for real time self-reflection increase group performance.
Our second hypothesis is based on the assumption that the opportunity to self-reflect on your own communication style increases the perception of specific relational behaviors. In particular, we are interested in exploring how participants perceive the protagonistic attitude during virtual meetings, which we operationalize as dominance (i.e., “someone is talking a lot and others are allowing this to happen”). The concept of dominance often has a negative connotation, as it is equated with physical, psychological, or verbal aggressiveness, and it is frequently used interchangeably with the constructs of power and status. In our study we do not equate dominance with a class of destructive, attacking behaviors that may include verbal abuse or physical violence [27]. Our construct of dominance is aligned with the Burgoon and Hale classification [28], which positions dominance-submission (also labeled as power and relational control) as one of two dimensions among 12 fundamental categories of relational communication used by individuals to understand their interpersonal relationships. The construct of dominance that we use in this study is based on an interpersonal perspective and is defined as a “relational, behavioral, and interactional state that reflects the actual achievement of influence or control over another via communicative actions” [28] (p. 315). In the context of communication, dominance is not considered a personality trait, but rather a dynamic state in which a combination of individual traits and situational factors play together, demanding or encouraging dominant behavior [29]. The asymmetry of power creates a situation in which one individual has more freedom of expression, while the other is left anticipating the desires of the more dominant one. This interpersonal dominance can be observed through visual and verbal cues. For example, using a first version of the Meeting Mediator and sociometric badges, Kim and colleagues [30] observed how dominant people’s behavior significantly differs from that of nondominant people. The authors were able to identify 76% of the participants who perceived themselves as “dominant” based on their distinctive speaking time, average speech segment length, and speech energy. A meta-analysis of studies drawn from five databases and focused on the association of dominance, power, speaking time, and leadership found that speaking time is a strong, reliable indicator of dominant behavior [31]. As illustrated in two other meta-analyses, the amount of time an individual talks is related to leadership and emergent power [31,32,33]. Based on this evidence, we propose the following hypothesis:
Hypothesis (H2).
Repeated opportunities for real time self-reflection increase perception of dominance during virtual meetings.
Figure 1 illustrates the conceptual framework with related variables and hypotheses. As explained by the theory of self-perception [34,35], individuals become aware of their own attitudes and emotions through the observations of their own behavior and by assessing the circumstances in which this behavior occurs. In order to isolate the role of self-observation, we measured performance and dominance both objectively and via self-assessment.

4. Research Design and Data Collection

To measure group interaction and promote self-awareness we used the Riff Platform, an AI-enabled video and text chat platform originally developed at the MIT Media Lab. The platform consists of multiple user-facing applications and supporting backend tools, three of which were used for this experiment: Video Chat and Meeting Mediator; Meeting Metrics; and Diagnostic Surveys. The Meeting Mediator offers participants real-time feedback about specific indicators of their interaction, via continuous updates lagging 1–2 s behind the live conversation (see Figure 2). The Meeting Mediator detects important social signals that occur during a conversation based on speech patterns derived from vocal data.
The metrics used to assess participants’ interaction were engagement, influence, and dominance. Engagement was indicated by the shade of purple of the node in the middle of the visualization, which reflected the total number of turns taken in a running 5-min interval throughout the chat: a dark purple central node (see Figure 2a) meant many turns had been taken, while light purple meant fewer turns. Influence was represented by the location of the purple node, which moved toward the person who had taken the largest number of turns in the running 5-min interval. Dominance represented the degree to which a person was allowed to become the “protagonist of the conversation” while the other participants let this happen, and was visually represented by the thickness of the grey “spokes” running from the central purple node out to each of the participant nodes. Figure 2b shows the metrics of the Meeting Mediator visualized back to each participant.
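As a rough illustration of how such indicators can be derived from turn-taking data, the sketch below counts speaking turns in a running five-minute window and converts them into the three displayed quantities: an engagement count, an influence weight per participant (used to position the central node), and a speaking-time share per participant (used to scale the dominance “spokes”). The data structures and formulas here are assumptions made for illustration; they are not the Riff platform’s actual code.

```python
from dataclasses import dataclass
from typing import List, Dict

WINDOW_SECONDS = 5 * 60  # running 5-min interval, as described in the text

@dataclass
class Turn:
    speaker: str
    start: float  # seconds since the chat began
    end: float

def mediator_indicators(turns: List[Turn], now: float) -> Dict[str, Dict[str, float]]:
    """Compute illustrative engagement, influence, and dominance values
    from the turns that fall inside the last five minutes."""
    recent = [t for t in turns if t.end >= now - WINDOW_SECONDS]
    engagement = len(recent)  # total number of turns -> shade of the central node

    turn_counts: Dict[str, int] = {}
    talk_time: Dict[str, float] = {}
    for t in recent:
        turn_counts[t.speaker] = turn_counts.get(t.speaker, 0) + 1
        talk_time[t.speaker] = talk_time.get(t.speaker, 0.0) + (t.end - t.start)

    total_turns = sum(turn_counts.values()) or 1
    total_time = sum(talk_time.values()) or 1.0

    return {
        "engagement": {"turns": float(engagement)},
        # Influence weight: the central node drifts toward the participant
        # with the largest share of turns.
        "influence": {s: c / total_turns for s, c in turn_counts.items()},
        # Dominance spoke thickness: share of total speaking time.
        "dominance": {s: d / total_time for s, d in talk_time.items()},
    }
```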
The platform provides feedback to participants involved in online tasks and generates an opportunity for immediate self-reflection—a sort of mirroring—that may lead participants to change their behaviors in the direction of increased collaboration and productivity.
We used a randomized controlled trial design to test the hypotheses about the effects of the Meeting Mediator. We recruited 160 volunteers and organized them in groups of four. Participants might have known each other prior to the experiment, though this was not required. Volunteers were primarily international students on US college campuses or individuals who were born and raised in emerging economies. The sample was balanced in terms of gender, age, race/ethnicity, primary language, and education. Participants received compensation slightly above the average going rate, which helped ensure the recruitment of subjects willing to all appear at the same time.
Once participants were invited to a Riff video chat, the software assigned the group to one of four possible conditions: A, B, C, or D. These conditions determined whether the Meeting Mediator was present for all members of the group during the first task only, the second task only, both tasks, or neither task.
To assess group performance, we asked participants to complete two moral reasoning tasks [36] and used rubrics to assess the outcomes. We called these tasks “Task Red” and “Task Blue”; Task Red was always the first task that groups encountered. Conditions were assigned sequentially: the first group to be assembled was assigned to condition A, the second to condition B, the third to condition C, the fourth to condition D, and the fifth to condition A again.
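For clarity, the following sketch shows one way to implement this sequential (round-robin) assignment of assembled groups to the four conditions, where each condition specifies in which of the two tasks the Meeting Mediator is shown. The condition-to-exposure mapping used here (A = neither task, B = Task Red only, C = Task Blue only, D = both tasks) is an assumption for illustration; the paper does not spell out which letter corresponds to which exposure pattern.

```python
from itertools import cycle

# Assumed mapping of condition label -> tasks in which the Meeting Mediator is shown.
# The paper describes four conditions but does not state which letter is which,
# so this mapping is illustrative.
CONDITIONS = {
    "A": {"task_red": False, "task_blue": False},
    "B": {"task_red": True,  "task_blue": False},
    "C": {"task_red": False, "task_blue": True},
    "D": {"task_red": True,  "task_blue": True},
}

def assign_conditions(groups):
    """Assign conditions sequentially: the first assembled group gets A,
    the second B, the third C, the fourth D, the fifth A again, and so on."""
    labels = cycle(CONDITIONS)  # iterates A, B, C, D, A, B, ...
    return [(group, label, CONDITIONS[label]) for group, label in zip(groups, labels)]

# Example: 160 participants in groups of four -> 40 groups.
groups = [f"group_{i:02d}" for i in range(1, 41)]
for group, label, exposure in assign_conditions(groups)[:5]:
    print(group, label, exposure)
```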
When entering the video chat, participants would see the usual Riff video chat view with the faces of their teammates, their own face in the top left, and the Meeting Mediator (if present) in the bottom left. In the bottom right, a scrollable box displayed the text that participants needed to read to complete their task; different passages were displayed for Task Red and Task Blue. The scrollable box displayed the text as a bitmap image, to prevent users from selecting and copying it easily. Participants could type words in an editable text box on the bottom left side of the screen.
Prior to using the platform and working together on these tasks, participants completed a self-assessment of conversational dominance based on the second factor of behavioral dominance [28]. After each video chat session, and in between the two tasks, participants completed a short survey, which included questions on social presence from the Networked Minds Social Presence Inventory [37] and a self-assessment of how dominant they felt during each conversation. In order to assess each participant’s socio-emotional acuity, we also included in the survey a modified version of the Reading the Mind in the Eyes test [38], in which participants were asked to read the emotions of others just by looking at their eyes; we excluded its “About you” section, which covered questions on gender, age, level of education, and other sociodemographic details already covered in our survey. The questions we included aimed at assessing “how dominant they felt during the meeting”, “how present and engaged they felt”, “how well they performed during the meeting”, and “how much they enjoyed working with their group”.

5. Results

Our findings demonstrate that exposure to the Meeting Mediator caused participants to feel more dominant during meetings (Figure 3). Remarkably, there was a stronger effect for repeated exposure: participants who were exposed to the Meeting Mediator during both tasks experienced approximately twice as large an increase in self-assessed dominance over the control group (which was never exposed) as those who were exposed only once, and this difference was statistically significant (p = 0.02, two-sample t-test, n = 160). Error bars show a 95% confidence interval, with a statistically significant difference between the control group and the group exposed twice.
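The comparison reported here can be reproduced with a standard two-sample t-test on the change in self-assessed dominance for the treatment and control groups. The sketch below uses SciPy with made-up placeholder data; the variable names and values are illustrative, not the study’s data.

```python
import numpy as np
from scipy import stats

# Illustrative placeholder data: change in self-assessed dominance (post - pre)
# for participants exposed to the Meeting Mediator in both tasks vs. never exposed.
rng = np.random.default_rng(0)
exposed_twice = rng.normal(loc=0.8, scale=0.6, size=40)
control = rng.normal(loc=0.4, scale=0.6, size=40)

t_stat, p_value = stats.ttest_ind(exposed_twice, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```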
Participants also reported a significant change in how they perceived the dominant behaviors of others when exposed to the Meeting Mediator (Figure 4). Being exposed to the Meeting Mediator twice resulted in a significant 15% increase in the perception that the dominant behaviors of others had a positive effect on the group’s conversation (p = 0.004, two-sample t-test, n = 160), versus an 11% average improvement for a single exposure. Hence, our results support our second hypothesis, as repeated opportunities for real-time self-reflection increase the perception of dominance. Self-reflection via the Meeting Mediator produced a 9.4% improvement in how participants perceived the dominant behaviors of others. Error bars show a 95% confidence interval, with a statistically significant difference between all treatment groups and the control group.
To explore the effect of exposure to real-time feedback, we differentiated between the performance of groups exposed only once to the Meeting Mediator (during either the first or the second task), groups that had no exposure to the Meeting Mediator, and groups that were exposed to the Meeting Mediator in both tasks. When the Meeting Mediator was shown in only one of the two tasks, there was a significant positive effect on performance. Because of our small sample, we could not conduct a regression analysis; instead, we calculated the delta between the scores assigned to the two completed tasks.
Figure 5 illustrates the variance in group performance on both tasks. When participants were not exposed to the Meeting Mediator in either task, 50% of the teams performed a little better and the other 50% performed a little worse. This condition simulates how people perform in a traditional, non-mediated situation, which could explain the split results in terms of performance. When participants were exposed to the Meeting Mediator only during the first task, they were given a real-world opportunity to practice how to manage a live conversation and how to control their communication style. The delta in performance for groups exposed to the Meeting Mediator only in the second task was also positive, with only one team performing marginally worse. In the two mixed treatments (Meeting Mediator in either Task 1 or Task 2), 15 out of the 18 groups performed either substantially better or a little better, which supports our assumption that the Meeting Mediator might have a positive impact on performance. At the same time, continued exposure to the Meeting Mediator did not impact performance, leading to only partial support for our first hypothesis.
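A simple way to summarize these results is to compute, for each group, the difference between the rubric scores of the two tasks and tally how many groups improved or declined within each condition. The sketch below illustrates this bookkeeping with hypothetical scores; the condition labels, scores, and thresholds are assumptions for the example, not the study’s data.

```python
from collections import Counter

# Hypothetical per-group rubric scores: (condition, task_red_score, task_blue_score).
records = [
    ("no_mm", 6.0, 6.5), ("no_mm", 7.0, 6.0),
    ("mm_task_red", 5.5, 7.0), ("mm_task_blue", 6.0, 7.5),
    ("mm_both", 6.5, 6.5), ("mm_both", 7.0, 6.8),
]

def label(delta: float) -> str:
    """Bucket the change in score between the two tasks (thresholds are illustrative)."""
    if delta > 1.0:
        return "substantially better"
    if delta > 0.0:
        return "a little better"
    if delta == 0.0:
        return "unchanged"
    return "worse"

summary: dict[str, Counter] = {}
for condition, red, blue in records:
    summary.setdefault(condition, Counter())[label(blue - red)] += 1

for condition, counts in summary.items():
    print(condition, dict(counts))
```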
Several possible confounding variables increase the complexity of measuring the impact of online collaboration on group performance. These include demographic factors, dominance, social presence, the increased cohesion that makes group members work well together, the individual intelligences of group members (i.e., collective intelligence), and the motivations of individual group members. To account for this, we created a comprehensive causal model for the variables of interest, though the results were not sufficient to establish causal relationships or isolate the effects of the Meeting Mediator.

6. Discussion and Future Directions

In this experiment we looked at the impact of real time, AI-enabled feedback on both perception of dominant behaviors and group outcomes in virtual settings. While other studies found a strong correlation between group performance and live feedback [5,21], our results show that the Meeting Mediator might have an impact on performance only under specific circumstances. When teams are provided with real time feedback during one of the tasks, their performance might be positively impacted by immediate personal awareness. In agreement with past studies on the effect of the Meeting Mediator for co-located groups [4], we found no change in the objective performance of groups exposed several times to the Meeting Mediator. We also found no change in subjective self-assessment of the group performance on the task, and no change in their perceived enjoyment of working with their groups or using the software. At the same time, a single exposure to the Meeting Mediator increased self-reported feelings of dominance by a small amount, whereas two exposures increased it by roughly twice that amount. This result confirms findings from previous experiments that used the Meeting Mediator for in-person groups and found a similar correlation [5].
There are several possible explanations for these findings. A first reason might be connected to the combined effect of dominance and the stage of development of these groups, whose members had never worked together on such a complex task and were somewhere between the forming and storming stages of Tuckman’s model [39]. As demonstrated by Kim and colleagues [9], groups that are in a brainstorming phase with clearly dominant members tend to generate fewer ideas and feel less productive.
As we were constrained to a small sample size for logistical reasons, we had to settle for less conclusive results. Nevertheless, the experiment described in this paper demonstrates the benefits of using AI-enabled feedback sessions to provide personalized feedback and improve awareness of communication patterns in an online environment. The Meeting Mediator is an example of the power of interpreting human interaction to improve the human condition [11]. By offering immediate feedback, these AI-powered digital tools can trigger behavioral change and give participants pointers that help them build and develop the soft skills required to succeed in a new digital environment, with the potential to transform the education industry through metrics and personalized feedback. Through the use of platforms like Riff, participants receive personalized insights about their collaborative behaviors and patterns, which help them become effective leaders and collaborative team members.
The results of our controlled trial suggest a positive overall effect from using the Meeting Mediator in online courses. Platforms like Riff and the use of immediate virtual feedback offer higher education institutions and professional development organizations an easy way to use mentoring systems to help learners and working professionals improve remote team collaboration. The methodology we present in this paper demonstrates how platforms based on computational social science can help assess the impact of professional education initiatives and workforce training, especially when they involve team activities and peer learning.
The self-reflection and individual growth opportunities offered by this methodology can greatly benefit online educational environments, which are by nature complex and offer a challenging setting for causal impact studies. As more and more institutions offer virtual teamwork and online learning as components of their curricula, it is important to provide learners with immediate, live feedback that can help them improve and not fall behind given the lack of face-to-face interaction. As more and more higher education institutions leverage new digital technologies and build opportunities for a new digital academic entrepreneurship [39,40,41,42], it will be important to design real-time opportunities for self-awareness and sense-making.
The first limitation of this study is the sample size, as the experiment involved 160 participants recruited among international students and individuals who were born outside of the United States. It will be important to replicate the study with a larger sample and design the trial over a longer period of time, to assess the effect of prolonged exposure to the platform and identify possible associations among the variables we collected, including perceived dominance, group performance (both perceived and externally assessed), collective intelligence, social presence, and demographic data.
Due to deployment conditions, we did not explore the impact of various approaches to group formation or of matching groups to Teaching Assistants or other course mentors. In future studies it would also be important to focus on the impact of sociodemographic characteristics on dominance and its interplay with group outcomes, starting with gender and cultural/national differences [31,40].
Given the importance of interruptions in the dynamics of small group interactions, which are often associated with gender, group composition, and hierarchy [41], future research should include the design and testing of a real-time intervention that shows a person’s interruption rate and its effect on performance, efficiency, perception, cohesion, or other team efficacy measures. Future work should also include a comprehensive analysis of the Riff metrics that can be used to predict group performance, including dominance and bias, engagement, interruption, and interjection/affirmation, with the goal of introducing new real-time interventions in the video meeting context. To establish the effectiveness of this method within learning contexts, an important next step would be to complete a randomized controlled longitudinal trial, in which students are randomly assigned either to complete online classes with Riff Chat or to complete the same classes using a control tool like Open edX, Canvas, or Moodle, without Riff’s focus on team-based learning.
A methodology such as the one presented in this paper, based on real-time feedback and virtual engagement, is beneficial in several other contexts to prepare working professionals for real-life situations. With the help of feedback tools, professionals can be trained to recognize and overcome the negative effects of dominance and to understand the impact of their communication behavior on task performance [43]. Digital platforms like Riff and the Meeting Mediator offer the right combination of user-facing components and provide relevant and prompt feedback to participants, with the potential to enhance the learning experience and positively impact digital collaboration. Both in the educational environment and in the professional world, these platforms can support entrepreneurial learning environments by stimulating peer learning and self-reflection on the effect of digital collaboration on group work [44]. As global institutions strive to pursue the United Nations Sustainable Development Goals (SDGs), access to remote learning will become crucial to ensure inclusive and equitable education and promote lifelong learning opportunities for all. This is aligned in particular with SDG 4 and SDG 8, which focus on reducing the digital gap and ensuring lifelong learning and safe work opportunities for everyone. Tools like the Meeting Mediator will help online learners improve self-reflection and engagement as their communication with peers and instructors becomes increasingly mediated by technology. Additionally, as organizations adopt flexible work arrangements such as working from home (WFH) and working from anywhere (WFA), leaders will need to design engagement opportunities for remote teams to improve digital collaboration, increase productivity, and promote employee motivation. Tools such as the Meeting Mediator will be essential in supporting the co-creation of knowledge and the development of the critical thinking and self-reflection necessary to promote full and productive employment and decent work for all (UN SDG 8).

Author Contributions

Conceptualization, methodology, formal analysis, and investigation, B.P.; writing—original draft preparation, review and editing, F.G.; funding acquisition, B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation, award number #18433991.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Boland, B.; De Smet, A.; Palter, R.; Sanghvi, A. Reimagining the Office and Work Life After COVID-19; Pivot Interiors, Inc.: Santa Clara, CA, USA, 2020; Available online: https://www.mckinsey.com/business-functions/organization/our-insights/reimagining-the-office-and-work-life-after-covid-19 (accessed on 10 November 2020).
  2. Sarkis, J.; Cordeiro, J.J.; Brust, D.A.V. Facilitating sustainable innovation through collaboration. In Facilitating Sustainable Innovation through Collaboration; Springer: Dordrecht, The Netherlands, 2010; pp. 1–16. [Google Scholar]
  3. Bartik, A.W.; Cullen, Z.B.; Glaeser, E.L.; Luca, M.; Stanton, C.T. What jobs are being done at home during the COVID-19 crisis? Evidence from firm-level surveys (No. w27422). Natl. Bur. Econ. Res. 2020. [Google Scholar] [CrossRef]
  4. Kim, T.J.; Chang, A.; Holland, L.; Pentland, A. Meeting mediator: Enhancing group collaboration with sociometric feedback. Conf. Hum. Factors Comput. Syst. Proc. 2008. [Google Scholar] [CrossRef]
  5. Gloor, P.; Fronzetti Colladon, A.; Giacomelli, G.; Saran, T.; Grippa, F. The impact of virtual mirroring on customer satisfaction. J. Bus. Res. 2017, 75, 67–76. [Google Scholar] [CrossRef]
  6. Kluger, A.N.; DeNisi, A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 1996. [Google Scholar] [CrossRef]
  7. Hattie, J.; Timperley, H. The power of feedback. Rev. Educ. Res. 2007. [Google Scholar] [CrossRef]
  8. Di Micco, J.M.; Hollenbach, K.J.; Bender, W. Using visualizations to review a group’s interaction dynamics. Conf. Hum. Factors Comput. Syst. Proc. 2006. [Google Scholar] [CrossRef]
  9. Kim, T.; Hinds, P.; Pentland, A. Awareness as an antidote to distance: Making distributed groups cooperative and consistent. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, New York, NY, USA, 11–15 February 2012. [Google Scholar] [CrossRef]
  10. Lederman, O.; Mohan, A.; Calacci, D.; Pentland, A.S. Rhythm: A Unified Measurement Platform for Human Organizations. IEEE Multimed. 2018. [Google Scholar] [CrossRef]
  11. Valente, T.W. Network interventions. Science 2012. [Google Scholar] [CrossRef]
  12. Cross, R.; Borgatti, S.P.; Parker, A. Making Invisible Work Visible: Using social network analysis to support strategic collaboration. Calif. Manag. Rev. 2002, 44, 25–46. [Google Scholar] [CrossRef]
  13. Gesell, S.B.; Barkin, S.L.; Valente, T.W. Social network diagnostics: A tool for monitoring group interventions. Implement. Sci. 2013. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. De Montjoye, Y.A.; Stopczynski, A.; Shmueli, E.; Pentland, A.; Lehmann, S. The strength of the strongest ties in collaborative problem solving. Sci. Rep. 2014. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Lazer, D.M.; Pentland, A.; Watts, D.J.; Aral, S.; Athey, S.; Contractor, N.; Freelon, D.; Gonzalez-Bailon, S.; King, G.; Margetts, H.; et al. Computational social science: Obstacles and opportunities. Science 2020, 369, 1060–1062. [Google Scholar] [CrossRef]
  16. Pentland, A.S. Social Physics: How Good Ideas Spread-the Lessons from a New Science: Analytics for Success; Penguin: New York, NY, USA, 2014. [Google Scholar]
  17. Pentland, A. Honest Signals: How They Shape Our World. J. Artif. Soc. Soc. Simul. 2009. [Google Scholar] [CrossRef]
  18. Gillespie, R. Manufacturing Knowledge: A History of the Hawthorne Experiments; Cambridge University Press: New York, NY, USA, 1993. [Google Scholar]
  19. Wicklund, R.A.; Duval, S. Opinion change and performance facilitation as a result of objective self-awareness. J. Exp. Soc. Psychol. 1971, 7, 319–342. [Google Scholar] [CrossRef]
  20. Porter, B.; Bozkaya, B. Assessing the Effectiveness of Using Live Interactions and Feedback to Increase Engagement in Online Learning. arXiv 2020, arXiv:2008.08241. [Google Scholar]
  21. Astle, D.E.; Scerif, G. Using developmental cognitive neuroscience to study behavioral and attentional control. Dev. Psychobiol. 2009. [Google Scholar] [CrossRef]
  22. Berkman, E.T. The neuroscience of goals and behavior change. Consult. Psychol. J. 2018. [Google Scholar] [CrossRef]
  23. Lally, P.; Gardner, B. Promoting habit formation. Health Psychol. Rev. 2013. [Google Scholar] [CrossRef]
  24. Munakata, Y.; Casey, B.J.; Diamond, A. Developmental cognitive neuroscience: Progress and potential. Trends Cogn. Sci. 2004. [Google Scholar] [CrossRef] [Green Version]
  25. Zimmerman, B.J. Becoming a self-regulated learner: An overview. Theory Pract. 2002, 41, 64–70. [Google Scholar] [CrossRef]
  26. Dennis, A.R. Information exchange and use in small group decision making. Small Group Res. 1996. [Google Scholar] [CrossRef]
  27. Burgoon, J.K.; Johnson, M.L.; Koch, P.T. The nature and measurement of interpersonal dominance. Commun. Monogr. 1998. [Google Scholar] [CrossRef]
  28. Dunbar, N.E.; Burgoon, J.K. Perceptions of power and interactional dominance in interpersonal relationships. J. Soc. Pers. Relatsh. 2005. [Google Scholar] [CrossRef]
  29. Kim, T.; Chang, A.; Holland, L.; Pentland, A.S. Meeting mediator: Enhancing group collaboration using sociometric feedback. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, San Diego, CA, USA, 8–12 November 2008. [Google Scholar] [CrossRef]
  30. Schmid Mast, M. Dominance as Expressed and Inferred Through Speaking Time: A Meta-Analysis. Hum. Commun. Res. 2002. [Google Scholar] [CrossRef]
  31. Mullen, B.; Salas, E.; Driskell, J.E. Salience, motivation, and artifact as contributions to the relation between participation rate and leadership. J. Exp. Soc. Psychol. 1989. [Google Scholar] [CrossRef]
  32. Stein, R.T.; Heller, T. An empirical analysis of the correlations between leadership status and participation rates reported in the literature. J. Personal. Soc. Psychol. 1979. [Google Scholar] [CrossRef]
  33. Bem, D.J. Self-perception theory. Adv. Exp. Soc. Psychol. 1972, 6, 1–62. [Google Scholar]
  34. Bandura, A. Self-efficacy mechanism in human agency. Am. Psychol. 1982, 37, 122. [Google Scholar] [CrossRef]
  35. Dukerich, J.M.; Nichols, M.L.; Elm, D.R. Moral Reasoning in Groups; Strategic Management Research Center, University of Minnesota: Minneapolis, MN, USA, 1987. [Google Scholar]
  36. Biocca, F.; Harms, C.; Gregg, J. The networked minds measure of social presence: Pilot test of the factor structure and concurrent validity. In Proceedings of the 4th Annual International Workshop on Presence, Philadelphia, PA, USA, 21 May 2001. [Google Scholar]
  37. Khorashad, B.S.; Baron-Cohen, S.; Roshan, G.M.; Kazemian, M.; Khazai, L.; Aghili, Z.; Talaei, A.; Afkhamizadeh, M. The “Reading the Mind in the Eyes” test: Investigation of psychometric properties and test–retest reliability of the persian version. J. Autism Dev. Disord. 2015, 45, 2651–2666. [Google Scholar] [CrossRef]
  38. Dukerich, J.M.; Nichols, M.L.; Elm, D.R.; Vollrath, D.A. Moral Reasoning in Groups: Leaders Make a Difference. Hum. Relat. 1990. [Google Scholar] [CrossRef]
  39. Tuckman, B.W.; Jensen, M.A.C. Stages of small-group development revisited. Group Organ. Stud. 1977, 2, 419–427. [Google Scholar] [CrossRef]
  40. Ridgeway, C.L.; Ellyson, S.L.; Dovidio, J.F.; Brown, C.E. The Look of Power: Gender Differences and Similarities in Visual Dominance Behavior. Gender Interact. Inequal. 1992. [Google Scholar] [CrossRef]
  41. Smith-Lovin, L.; Brody, C. Interruptions in group discussions: The effects of gender and group composition. Am. Sociol. Rev. 1989, 54, 424–435. [Google Scholar] [CrossRef]
  42. Rippa, P.; Secundo, G. Digital academic entrepreneurship: The potential of digital technologies on academic entrepreneurship. Technol. Forecast. Soc. Chang. 2019, 146, 900–911. [Google Scholar] [CrossRef]
  43. Cohen, I.; Brinkman, W.P.; Neerincx, M.A. Effects of different real-time feedback types on human performance in high-demanding work conditions. Int. J. Hum. Comput. Stud. 2016, 91, 1–12. [Google Scholar] [CrossRef] [Green Version]
  44. Passaro, R.; Quinto, I.; Thomas, A. Start-up competitions as learning environment to foster the entrepreneurial process. Int. J. Entrep. Behav. Res. 2017. [Google Scholar] [CrossRef]
Figure 1. Conceptual framework and hypotheses.
Figure 2. Riff online meeting; (a) illustrates the live conversation during the Meeting Mediator and (b) illustrates two key metrics of a Riff Video chat (Speaking Time and Influences) visualized back to participants on the Metrics Dashboard.
Figure 3. Mean self-reported conversational dominance as a function of exposure to the Meeting Mediator.
Figure 4. Mean self-reported score for the positive impact of dominant personalities on a meeting as a function of exposure to the Meeting Mediator.
Figure 5. Variance of group performance during the four conditions, from absence of the Meeting Mediator to its presence during both tasks. The top two graphs show the group performance without M.M. (left) and with the M.M. present in both tasks (right). The two graphs on the bottom show the group performance with the M.M. present only during Task 1 (left) or Task 2 (right).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
