Article

Perceptions on Authenticity in Chat Bots

Department of Management, Communication & IT, MCI Management Center Innsbruck, Innsbruck 6020, Austria
*
Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2018, 2(3), 60; https://doi.org/10.3390/mti2030060
Submission received: 19 June 2018 / Revised: 6 September 2018 / Accepted: 12 September 2018 / Published: 17 September 2018
(This article belongs to the Special Issue Intelligent Virtual Agents)

Abstract

In 1950, Alan Turing proposed his concept of universal machines, emphasizing their abilities to learn, think, and behave in a human-like manner. Today, the existence of intelligent agents imitating human characteristics is more relevant than ever. They have expanded to numerous aspects of daily life. Yet, while they are often seen as work simplifiers, their interactions usually lack social competence. In particular, they miss what one may call authenticity. In the study presented in this paper, we explore how characteristics of social intelligence may enhance future agent implementations. Interviews and an open-question survey with experts from different fields have led to a shared understanding of what it would take to make intelligent virtual agents, in particular messaging agents (i.e., chat bots), more authentic. Results suggest that showcasing a transparent purpose, learning from experience, anthropomorphizing, human-like conversational behavior, and coherence are guiding characteristics for agent authenticity and should consequently allow for and support a better coexistence of artificial intelligence technology with its respective users.

1. Introduction

Social media, digital messaging and other, comparable substitutes for human interaction increasingly change the way we behave, both socially and culturally (http://bits.blogs.nytimes.com/2015/11/04/in-2016-digital-transformation-goes-mainstream-idc-predicts). Prevailing technological trends often act as an activator for such shifts in human behavior. For example, following the move from (locally installed) desktop applications to software accessed through browsers and websites, recent years have been characterized by a particular increase in mobile, i.e., on-the-go, content consumption, triggered by the sustained propagation of (smart)phones and other portable media consumption devices [1]. This shift is based, inter alia, on the finding that users apply social rules to their interaction with systems, even though they know that doing so may be unsuitable [2].
Progressing from here, the next, rather significant paradigm shift is seen in the emergence of intelligent virtual agents, in particular messaging agents or (chat) bots. Essentially, these agents can be defined as computer programs which are capable of reading and writing messages autonomously, similar to how we humans perform this task [3]. Referring to the potential of intelligent agent technology, Beerud Sheth, CEO and co-founder of Teamchat, a San Francisco-based start-up specialized in smart-messaging APIs, highlights that “just as websites replaced client applications then, messaging bots will replace mobile apps now”. So if “messaging is the new platform”, then “bots are the new apps” (https://techcrunch.com/2015/09/29/forget-apps-now-the-bots-take-over/). Venture capitalist Benedict Evans also predicts this important shift towards mobile messaging, stating that “old means that all software expands until it includes messaging; new means that all messaging expands until it includes software” (http://ben-evans.com/benedictevans/2015/3/24/the-state-of-messaging). Consequently, it appears that these types of intelligent messaging agents are at the edge of becoming the new means of communication for organizations and private individuals alike. Yet, although the technological progress of building agent technology has been significant in recent years, insights on how to make these bots socially accepted have barely scratched the surface. Reasons for this lack of knowledge might be found in the multidisciplinarity of the topic. That is, understanding the characteristics of what makes a messaging service intelligent seems not so much a technical but rather a social, ethical or even philosophical problem to solve – one that may need to begin with a more general understanding of (human) intelligence and a subsequent definition of the type of intelligence that is expected from an (artificial) agent.

1.1. From the Brain to the Mind

In order to start a discussion about what it is that makes interactions intelligent, we first need to reflect upon what makes us humans interact and communicate in a way that is deemed adequate to a given situation. Such a reflection might start with a rather simplistic definition of the functioning of the human brain and how it is connected to the human mind. Kurzweil argues that the brain can be understood as a hierarchical pattern recognizer containing approximately thirty billion neurons situated in the human neocortex [4]. Patterns can be learned, predicted, recognized and implemented into other patterns. Since processing, however, happens simultaneously and relentlessly within the brain, there is neither a beginning nor an end in this processing chain. According to Kurzweil, patterns are tripartite—composed of an input, a name for the activation of the pattern, and connections to both higher- and lower-level patterns. He further argues that the human pattern recognition module performs probability estimations of input, its size, and the importance of its parameters, so as to determine the likelihood of a correct representation in the mind. Even though patterns are multidimensional, the hierarchical structure of the neocortex allows the assumption that recognition is based on one-dimensional pattern inputs, such as lists of data [5].
With respect to the human mind, it is particularly the capability of learning which seems essential to the human brain. As Minsky puts it: “The principal activities of brains are making changes in themselves” [6]. The human neocortex is training itself and learning new patterns continually in order to make sense of the input information. The learning process includes building connections among patterns as well as strengthening existing connections if patterns are triggered simultaneously [7]—a mechanism which has already been adopted by modern machine learning approaches based on artificial neural networks [4].
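To make the mechanism just described more concrete, the following minimal Python sketch illustrates a Hebbian-style update in which a connection between two patterns is strengthened whenever both are activated together. All names and values are hypothetical illustrations and are not taken from Kurzweil [4] or Hebb [7].

    # Illustrative Hebbian-style strengthening: connections between patterns
    # that are triggered together are reinforced (hypothetical sketch).
    learning_rate = 0.1
    weights = {("dog", "bark"): 0.2, ("dog", "meow"): 0.2}  # pattern-to-pattern links

    def hebbian_update(active_patterns):
        """Strengthen every connection whose two patterns are active together."""
        for (a, b), w in weights.items():
            if a in active_patterns and b in active_patterns:
                weights[(a, b)] = w + learning_rate * (1.0 - w)  # bounded growth

    for _ in range(5):                    # repeated co-activation of "dog" and "bark"
        hebbian_update({"dog", "bark"})
    print(weights)                        # the ("dog", "bark") link is now clearly stronger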
Learning, i.e., the storing of information, can be based on non-direct as well as direct thinking. The non-direct process stores patterns as “circuitous sequences of associations” whereas direct thinking uses lists and sub-lists [4]. Particularly during direct thinking processes, the mind is confronted with a cultural or ideological codex guiding thoughts. Kurzweil emphasizes this set of rules by highlighting that “many of these taboos are worthwhile, as they enforce social order and consolidate progress”. Thinking may thus be summarized as a process that uses connected patterns and clusters of patterns as well as narratives and stories to educate and train the mind to achieve a given goal. This short, and admittedly rather simplistic, description of the principal processes performed by the human brain may serve as a relevant foundation for creating a better understanding of an even more difficult human characteristic, i.e., (human) intelligence.

1.2. Intelligence and Its Link to Being Human-Like

Referring to Salovey and Mayer, one may find a number of definitions and interpretations of human intelligence and how they evolved throughout history [8]. One of the first definitions, attributed to Descartes, supposed that intelligence is “the ability to judge true from false” [9]. Later, Wechsler’s understanding of intelligence was based on “the aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal efficiently with his environment” [10]. Although this definition already included the distinction between mechanical, abstract as well as social intelligence, additional dimensions were still missing. In 1983, it was finally Howard Gardner who challenged the then predominant definition of intelligence and how such may be tested, by arguing that intelligence does not root in one single trait (i.e., there is not always one single answer to a given question) but is better explained by a model of multiple intelligences [11]. This multitude of intelligences was subsequently further refined and enhanced by Albrecht, who re-framed it as the six multiple smarts [12] composed of:
  • Abstract Intelligence (i.e., symbolic reasoning)
  • Practical Intelligence (i.e., getting things done)
  • Emotional Intelligence (i.e., self-awareness and self-management)
  • Aesthetic Intelligence (i.e., the sense of form, design, music, art and literature)
  • Kinesthetic Intelligence (i.e., whole-body skills, dancing or flying a jet fighter)
  • Social Intelligence (i.e., dealing with people, or as Albrecht puts it “the ability to get along with others and get them to cooperate with you” [12])
With these six dimensions of intelligence in place, Albrecht argued that their combination and its resulting synergies form the portrait of a true “Renaissance Person”—an appearance an agent needs to resemble if it aims to be perceived as human-like.

1.3. Intelligent Agents

Alan Turing’s famous imitation game poses the question: “Can machines think?” [13]. In 2015, Mikolov and colleagues stated that the time for intelligent machines has come. Sufficient computational power and immense amounts of data, complemented by complex machine-learning methods, allow for the creation of sophisticated general-purpose intelligent systems, whose focus is on smart communication and learning [14]. Similarly, Tecuci stated that the purpose of today’s artificial intelligence is to create intelligent agents, which are capable of achieving goals by knowing their environment and the stakeholders they have to deal with, as well as memorizing the gained information and improving their behavior through learning [15]. Or, as Lieberman put it 20 years ago: “it follows that there must be some part of the interface that the agent must operate in an autonomous fashion. The user must be able to directly observe autonomous actions of the agent and the agent must be able to observe actions taken autonomously by the user in the interface” [16].
Taking these basic rules as a guideline, we currently see intelligent interface agents starting to inhabit social media ecosystems. They are already capable of autonomously producing content as well as taking part in basic interactions on various social media platforms [17]. They aggregate content from different sources and automatically respond to inquiries for brands and companies in customer care settings. The literature on socially acting agents has also grown dramatically, primarily driven by advances in technology. One of the most notable systems with regard to the representation of social intelligence is Rea, a virtual real estate agent that people can query about buying properties. The system uses intonation, gaze direction, gesture and facial expression to support the conversation [18].
Yet in doing so, these agent technologies may cause a number of new challenges. For example, social botnets have the ability to expose private data by exploiting different vulnerabilities found with social media users. They may post, comment, moderate and share content [19], thereby potentially affecting humans’ perceptions of reality [20,21]. This type of malicious behavior also adds to the increasingly negative connotation of artificial intelligence (AI) technology and thus highlights that, if we want these intelligent agents to be trusted, we need to guarantee the trustworthiness of their autonomous doings. In other words, we do not need intelligent, but rather socially intelligent agents.

1.4. Socially Intelligent Agents

As described above, humans already do interact with agent technology both at work as well as in their private lives. During these interactions, the software agent usually tries to mimic human traits, although these efforts often remain unnoticed by the user [17]. To this end the Turing test still counts as the standard measure for human-like communication behavior of intelligent systems, as it not only requires intelligence to appear human-like, but also asks for additional social capabilities to be present [13]. As Persson and colleagues put it: “The ultimate purpose with socially intelligent agents is not to simulate social intelligence per se, but to let an agent give an impression of social intelligence” [22].
Adding to this discussion, Albrecht [12] defined a number of key aspects for social intelligence. Those include clarity, situational awareness, empathy, presence, and authenticity. For him, situational awareness is described as the ability to understand the situation and its circumstances, creating some sort of social radar which is used to detect happenings in any given situation. Studies exploring situational and context awareness of intelligent agents further found that, “context awareness allows applications to adapt themselves to their computing environment in order to better suit the needs of the user” [23]. Persson et al. also highlight that socially intelligent technology needs to understand its own presence and the context around itself, so that it can take humans as well as other agents into consideration. To achieve this, they took advantage of so-called primitive psychology and life-preserving aspects such as needs, desires, sensations and pain [22].
Contributing to the development of socially intelligent agents, researchers also aimed at a better understanding of user-agent relationships and their influence on the interaction. Coon, for example, explored different relationship distances, ranging from stranger to companion, optimizing the frequency and level of engagement [24]. Similarly, Bickmore and Schulman focused on different user-agent relationship distances to support health counseling at various intimacy levels [25]. Finally, trust and trust building seem to play a crucial role when developing agents which should exhibit some sort of socially intelligent behavior. To this end, relevant previous research includes Gratch’s work on virtual rapport, which shows that non-conscious behavioral feedback, such as mimicry and backchanneling, increases trust and consequently user-agent engagement in conversations [26], as well as Matsuyama and colleagues’ work showing that relationships may also be tightened through a better analysis of a user’s verbal and non-verbal cues, creating some sort of agent awareness [27].
While from the above we can see that previous research has investigated some of the aspects contributing to social intelligence (i.e., situational awareness, empathy, presence), the concept of agent authenticity seems to have been left out so far.

1.5. Authenticity of Agents

Agent authenticity was already a topic in the 1960s, when Joseph Weizenbaum first introduced his ELIZA [28]. While, in principle, ELIZA was not much more than an interactive diary, it was one of the first computer-enabled technologies able to build up some sort of human-technology relationship by showing interest. Later, scientists coined the term Darwinian buttons to describe relevant social actions such as tracking an individual’s movement, making eye contact, or gesturing kindly in acknowledgement of another person’s presence [29,30]. Turkle emphasizes that, since the advent of computers, people have been searching for criteria of authentic relationships, although today, computer companionship seems already normal. Even more so, she argues that “as robots become a part of everyday life, it is important that these differences are clearly articulated and discussed”. Here authenticity plays an important role. Yet, although research agrees on its importance [31], empirically tested measurement methods are so far missing [32,33].
Authenticity defines a person’s honesty and sincerity [12]. It is about establishing cooperation, preventing manipulation and being true to oneself and others. Moreover, having respect, staying true to one’s values and playing fair makes a person (and potentially also a system) authentic. Albrecht describes it with the German expression of being a “Mensch” (i.e., being a human) [12]. Derived from the Latin word authenticus and the Greek word authentikos, being authentic further means to be trustworthy, authoritative, and acceptable [34]—characteristics which are not only important in AI but also in general consumer behavior [35]. For example, Pine and Gilmore deem authenticity the new consumer sensibility [36], making it an important topic in marketing [37], tourism, leadership as well as in education, where it has already led to the development of several relevant research models and frameworks [38]. In software development, however, and here particularly in the development of autonomous agents, authenticity considerations as such are often missing. Johnson and Noorman emphasize this lack by saying that “responsibility issues should be addressed when artificial agent technologies are in the early stages of development” [39].
Although the overall concept of agent authenticity seems to be neglected by engineers, there has been work on single traits influencing authentic agent behavior. That is, researchers have evaluated the effect of transparency on agents’ argumentation capabilities [40] and decision making [41,42]. Also, conversational agents’ ability to anthropomorphize was subject to several studies (e.g., [43,44]), as was their conversational behavior (e.g., [27,45]). Finally, engineers have been working on making agents learn from experience, allowing them to build up some sort of contextual and (to some extent) social awareness, and consequently act autonomously and coherently (e.g., [46,47,48,49,50,51]). While all these research efforts focused on single characteristics of intelligent, potentially authentic agent behavior, they did not aim at understanding agent authenticity as a holistic concept and whether such may be compared to or inspired by human authenticity.
In the past, people drew hard lines when talking about machines being cognitive [30]. Today, however, computer culture accepts affective computing and sociable machines as well as flesh-machine hybrids, as long as they do not come too close, i.e., as long as they do not make us feel uncomfortable [52,53,54]. Being authentic is just one more piece of the puzzle. In order to become authentic, however, it is necessary to anthropomorphize both in the role of a single entity as well as in that of an enterprise representative, i.e., when acting in the name of a legal institution [36]. Only when this type of anthropomorphic behavior is achieved will the perception of brand and company identities be supported. In other words, conversations within everyday life are the most powerful form of (consumer) seduction, for which inauthentic conversational behavior creates a perception of phoniness.
Gundlach and Neville [55], Grayson and Martinec [56] as well as Beverland et al. [57] developed frameworks for the positive perception of authenticity. Yet, authenticity is deemed a social concept and thus, for agents to advance, they would need to be equipped with internal representations of relevant social information [58]. This may be achieved through social networks enhanced by individual beliefs, goals and intentions of an intelligent system. Agent-based social modeling has to specify this model, make basic assumptions, create interrelations and rules, and build models and research designs for relevant tests, simulations and experiments. With this type of social computing we should then be able to develop agents that are eventually capable of acting socially as agents, rather than trying to imitate humans. One question that remains, however, concerns the type of social characteristics which would lead to the initial breakthrough. In other words, what are the critical traits of authenticity we should start with?

2. Materials and Methods

In order to better understand the authenticity demands for intelligent agents we talked to a number of experts in artificial intelligence and socio-ethics. The goal was to verify and further explore the multi-characteristic construct of authenticity and its implications for the domain of autonomous, conversational agents. Interviews were conducted according to McCracken’s qualitative long interview method [59]. They were open-ended, following past studies, and focused on the understanding and conceptualization of domain-specific authenticity dimensions [60,61].
Potential interview partners were selected among the authors of relevant recent literature, focusing on the philosophical and socio-ethical field of authenticity as well as on artificial intelligence. Inclusion criteria were based on their contributions to the field, with their latest contribution dating back no longer than 3 years, as well as their respective impact expressed by citations according to Google Scholar. Starting with an initial sample of 35 authors we used a snowball sampling method to extend our reach. Eventually, a total of 68 potential interviewees were approached via email, of whom 12 agreed to participate in the study (9 male and 3 female). All experts had either already obtained a Ph.D. in their respective fields or were in the final stages of doing so. Also, all of them indicated previous experiences with, knowledge about, and interactions with messaging services as well as intelligent systems. Table 1 lists our interview participants, their respective areas of expertise, the year of their last contribution to the field, as well as the number of citations which Google Scholar connected to them at the time of the interview.
All interviewees gave their informed consent for recording and inclusion of their input before they participated in the study. The study was conducted in accordance with MCI’s guidelines outlining ethical considerations regarding research with human participation. The fulfillment of the respective protocol was approved by the MCI Research Ethics group.

3. Results

Interviews lasted between 15 and 45 min, were audio recorded, fully transcribed (note: for reasons of readability transcriptions were denaturalized) and subsequently analyzed applying McCracken’s long interview method. Results point to five relevant agent authenticity traits which are further discussed below.

3.1. Be Transparent

Companies are building up their chat bots for who-knows-what-reason. Obviously, they have their own purpose and so there are probably differences in authenticity based on the application of the bot.
(P11)
In 60 statements, data show that authentic messaging agents should showcase a transparent purpose. The design purpose ranges from providing specific functions to assisting users, having a belief model, being predictable and allowing users to create confidence in the decisions of the agent. Various statements showed that some of these factors are already incorporated in today’s systems. The characteristic of having a transparent purpose also includes the capability of acting transparently and having an intention, highlighted by P07 as “being able to provide information on how the bot reached a certain conclusion”. This means that in order to offer transparency, an agent has to provide legitimacy for its actions, supporting a certain type of predictability with respect to decision making and its rule-based value and belief system. All of which prevents so-called black-box behavior. Agents have to show “how they behave in specific situations, how they make decisions. […]; what we people do is not always predictable, but machines or agents have some rule base, which means they should not behave on their own […]; they should be programmed so as to correspond to some sort of ethical standard” (P10). This type of predictable behavior also relates to other important aspects such as security and objectivity, where a user should be able to rely on an expected agent behavior. Also, our experts see the need for an agent to have some type of internal sortation, i.e., a defined attitude. This internal order should reflect the agent’s creator/owner and thus provides relevant information about the agent’s purpose. Ultimately, the agent is perceived as a “machine that is representing a specific organization” (P11). Thus, “if an agent doesn’t tell you who its creators or owners are, and what its purpose is, it creates an inconclusive context of interaction; consequently, people may easily feel uncomfortable” (P07).
Table 2 lists the number of statements related to being transparent for each interviewee. Interview participants were split into those coming from the machine-learning/AI domain and those coming from the socio-ethics domain. While, given these numbers, it does seem that transparency is a rather socio-ethical topic, a comparison between groups did not show a significant difference (t = −2.25, p = 0.06).
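The group comparisons reported here and in the following subsections are independent two-sample t-tests over the per-interviewee statement counts listed in Tables 2–6. As a brief illustration of the computation, the Python sketch below uses made-up counts, not the study’s data:

    # Illustrative two-sample t-test over per-interviewee statement counts.
    # The counts below are hypothetical; the actual numbers are in Tables 2-6.
    from scipy import stats

    ml_ai_counts = [3, 4, 2, 5, 3, 4]         # machine-learning/AI group (made up)
    socio_ethics_counts = [6, 5, 7, 4, 6, 5]  # socio-ethics group (made up)

    t_value, p_value = stats.ttest_ind(ml_ai_counts, socio_ethics_counts)
    print(f"t = {t_value:.2f}, p = {p_value:.2f}")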

3.2. Learn from Experience

Basically, it has to learn something, but since it has to learn, it may also make mistakes. Mistakes like children do—or not only children. Although, for some reason you don’t want the artificial intelligence to make mistakes because it is supposed to be sort of perfect.
(P08)
A total of 59 unique ideas give a comprehensive insight into how the learning from experience characteristic creates authenticity in messaging agents. As showcased by the quote above, the learning process follows a path very similar to human learning. This includes cooperation with other agents as well as humans in order to learn from experience, including trial and error. Thus, assisted by human intervention, authentic messaging agents should learn from experience, patterns, and iterations. While our experts clearly highlight that “deep learning relies on large amounts of data” (P05) they also think that authentic messaging agents should be capable of developing their goals beyond the often-obscure application of rules. That is, the continuous prevention of errors provided by the adherence to strict rules and black-lists (i.e., rules specifying what not to do or not to say), although supporting predictability, is not always beneficial when it comes to fostering authenticity. An authentic agent should rather openly behave within a certain margin of error, which indicates social intelligence. P01 supports this view by stating that “if there is no margin of error, the AI is either a genius or it does not want to show any traces of hesitation, uncertainty or potential fault”, which might be perceived as deceptive. Yet, while an agent should be transparent about potential misinterpretations and errors, learning from experience also means that it should reflect upon its erroneous actions and as a consequence prevent them in the future.
A second important aspect of authentic learning from experience relates to context-aware learning, where an agent continuously adapts behavioral patterns to what it is able to perceive from its environment. For example, “if it knows from previous behavior and behavioral patterns how you would react and how you would usually use your phone, it would know that after 8 pm calls would normally be only between you and your wife” (P10). Context-aware learning means that it would not only react to this understanding but also make this newly acquired knowledge transparent, avoiding pure black-box behavior. Essentially, it should be able to “explain the workings and rationale of its system so that we can judge whether its belief model is well founded” (P04).
Table 3 lists the number of statements related to learning from experience for each interviewee. Again, interview participants were split into those coming from the machine-learning/AI domain and those coming from the socio-ethics domain. A comparison between groups did not show a significant difference regarding the number of statements (t = 0.36, p = 0.73).

3.3. Anthropomorphize

It needs a pretty good knowledge base about what it’s like to be a human.
(P03)
More than fifty interview statements explain that a messaging agent has to anthropomorphize in order to be authentic. That is, the agent “has to understand human physiology, it has to understand history, and it has to understand culture because it is part of the user as well as the environment to which it is deployed to” (P03). Designing such a character, “main concerns are the consistency of all aspects of life history and emotional reactions to make the chat bot as real as possible” (P06). Based on this understanding, authentic agents should “create a human persona, which you can relate to and model for yourself” (P01). One may even speak of a need for charisma where an authentic agent would be expected to “correct a user’s interpretations and provide according responses” (P02). To fully anthropomorphize, authentic agents would also need to take care of individual differences related to users and their environment. This goes along with a certain culture-awareness and knowledge of the differences within a culture. Even more so, they should adapt to the local environment, adopting behavior which reflects the given culture.
Not only in human-human interaction but also in all human-computer interaction, the level of trust is important [18]. To this end a second aspect connected to anthropomorphic behavior concerns the existence of trust. That is, in order to “show proper social intelligence an agent needs to build up trust” (P01). One interviewee would even argue that in order to build up a trustful relationship with an agent one requires exposure; i.e., “there has to be something at stake; in a social relationship, something is at stake, and that’s what makes it exciting […]; to most of today’s bots you can say you are an idiot and they would not react to that or change their behavior; you could not do that with another person, as you would destroy your relationship” (P04).
Table 4 lists the number of relevant statements per participants. A comparison between groups (i.e., machine learning/AI vs. socio-ethics) did not yield any significant differences (t = 1.06, p = 0.32).

3.4. Behave Conversationally

It should write like me, pause like me, and talk like me.
(P09)
Adequate conversational behavior is highlighted by 45 statements as being an important characteristic of an authentic messaging agent. Elements of this behavior range from having a mission and getting to the point, to not wasting time, filling in blanks, understanding quickly, bringing value to the interaction that goes beyond task-orientation, keeping the conversation moving, aligning expectations, and being surprising. However, conversational behavior also includes conversational awareness and conversational skills. These factors are described by turn-taking in conversations, meta-verbal-cues, the response to conversational signals, joint attention or the handling of overlap in dialogs. To that end, one of our interviewees explained that “a lot of these things should happen in the system, a lot of non-verbal things, like when it should speak and how it should speak and where it should look at and so on” (P04). Previous research has also found that visual cues [62] or emoticons [63] may be used to enhance such conversational behavior. This type of intelligent interaction behavior leads to the perception of responsiveness and availability, and enriches a conversation by respecting prevailing values and manners. P08, for example, states that we need to ask ourselves “what kind of values are we building and what values do we have ourselves; it’s like with children—you say whatever you want but they see you, how you are, and so they act accordingly; which means that agents should probably learn this from interacting with themselves as well as with other humans” (P08).
The number of relevant statements per participant is shown in Table 5. No differences were found between interviewees from the machine-learning/AI field and those from the socio-ethics field (t = 0.37, p = 0.72).

3.5. Be Coherent

I think when people think of a chat bot, when they interact with a bot, it’s all kind of lumped together into one; […] this human-like, and authenticity, and genuineness, and trustworthiness, and all of that; I think this is all kind of grouped together in people’s minds.
(P11)
Our interview data shows 22 instances referring to the characteristic of coherence. Coherence with respect to authentic agents describes the context and social awareness capability to memorize, to relate to common experiences and to establish common ground. Statements highlight that reacting to and through context, as well as making sense of social situations respecting the reactions of participants (both real and artificial) is a crucial attribute of authenticity. Also, an agent has to relate to prior experiences with human and artificial interlocutors. In doing so it creates a timeline of interactions and a respective memory of interactions, which allows for the acknowledgement of different standpoints and thus supports coherent and reproducible streams of argumentation. “You have to add a timeline during the conversation, so your answer can not only be based on what was previously mentioned, like right before; it has to be like a follow-up, not like single interaction steps” (P02). To help with this, “the agent should be aware of some user dependent traits […]; if it is aware of like, for example when the agent would be built into Facebook […]; to use this social context information to build common ground”. Using this information “it would know a lot of a user’s personal history and thus could give much more proper feedback” (P02).
Relevant statement numbers are shown in Table 6. Also, with respect to being coherent, numbers did not differ between participants from the machine learning/AI field and those from the socio-ethics field (t = 0, p = 1).

4. Verification of Interview Insights

In order to verify the insights gained through the above described interview study, we set up a survey targeted at experts in relevant fields. Questions were open ended and positioned along the five authenticity traits identified by the interviews and relevant literature (cf. Table 7). In addition, we asked participants to rate how competent they felt in providing answers to these questions (from 1 = low: I am a novice to 5 = high: I am an expert). A total of 40 academics from leading institutions in the US, Europe and Asia were approached via email and asked to complete the survey as well as rate the relevance of Linguistics, Artificial Intelligence, Software Engineering, Psychology, Social Science, Human-Computer Interaction, Philosophy, and Ethics for their field of work (from 1 = not relevant at all to 5 = very relevant). Selection of these experts was again based on their research interest and recent contributions to the topic. People who had already participated in the preceding interview study were not contacted again. In addition, the survey was sent to a local interest group which meets regularly to discuss recent developments in Artificial Intelligence. Although feedback was only received from five people (cf. Table 8 for participants’ background information and overall data on work relevance with respect to the above highlighted fields), all of whom gave their informed consent for including their data in scientific publications, the responses appear to validate the identified authenticity traits.
The importance of transparency, for example, was highly endorsed. Although its degree of predictability may vary by task and context, it seems that particularly the understanding of data provenance and algorithms as well as deductive reasoning is what helps explain agent behavior.
As for the question whether a bot should mimic human behavior or rather develop its very own personality, answers show that such depends on the context. Systems whose primary goal is to represent an existing entity should exhibit respective behavior, although such may be perceived as deceptive and thus potentially unethical (S02). On the other hand, systems which are strong enough to be perceived as independent entities may very well develop their own personality traits, independent and distinct from their “mother institutions”. An example of this could be seen in Alexa which, although created by Amazon, clearly aims to have an independent appearance (participants’ overall competence rating with respect to answering questions regarding transparency: M = 3.20; SD = 1.79; Mode = 5; Median = 3).
With respect to learning, survey answers particularly highlight the need for an authentic agent to benefit from experience and further to change its communication behavior based on a given context. To that end, context awareness not only refers to the physical environment but also the speaker’s emotional state (participants’ overall competence rating with respect to answering questions regarding learning: M = 3.00; SD = 1.58; Mode = N/A; Median = 3).
An agent’s ability to anthropomorphize, however, seems to divide the experts in the field (participants’ overall competence rating with respect to answering questions regarding anthropomorphizing: M = 3.00; SD = 2.00; Mode = 1; Median = 3). On the one hand, some feedback shows that human-like AIs are not necessary (S04 and S05) and potentially unethical (S02) if one is simply interested in completing a task or solving a problem. On the other hand, S05 highlights Watzlawick’s principles of communication, which greatly emphasize the relationship between message sender and receiver as being an influencing factor in the communication process [64]. The type of relationship that is required may, however, depend on the given context. That is, building customer loyalty might require a stronger social connection (and thus a more human-like agent) than providing information on the weather (S04). This is also in line with recent work on rapport modeling and social reasoning used in agents to provide more personalized information to human interlocutors [27], to strengthen the relationship between the system and the user and thus foster the exchange of relevant information [45], or to maintain users’ engagement [65].
In general it appears that the given task and context defines the expected conversational behavior, where the available ability level should range from answering simple questions to actively leading a dialog (S04). As for the application of conversational rules and values, feedback further shows that a truly authentic agent is able to distinguish between evidence-based statements (i.e., statements to the best of its knowledge) and guesses, and that it would make this distinction also transparent to its interlocutors (S01) (participants’ overall competence rating with respect to answering questions regarding conversational behavior: M = 2.80; SD = 1.48; Mode = 3; Median = 3).
Finally, as for the coherence factor, the provision of similar answers to similar questions and the use of coherent language structures as well as levels of abstraction seem to be key expectations (e.g., mentioned by S04). Authentic agents should be equipped with values so that arguments and affiliated reasoning strategies are based on facts, logic and transparent opinions (S03). To this end, even the ability to de-escalate was named as a requirement for authentic behavior (S05). Whether agents should be able to establish common ground (another core principle of human dialog), however, triggers ethical concerns, as such would imply a cultural, social and conversational equivalency, which simply does not (yet) exist (S02) (participants’ overall competence rating with respect to answering questions regarding coherent behavior: M = 2.80; SD = 1.48; Mode = 3; Median = 3).
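The competence ratings reported in parentheses throughout this section are plain descriptive statistics over the five survey responses. The short Python sketch below uses hypothetical ratings to show how such values can be computed and why the mode is reported as N/A when no single rating occurs more often than the others:

    # Descriptive statistics over expert competence ratings (1 = low ... 5 = high).
    # The ratings below are hypothetical, not the survey's actual responses.
    import statistics

    ratings = [5, 4, 3, 2, 1]               # all values unique -> no single mode

    mean = statistics.mean(ratings)
    sd = statistics.stdev(ratings)          # sample standard deviation
    median = statistics.median(ratings)
    modes = statistics.multimode(ratings)   # list of the most frequent values
    mode = modes[0] if len(modes) == 1 else "N/A"

    print(f"M = {mean:.2f}; SD = {sd:.2f}; Mode = {mode}; Median = {median}")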

5. Discussion

The interview and survey results presented above show that authenticity in messaging agents is a multi-characteristic concept. As such it encompasses most, if not all aspects of what Albrecht describes as social intelligence [12].
First, authenticity relates to context awareness including context-aware learning, social context awareness as well as conversational awareness. This means that an authentic agent should react to and through context. Interactions should be context-dependent, which means that learning from context so as to maintain a conversation’s meaningfulness is crucial. This includes gathering information from different sources such as social media, daily interactions, behavioral patterns as well as sensors to which the agent may have access. According to Wang et al. [58], the gathering of social information such as social relations, role structures or influencers of a social environment is also an important agent task. While Albrecht lists situation awareness as a particularly important dimension of social intelligence [12], other authors focus more on the coordination of events and decisions within an environment when talking about agent intelligence [66,67].
With respect to context awareness, it is mainly the ability to keep a timeline of interactions and the capability of memorizing events, actions and decisions which help increase an agent’s authenticity. As Beverland found, links to the past are relevant to establish authenticity by creating a personal agent history, which also supports the grounding process [68]. Establishing common ground is necessary as it preserves the relationship between agent and user, and adds to an upright and ethical conversation. Authenticity builds on these relationships rather than on task-orientation, which makes trust another critical constituent of the agent-user relationship. Both the interview data as well as the literature show that task-orientation does not add to authenticity [69]. Authentic agents rather establish trust through personal coherence or individual conversations with the user [70].
In addition to awareness and trust, experiences through and with the agent facilitate an overall authentic appearance. Learning from experiences and their context may even create a certain state of connectedness between a user and an agent. As described by Albrecht [12], Pine and Gilmore [36] as well as Chhabra [71,72], the continual challenge or cyclical process of being authentic is merely achieved through memorizing, learning from experiences and establishing connections.
Furthermore, our results show that when an agent learns to create individualized patterns, it should also explore the individual differences found in users. That is, depending on a user’s gender, age, culture or personal traits, an authentic agent needs to become a type of persona the user wants to interact with; an entity which remains “real”, true to itself and honest. Some of our interview participants described these as charisma building factors which have a great impact on the agent’s authenticity. Others called it anthropomorphizing, similar to Pine and Gilmore [36] who argue that anthropomorphizing is relevant for the authentic perception of consumers when thinking of a brand. In addition, Albrecht lists “staying true to oneself” as one of the four dimensions of authenticity [12].
Since an agent should know and understand itself, it requires an internal sortation regarding values, attitude, integrity, and its origin. This need for understanding its presence and the integration of a determined value and belief system is also found in previous research conducted by Persson et al. [22], Peterson and Seligman [31], Albrecht [12], Turkle [30] as well as Gundlach and Neville [55]. This helps when verifying an agent’s origin and integrity. Beverland [68] and Beverland et al. [73] argue that agents may even connect through values and identity so as to become an inherent part of a given cultural setting, which comes back to culture-awareness being an essential part of agent authenticity. In other words, an agent should incorporate cultural behavior, characteristics as well as regulations. Through the integration of cultural models and by adopting common sense principles, the agent builds its identity within a community. Doing so, it may be judged by the same strategies of social intelligence as humans are [22]. Also, moral assumptions about culture and given circumstances, and a personal code of conduct are considered factors influencing authenticity [12,74].
Finally, our interview and survey data highlights that creating transparent and predictable relationships increases the confidence level between interlocutors. Thus, an agent’s purpose has to be transparent so as to allow for the verification of behavior, i.e., decision making and action taking. Interviewees highlighted that increased predictability and knowing the intention of an agent increases the likelihood of an interaction. As Molleda and Jain [75] show, the commitment to an objective is a core requirement for authentic behavior in organizations. Such also applies to authentic agents (in particular if they act as representatives of organizations). Albrecht [12] defines this as raison d’être which may be achieved by having a defined mission statement, objectives, priorities and an action-reaction value map.
Thus, according to our results, it seems that combining the characteristics of anthropomorphizing and having a transparent purpose defines the core requirements to be met by an agent which aims to be authentic. To that end, Groves [76] as well as Schallehn et al. [77] state that the behavior of an authentic agent is guided by its identity. However, to create an authentic messaging agent, behavior also needs to incorporate conversational aspects. Those include conversational skills such as listening intently, handling the dialog, turn-taking, backchanneling, and keeping attention. Furthermore, the agent should be able to keep the conversation meaningful and interesting. This may be achieved by incorporating context into the conversation and adjusting it to the agent’s personal traits. Molleda calls this strategic communication efforts which have to be undertaken in order to create an authentic appearance [75]. Previous research in social science further shows that being able to articulate ideas, thoughts, views and actions in a way so that others can understand them is another indicator of social intelligence and may thus also be required [12,22].
Being able to lead or take part in conversations pushes the agent further towards incorporating non-verbal conversational skills. These are defined as meta-verbal-cues, which include knowing when to pause, adapting the pace of the conversation, following the trend of the conversation, showcasing availability, attention as well as responsiveness. Also, the agent should be able to communicate both synchronously as well as asynchronously, and it should have the ability to question a user’s input. Our interviewees also support previous research in that conversational behavior includes high quality language skills, the incorporation of meta-verbal-cues as well as the mastery of Darwinian buttons [12,30,78].
The ability to understand culture and circumstances as well as to create an agent persona requires the capacity of learning from experience. As agents need to be capable of handling both requests coming from the user as well as self-triggered actions, they often need to connect to APIs, interfaces, and other types of networks. This would allow for the outsourcing of abilities that are not frequently used or those that do not require significant storage and/or computing power. Showcasing this type of distributed agent, Choi and Yoo propose a way of communication between humans and agents using instant messaging and a unified platform for communication and cooperation [79]. This approach is similar to distributed AI, where the intelligence is contained in objects situated in different locations [80]. While in general such autonomy would support an agent’s ability to learn from experience, it lacks supervision, and so it may not be clear which direction an agent would take and how relevant information would be integrated into the system, potentially opening the door to manipulation and deception caused by faulty data as well as data neglect.
Although to err is human, in an agent it may be interpreted as non-authentic behavior. Error in this context is defined as performing unpredictable or wrong actions, not acting according to the agent’s purpose, persona or communication strategies, or acting within a wrong cultural model. Through learning, agents may, however, understand and adopt human problem-solving strategies to prevent error. As with human education, they may create an action-value map that shows how certain actions or behaviors are perceived. This map should be constantly updated and expanded, essentially depicting the agent’s personality.
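The action-value map mentioned above is not specified further by our interviewees. A minimal Python sketch, with all names and numbers being hypothetical, shows one possible reading: the agent keeps a running score of how each of its actions has been received and nudges that score after every interaction.

    # Hypothetical action-value map: a running record of how the agent's actions
    # are perceived, updated after each interaction (one possible reading only).
    action_values = {}  # action name -> perceived value in [0, 1]

    def update_action_value(action, feedback, weight=0.2):
        """Move the stored value towards the observed feedback (0 = poor, 1 = good)."""
        current = action_values.get(action, 0.5)  # neutral prior for unseen actions
        action_values[action] = (1 - weight) * current + weight * feedback

    update_action_value("greet_by_first_name", feedback=1.0)  # well received
    update_action_value("call_after_8pm", feedback=0.0)       # poorly received
    update_action_value("call_after_8pm", feedback=0.0)
    print(action_values)  # the map now reflects which behaviors to prefer or avoid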
In summary we may therefore argue that our interviews with 12 experts and a confirmatory survey with 5 participants point to a manifold concept of agent authenticity in which five key characteristics seem particularly prominent. Those characteristics are:
  • To have a transparent purpose: i.e., an agent has to showcase its intent and decision making in a transparent way so as to make its actions understandable and predictable. Acting transparently and creating confidence by representing the creator or originator of the agent allows for keeping the conversation under control.
  • To learn from experience: i.e., an agent requires two-way learning strategies where it autonomously learns from cultural, behavioral, personal, conversational and contextual interaction data.
  • To anthropomorphize: i.e., an agent has to act as a persona including personal values, attitudes, and culture to establish a relationship with a user. This includes the building of individualized experiences as well as the creation of trust.
  • To show strong conversational behavior: i.e., an agent should incorporate relevant communication strategies to successfully handle dialogs and adapt non-verbal interaction behavior such as intelligent reasoning and decision making (cf. recent work on rapport modeling and social conversational strategies [27,45,65]).
  • To be coherent: i.e., an agent has to keep up with a conversation and relate to previous elements and experiences. Furthermore, in order to build common ground, it has to be aware of the digital and natural context of the conversation and its (different) conversation partners.
While these five characteristics are generally interrelated, our interview and survey results highlight that coherence and learning from experience are the main contributors to authenticity. That is, keeping track of the conversation connects conversational behavior and coherence. Furthermore, building common ground aims to generate individualized trust and creates a relationship between interlocutors (both human and artificial). Representing a persona is tightly connected to the purpose of the agent as well as its internal sortation. Acting predictably and transparently creates trust. Finally, so as to close the circle, learning from the conversational behavior and interactions helps the agent advance its communication capabilities, and cultural and conversational awareness supports the development of an agent persona.

6. Conclusions

The results of the studies presented in this paper add to the existing body of knowledge in several ways. A primary contribution of our work concerns a theoretical definition of authenticity related to agent technology, achieved through the conceptualization of constructs found in artificial intelligence as well as in socio-ethics. In doing this, we believe we contribute to theory building in the area of socio-ethics for emerging technologies. As mentioned by Shoemaker and colleagues [81], theoretical constructs are the building blocks for social science theory. In our study we found that authenticity may be defined as the interplay of learning by experience, anthropomorphizing, having a transparent purpose, showing (advanced) conversational behavior and being coherent. This insight should work as a guideline with which, we believe, researchers will be able to improve socio-ethics for emerging technologies.
Another relevant contribution of our work should be seen in its connection of two research streams which, to our knowledge, has not been done in this way before. Although the combination of socio-ethics and emerging technologies is an existing research field, our analysis tried to interlink an expert-focused understanding of authenticity related to socio-ethics and high-level concepts inherent to the development of artificial agents. That is, our work focused on identifying characteristics and factors which will hopefully lead to the development of more authentic intelligent agents. In doing so, our study found that the creation of such authentic agents relies on several, highly interconnected characteristics. These interrelations further indicate complex connections within as well as between characteristics, which makes their harmonization an important challenge of future agent development efforts.
In summary, we may thus argue that the presented studies should count as the beginning of a journey towards a better understanding of agent authenticity. We do recognize that the generated insights may suffer from a certain participant bias (note: all interview and survey participants have had previous contact and relevant experience with the development of artificial agents or have been researching within this field) and from the fact that they did presume the existence of a binary problem space (i.e., what is authentic vs. what is not) while authenticity might rather be measured on a continuum. Future work should thus counter these limitations as well as strengthen these insights. For example, the rather theoretical conceptualization of authentic agent behavior has to be developed further so as to eventually reach the state of a more profound theoretical framework. The dynamic association between the five characteristics may guide this theory building. Also, the interplay of these characteristics may be an interesting path for future explorations, as may a user’s understanding of authenticity, so as to evaluate the benefit of cultivating such authentic behavior. Future efforts may further expand the authenticity concept to other contexts by, for example, investigating the interplay of brand authenticity and agent authenticity for marketing purposes. Finally, while our work focused on messaging agents, future ambitions may apply these findings to voice, haptic and other interaction channels, which may eventually lead to the type of authentic robots Alan Turing had in mind when proposing his concept of the universal machine [13].

Author Contributions

M.N. researched the relevant work and conducted and analyzed the primary interview study. L.B. conducted the verification study. S.S. supervised both studies and edited the article in collaboration with A.G.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sheth, B. Forget Apps, Now the Bots Take over, Tech Crunch. 2015. Available online: https://techcrunch.com/2015/09/29/forget-apps-now-the-bots-take-over/?guccounter=1 (accessed on 15 September 2018).
  2. Nass, C.; Steuer, J.; Tauber, E.R. Computers Are Social Actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 24–28 April 1994; ACM: New York, NY, USA, 1994; pp. 72–78. [Google Scholar] [CrossRef]
  3. Samadyar, Z. Intelligent agents: A comprehensive survey. Int. J. Electron. Commun. Comput. Eng. 2014, 5, 790–798. [Google Scholar]
  4. Kurzweil, R. How to Create a Mind: The Secret of Human Thought Revealed; Penguin Books: New York, NY, USA, 2013; p. 352. [Google Scholar]
  5. Hawkins, J.; Blakeslee, S. On Intelligence, 1st ed.; Times Books: New York, NY, USA, 2004; p. 272. [Google Scholar]
  6. Minsky, M. The Society of Mind; Simon and Schuster: New York, NY, USA, 1986; p. 339. [Google Scholar]
  7. Hebb, D.O. The organization of behavior. In Neurocomputing: Foundations of Research; Anderson, J.A., Rosenfeld, E., Eds.; MIT Press: Cambridge, MA, USA, 1988; Chapter 4; p. 752. [Google Scholar]
  8. Salovey, P.; Mayer, J.D. Emotional Intelligence. Imaginat. Cognit. Personal. 1990, 9, 185–211. [Google Scholar] [CrossRef]
  9. Laertius, D. Lives of Eminent Philosophers, Volume 2; Books 6-10; Harvard University Press: Cambridge, MA, USA, 2000; p. 704. [Google Scholar]
  10. Wechsler, D. The Measurement and Appraisal of Adult Intelligence; Williams & Wilkins Company: Philadelphia, PA, USA, 1958; p. 324. [Google Scholar]
  11. Gardner, H. Frames of Mind: The Theory of Multiple Intelligences, 3rd ed.; Basic Books: New York, NY, USA, 2011; p. 528. [Google Scholar]
  12. Albrecht, K. Social Intelligence: The New Science of Success; John Wiley & Sons: San Francisco, CA, USA, 2006; p. 289. [Google Scholar]
  13. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
  14. Mikolov, T.; Joulin, A.; Baroni, M. A roadmap towards machine intelligence. arXiv, 2015; arXiv:1511.08130. [Google Scholar]
  15. Tecuci, G. Artificial intelligence. Wiley Interdiscip. Rev. Comput. Stat. 2012, 4, 168–180. [Google Scholar] [CrossRef]
  16. Lieberman, H. Autonomous interface agents. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 22–27 March 1997; ACM: New York, NY, USA, 1997; pp. 67–74. [Google Scholar]
  17. Ferrara, E.; Varol, O.; Davis, C.; Menczer, F.; Flammini, A. The rise of social bots. Commun. ACM 2014, 59, 96–104. [Google Scholar] [CrossRef]
  18. Cassell, J.; Bickmore, T.; Billinghurst, M.; Campbell, L.; Chang, K.; Vilhjálmsson, H.; Yan, H. Embodiment in Conversational Interfaces: Rea. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Pittsburgh, PA, USA, 15–20 May 1999; ACM: New York, NY, USA, 1999; pp. 520–527. [Google Scholar]
  19. Hwang, T.; Pearce, I.; Nanis, M. Socialbots: Voices from the fronts. Interactions 2012, 19, 38–45. [Google Scholar] [CrossRef]
  20. Boshmaf, Y.; Muslukhov, I.; Beznosov, K.; Ripeanu, M. Design and analysis of a social botnet. Comput. Netw. 2013, 57, 556–578. [Google Scholar] [CrossRef] [Green Version]
  21. Kramer, A.D.I.; Guillory, J.E.; Hancock, J.T. Experimental evidence of massive-scale emotional contagion through social networks. Proc. Natl. Acad. Sci. USA 2014, 111, 8788–8790. [Google Scholar] [CrossRef] [PubMed]
  22. Persson, P.; Laaksolahti, J.; Lönnqvist, P. Understanding socially intelligent agents—A multilayered phenomenon. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2001, 31, 349–360. [Google Scholar] [CrossRef]
  23. Lei, H. Context awareness: A practitioner’s perspective. In Proceedings of the 2005 International Workshop on Ubiquitous Data Management, Tokyo, Japan, 4 April 2005; UDM: Tokyo, Japan, 2005; pp. 43–52. [Google Scholar]
  24. Coon, W.M. A Computational Model for Building Relationships between Humans and Virtual Agents. Ph.D. Thesis, Worcester Polytechnic Institute, Worcester, MA, USA, 2012. [Google Scholar]
  25. Bickmore, T.; Schulman, D. Empirical validation of an accommodation theory-based model of user-agent relationship. In International Conference on Intelligent Virtual Agents; Springer: Berlin/Heidelberg, Germany, 2012; pp. 390–403. [Google Scholar]
  26. Gratch, J.; Okhmatovskaia, A.; Lamothe, F.; Marsella, S.; Morales, M.; van der Werf, R.J.; Morency, L.P. Virtual rapport. In International Workshop on Intelligent Virtual Agents; Springer: Berlin/Heidelberg, Germany, 2006; pp. 14–27. [Google Scholar]
  27. Matsuyama, Y.; Bhardwaj, A.; Zhao, R.; Romeo, O.; Akoju, S.; Cassell, J. Socially-aware animated intelligent personal assistant agent. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Los Angeles, CA, USA, 13–15 September 2016; Association for Computational Linguistics: Stroudsburg, PA, USA; pp. 224–227. [Google Scholar]
  28. Weizenbaum, J. ELIZA—A computer program for the study of natural language communication between man and machine. Commun. ACM 1966, 9, 36–45. [Google Scholar] [CrossRef]
  29. Turkle, S. Whither psychoanalysis in computer culture. Psychoanal. Psychol. 2004, 21, 16–30. [Google Scholar] [CrossRef]
  30. Turkle, S. Authenticity in the age of digital companions. Interact. Stud. 2007, 8, 501–517. [Google Scholar] [CrossRef]
  31. Peterson, C.; Seligman, M.E.P. Character Strengths and Virtues: A Handbook and Classification, 1st ed.; American Psychological Association: Worcester, MA, USA; Oxford University Press: Oxford, UK, 2004; p. 800. [Google Scholar]
  32. Lopez, F.G.; Rice, K.G. Preliminary development and validation of a measure of relationship authenticity. J. Couns. Psychol. 2006, 53, 362–371. [Google Scholar] [CrossRef]
  33. Sheldon, K.M.; Ryan, R.M.; Rawsthorne, L.J.; Ilardi, B. Trait self and true self: Cross-role variation in the Big-Five personality traits and its relations with psychological authenticity and subjective well-being. J. Personal. Soc. Psychol. 1997, 73, 1380–1393. [Google Scholar] [CrossRef]
  34. Cappannelli, G.; Cappannelli, S.C. Authenticity: Simple Strategies for Greater Meaning and Purpose at Work and at Home; Clerisy Press: Cincinnati, OH, USA, 2005; p. 229. [Google Scholar]
  35. Beattie, J.; Fernley, L. The Age of Authenticity: An Executive Summary; Cohn & Wolfe: New York, NY, USA, 2014. [Google Scholar]
  36. Pine, B.J.; Gilmore, J.H. Keep it real. Mark. Manag. 2008, 17, 18–24. [Google Scholar]
  37. Leigh, T.W.; Peters, C.; Shelton, J. The consumer quest for authenticity: The multiplicity of meanings within the MG subculture of consumption. J. Acad. Mark. Sci. 2006, 34, 481–493. [Google Scholar] [CrossRef]
  38. Gulikers, J.T.M.; Bastiaens, T.J.; Kirschner, P.A. A five-dimensional framework for authentic assessment. ETR D 2010, 52, 67–86. [Google Scholar]
  39. Johnson, D.G.; Noorman, M. Recommendations for future development of artificial agents. IEEE Technol. Soc. Mag. 2014, 4, 22–28. [Google Scholar] [CrossRef]
  40. Moulin, B.; Irandoust, H.; Bélanger, M.; Desbordes, G. Explanation and argumentation capabilities: Towards the creation of more persuasive agents. Artif. Intell. Rev. 2002, 17, 169–222. [Google Scholar] [CrossRef]
  41. Haynes, S.R.; Cohen, M.A.; Ritter, F.E. Designs for explaining intelligent agents. Int. J. Hum.-Comput. Stud. 2009, 67, 90–110. [Google Scholar] [CrossRef]
  42. Kakas, A.; Moraitis, P. Argumentation based decision making for autonomous agents. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, Melbourne, Australia, 14–18 July 2003; pp. 883–890. [Google Scholar]
  43. Seeger, A.M.; Pfeiffer, J.; Heinzl, A. When do we need a human? Anthropomorphic design and trustworthiness of conversational agents. In Proceedings of the Sixteenth Annual Pre-ICIS Workshop on HCI Research in MIS, AISeL, Seoul, Korea, 10 December 2017. [Google Scholar]
  44. Cheng, A. Chat, Connect, Collapse: A Critique on the Anthropomorphization of Chatbots in Search for Emotional Intimacy. Scripps Senior Theses. 1107. 2018. Available online: http://scholarship.claremont.edu/scripps_theses/1107 (accessed on 15 September 2018).
  45. Romero, O.J.; Zhao, R.; Cassell, J. Cognitive-inspired conversational-strategy reasoner for socially-aware agents. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; AAAI Press: Palo Alto, CA, USA, 2017; pp. 3807–3813. [Google Scholar]
  46. Franklin, S.; Patterson, F., Jr. The LIDA architecture: Adding new modes of learning to an intelligent, autonomous, software agent. Pat 2006, 703, 764–1004. [Google Scholar]
  47. Shawar, B.A.; Atwell, E.S. Using corpora in machine-learning chatbot systems. Int. J. Corpus Linguist. 2005, 10, 489–516. [Google Scholar] [CrossRef]
  48. Serban, I.V.; Sankar, C.; Germain, M.; Zhang, S.; Lin, Z.; Subramanian, S.; Kim, T.; Pieper, M.; Chandar, S.; Ke, N.R.; et al. A deep reinforcement learning chatbot. arXiv, 2017; arXiv:1709.02349. [Google Scholar]
  49. Sun, R. The CLARION cognitive architecture: Extending cognitive modeling to social simulation. In Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation; Cambridge University Press: Cambridge, UK, 2006; pp. 79–99. [Google Scholar]
  50. Laird, J.E.; Newell, A.; Rosenbloom, P.S. Soar: An architecture for general intelligence. Artif. Intell. 1987, 33, 1–64. [Google Scholar] [CrossRef]
  51. Anderson, J.R.; Matessa, M.; Lebiere, C. ACT-R: A theory of higher level cognition and its relation to visual attention. Hum.-Comput. Interact. 1997, 12, 439–462. [Google Scholar] [CrossRef]
  52. Breazeal, C. Designing Sociable Robots; MIT Press: Cambridge, MA, USA, 2002; p. 281. [Google Scholar]
  53. Brooks, R.A. Flesh and Machines: How Robots Will Change Us; Vintage Books: New York, NY, USA, 2003; p. 260. [Google Scholar]
  54. Picard, R.W. Affective Computing; MIT Media Laboratory Perceptual Computing Section Technical Report No. 321; MIT Media Laboratory: Cambridge, MA, USA, 1995. [Google Scholar]
  55. Gundlach, H.; Neville, B. Authenticity: Further theoretical and practical development. J. Brand Manag. 2012, 19, 484–499. [Google Scholar] [CrossRef]
  56. Grayson, K.; Martinec, R. Consumer perceptions of iconicity and indexicality and their influence on assessments of authentic market offerings. J. Consum. Res. 2004, 31, 296–312. [Google Scholar] [CrossRef]
  57. Beverland, M.; Lindgreen, A.; Vink, M.W. Projecting authenticity through advertising: Consumer judgments of advertisers’ claims. J. Advert. 2008, 37, 5. [Google Scholar] [CrossRef]
  58. Wang, F.Y.; Zeng, D.; Carley, K.M.; Mao, W.; Johnson-Lenz, T.; Cyert, R. Social computing: From social informatics to social intelligence. IEEE Intell. Syst. 2007, 22, 79–83. [Google Scholar] [CrossRef]
  59. McCracken, G. The Long Interview; A Sage University Paper Volume 13 of Qualitative Research Methods; SAGE Publications: Thousand Oaks, CA, USA, 1988; p. 88. [Google Scholar]
  60. Morhart, F.; Malär, L.; Guèvremont, A.; Girardin, F.; Grohmann, B. Brand authenticity: An integrative framework and measurement scale. J. Consum. Psychol. 2013, 25, 200–218. [Google Scholar] [CrossRef]
  61. Bruhn, M.; Schoenmüller, V.; Schäfer, D.; Heinrich, D. Brand authenticity: Towards a deeper understanding of its conceptualization and measurement. Adv. Consum. Res. 2012, 40, 567–576. [Google Scholar] [CrossRef]
  62. Rezabek, L.; Cochenour, J. Visual cues in computer-mediated communication: Supplementing text with emoticons. J. Vis. Lit. 1998, 18, 201–215. [Google Scholar] [CrossRef]
  63. Walther, J.B.; D’Addario, K.P. The impacts of emoticons on message interpretation in computer-mediated communication. Soc. Sci. Comput. Rev. 2001, 19, 324–347. [Google Scholar] [CrossRef]
  64. Watzlawick, P.; Bavelas, J.B.; Jackson, D.D. Pragmatics of Human Communication: A Study of Interactional Patterns, Pathologies and Paradoxes; WW Norton & Company: New York, NY, USA, 2011. [Google Scholar]
  65. Bickmore, T.; Schulman, D.; Yin, L. Maintaining engagement in long-term interventions with relational agents. Appl. Artif. Intell. 2010, 24, 648–666. [Google Scholar] [CrossRef] [PubMed]
  66. Kornienko, S.; Kornienko, O.; Levi, P. Collective AI: Context awareness via communication. IJCAI Int. Joint Conf. Artif. Intell. 2005, 5, 1464–1470. [Google Scholar]
  67. Weiser, M. The Computer for the 21st Century. ACM SIGMOBILE Mob. Comput. Commun. Rev. 1999, 3, 3–11. [Google Scholar] [CrossRef]
  68. Beverland, M. Crafting brand authenticity: The case of luxury wines. J. Manag. Stud. 2005, 42, 1003–1029. [Google Scholar] [CrossRef]
  69. Beverland, M.; Luxton, S. Managing integrated marketing communication (IMC) through strategic decoupling: How luxury wine firms retain brand leadership while appearing to be wedded to the past. J. Advert. 2005, 34, 103–116. [Google Scholar] [CrossRef]
  70. Wood, A.M.; Linley, P.A.; Maltby, J.; Baliousis, M.; Joseph, S. The authentic personality: A theoretical and empirical conceptualization and the development of the authenticity scale. J. Couns. Psychol. 2008, 55, 385–399. [Google Scholar] [CrossRef]
  71. Chhabra, D. Defining authenticity and Its determinants: Toward an authenticity flow model. J. Travel Res. 2005, 44, 64–73. [Google Scholar] [CrossRef]
  72. Chhabra, D. Positioning museums on an authenticity continuum. Ann. Tour. Res. 2008, 35, 427–447. [Google Scholar] [CrossRef]
  73. Beverland, M.; Farrelly, F.; Quester, P. Brand-personal values fit and brand meanings: Exploring the role individual values play in ongoing brand loyalty in extreme sports subcultures. Adv. Consum. Res. 2006, 33, 21–28. [Google Scholar]
  74. Trilling, L. Sincerity and Authenticity, 2nd ed.; Harvard University Press: Cambridge, MA, USA, 1972; p. 188. [Google Scholar]
  75. Molleda, J.C. Authenticity and the construct’s dimensions in public relations and communication research. J. Commun. Manag. 2010, 14, 223–236. [Google Scholar] [CrossRef]
  76. Groves, A.M. Authentic British food products: A review of consumer perceptions. Int. J. Consum. Stud. 2001, 25, 246–254. [Google Scholar] [CrossRef]
  77. Schallehn, M.; Burmann, C.; Riley, N. Brand authenticity: model development and empirical testing. J. Prod. Brand Manag. 2014, 23, 192–199. [Google Scholar] [CrossRef]
  78. Buendgens-Kosten, J. Authenticity. ELT J. 2014, 68, 457–459. [Google Scholar] [CrossRef] [Green Version]
  79. Choi, J.; Yoo, C.W. Connect with things through instant messaging. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Heidelberg, Germany, 2008; Volume 4952, pp. 855–860. [Google Scholar]
  80. Magedanz, T.; Rothermel, K.; Krause, S. Intelligent agents: An emerging technology for next generation telecommunications? In Proceedings of the Fifteenth Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM ’96), Networking the Next Generation, San Francisco, CA, USA, 24–28 March 1996; Volume 2, pp. 464–472. [Google Scholar]
  81. Shoemaker, P.J.; Tankard, J.W.; Lasorsa, D.L. How to Build Social Science Theories; SAGE Publications: Thousand Oaks, CA, USA, 2004; p. 240. [Google Scholar]
Table 1. Interview participants and their respective areas of expertise.

Part. No. | Occupation | Area of Expertise | Last Contrib. | Citations
P01 | PhD Student | Machine Learning | 2016 | 0
P02 | PhD Student | Artificial Intelligence | 2016 | 0
P03 | Associate Professor | Machine Learning | 2016 | 1,279
P04 | Assistant Professor | Speech, Communication and Technology | 2015 | 1,223
P05 | PhD Student | Social Bots | 2016 | 167
P06 | Research Assistant | Artificial Intelligence | 2015 | 0
P07 | Assistant Professor | Human-Computer Interaction | 2016 | 217
P08 | PhD Student | Artificial Cognitive Systems | 2016 | 10
P09 | Associate Dean | Socio-Ethic Technologies | 2015 | 319
P10 | Program Director | User Experience and Complex Systems | 2016 | 724
P11 | Assistant Professor | Human-Computer Interaction | 2016 | 111
P12 | Research Fellow | Ethics and Emerging Technologies | 2016 | 2,115
Table 2. Number of statements per interviewee related to being transparent.

Machine Learning/AI | Socio-Ethics
Part. No. | No. of Statements | Part. No. | No. of Statements
P01 | 0 | P07 | 18
P02 | 3 | P08 | 5
P03 | 0 | P09 | 6
P04 | 5 | P10 | 3
P05 | 2 | P11 | 5
P06 | 4 | P12 | 9
Sum | 14 | | 46
Table 3. Number of statements per interviewee related to learning from experience.

Machine Learning/AI | Socio-Ethics
Part. No. | No. of Statements | Part. No. | No. of Statements
P01 | 15 | P07 | 1
P02 | 2 | P08 | 4
P03 | 2 | P09 | 7
P04 | 3 | P10 | 6
P05 | 8 | P11 | 6
P06 | 2 | P12 | 3
Sum | 32 | | 27
Table 4. Number of statements per interviewee related to anthropomorphizing.

Machine Learning/AI | Socio-Ethics
Part. No. | No. of Statements | Part. No. | No. of Statements
P01 | 10 | P07 | 0
P02 | 2 | P08 | 3
P03 | 3 | P09 | 2
P04 | 17 | P10 | 8
P05 | 4 | P11 | 0
P06 | 2 | P12 | 7
Sum | 38 | | 20
Table 5. Number of statements per interviewee related to behaving conversational.

Machine Learning/AI | Socio-Ethics
Part. No. | No. of Statements | Part. No. | No. of Statements
P01 | 0 | P07 | 0
P02 | 0 | P08 | 5
P03 | 7 | P09 | 8
P04 | 11 | P10 | 0
P05 | 6 | P11 | 4
P06 | 1 | P12 | 3
Sum | 25 | | 20
Table 6. Number of statements per interviewee related to being coherent.

Machine Learning/AI | Socio-Ethics
Part. No. | No. of Statements | Part. No. | No. of Statements
P01 | 2 | P07 | 0
P02 | 5 | P08 | 5
P03 | 0 | P09 | 5
P04 | 4 | P10 | 0
P05 | 0 | P11 | 1
P06 | 0 | P12 | 0
Sum | 11 | | 11
Table 7. Survey questions positioned along the identified five authenticity traits.
General
⚬ In your opinion, what characterizes an authentic messaging agent?
Transparency
⚬ From your point of view, is it important for an authentic messaging agent to behave transparently, i.e., in a justified and predictable manner? Please explain your answer.
⚬ In your opinion, what characterizes transparent agent behavior?
⚬ In your opinion, should an authentic agent mimic the behavior of its creator (i.e., the company it represents) or should it rather develop its own ‘personality’? Please explain your answer.
Learning
⚬ Do you believe that a messaging agent has to learn in a similar fashion as humans do in order to be perceived authentic? Please explain your answer.
⚬ In your opinion, what characterizes an agent’s learning capabilities?
⚬ From your point of view, is context awareness an important feature of authenticity? Please explain your answer.
Anthropomorphizing
⚬ Do you believe that a messaging agent has to anthropomorphize in order to be perceived authentic (note: to anthropomorphize = resemble human form)? Please explain your answer.
⚬ In your opinion, what characterizes anthropomorphic behavior in a messaging agent?
⚬ In your opinion, what type of ‘social’ relationship should a messaging agent aim to build up in order to be perceived authentic?
Conversing
⚬ In your opinion, how ‘conversational’ does a messaging agent need to be in order to be perceived authentic? Please explain your answer.
⚬ In your opinion, what are the conversational skills you would expect from an authentic messaging agent?
⚬ In your opinion, what are conversational rules and values an authentic agent would need to adhere to?
Coherency
⚬ From your point of view, what describes coherent agent behavior?
⚬ In your opinion, what is required of an authentic messaging agent in order to establish common ground?
⚬ From your point of view, what is required of an authentic messaging agent to generate reproducible streams of argumentation?
Table 8. Survey participants, their respective areas of expertise and the overall relevance of fields connected to their area of work.

Survey Part. No. | Occupation | Area of Expertise | Last Contrib. | Citations
S01 | Researcher | Artificial Intelligence & HCI | 2017 | 367
S02 | Professor | Robot Ethics | 2018 | 1,926
S03 | Researcher | Psychology, Philosophy | 2017 | 7
S04 | Researcher | Cognitive Science | 2018 | 6,565
S05 | IT Consultant | Social Science | 2017 | 0

Fields relevant for their work (1 = not relevant at all; 5 = very relevant):

Field | M | SD | Mode | Median
Linguistics | 2.60 | 1.34 | 2 | 3
Artificial Intelligence | 3.60 | 1.67 | 5 | 4
Software Engineering | 3.80 | 1.64 | 5 | 4
Psychology | 3.00 | 1.58 | N/A | 3
Social Science | 2.40 | 1.67 | 1 | 2
Human-Computer Interaction | 3.80 | 1.64 | 4 | 4
Philosophy | 3.40 | 1.82 | 5 | 4
Ethics | 3.20 | 1.79 | 5 | 3
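
To make the summary statistics above easier to interpret, the following minimal sketch (not the authors' analysis script; the ratings shown are hypothetical placeholders) computes the reported descriptive values, i.e., mean, sample standard deviation, mode, and median, for a set of five ratings on the 1–5 relevance scale. It also illustrates why the mode is reported as N/A when no single rating occurs more often than the others; for this particular placeholder set, the result happens to coincide with the values reported for Psychology.

```python
# Illustrative sketch only: descriptive statistics as reported in Table 8
# (M, SD, Mode, Median) for hypothetical 1-5 relevance ratings.
from collections import Counter
from statistics import mean, stdev, median

def describe(ratings):
    counts = Counter(ratings).most_common()
    # Report a mode only if exactly one rating value has the highest frequency;
    # otherwise mark it as N/A (as done for Psychology in Table 8).
    has_unique_mode = len(counts) == 1 or counts[0][1] > counts[1][1]
    return {
        "M": round(float(mean(ratings)), 2),
        "SD": round(stdev(ratings), 2),  # sample standard deviation (n - 1)
        "Mode": counts[0][0] if has_unique_mode else "N/A",
        "Median": median(ratings),
    }

# Five hypothetical ratings, all different, hence no unique mode:
print(describe([1, 2, 3, 4, 5]))
# -> {'M': 3.0, 'SD': 1.58, 'Mode': 'N/A', 'Median': 3}
```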
