Altercations with A.I., or correcting a chatbot

Recent research from the Department of Psychological and Behavioural Science (PBS)

Research shows we correct misunderstandings with humans more often than with artificial agents, such as chatbots or A.I.

It has to do with who we imagine we’re talking to, the image of that other person or machine in our head.

If you don’t bother to correct Siri, or you just google it yourself when Alexa is completely confused, research from our department shows that you’re not alone. Kevin Corti and Alex Gillespie, from the Department of Psychological and Behavioural Science at the LSE, have found that people are more likely to make corrections or attempt to repair misunderstandings when they’re talking to other people than when they’re talking to chatbots or A.I., also called “artificial agents”.

Corti and Gillespie say intersubjectivity is at the heart of this effort, or rather this lack of effort. Interacting with another, whether a human or a machine, brings two perspectives into relation with each other, and that is precisely what intersubjectivity describes: the interaction, or relationship, between different subjective perspectives. Corti and Gillespie’s new study builds on a cutting-edge area of social psychology that aims to understand how humans react to and behave around machines compared with other humans.

If you think that the human face or the presence of a person must be a part of human interaction, again you’re not alone. In this study, Corti and Gillespie observed how people behaved depending on whether they believed they were talking to a person or a machine, and on whether they were physically facing a person or a machine. Surprisingly, the main thing that increased or decreased someone’s effort to repair misunderstandings was whether they thought they were talking with another person or with a chatbot. The presence of a person didn’t make a difference.

 

 

The four conditions of the experiment:

Faced with a human, told they’re talking to a human: participants thought they were talking to another person, but the person in front of them repeated responses from Cleverbot heard through an earpiece.

Faced with a human, told they’re talking to a machine: participants thought they were talking to a chatbot, and were told the person in front of them was repeating Cleverbot’s responses heard through an earpiece.

Faced with a machine, told they’re talking to a human: participants thought they were talking to another person, but communicated with Cleverbot through a chat window on a computer.

Faced with a machine, told they’re talking to a machine: participants thought they were talking to a chatbot, and communicated with Cleverbot through a chat window on a computer.

In every condition of the experiment, participants were communicating with a chatbot, cleverly named Cleverbot. Some were told that they were talking to a chatbot and others were told they were talking with another person. Within those two groups, half communicated by typing into a dialogue box on a computer, and the other half spoke across a table with someone wearing an earpiece that allowed them to “echo” the chatbot’s responses. And yet the presence of a person didn’t matter, and, as it turns out, the strange responses of a chatbot coming from a stranger sitting across the table didn’t matter either. What mattered was who participants imagined they were talking to: the image of that other person or machine in their head.

Perhaps it’s because we’ve grown accustomed to communicating with family and friends via the internet or our phones, but this research tells us that we don’t need to see the person in front of us in order to engage in dialogue. We don’t even need the other person to make sense in order for us to make an effort at understanding each other. When it comes to artificial intelligence, on the other hand, we’re not quite there yet. If we’re ever going to have a meaningful chat with Siri, not only will the technology behind A.I. have to improve its conversational capacity, but we humans will also have to expect more conversational capacity from it and make more effort ourselves.

This blog post is based on the findings from this academic article:

Corti, K., & Gillespie, A. (2016). Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human. Computers in Human Behavior, 58, 431-442.