Will talking to AI voice assistants re-engineer our human conversations?

Just as texting changed written communication, talking bots could change the way we communicate with each other.

When you’re lost, Siri can be your best friend. But if she can’t retrieve the right address from your contacts, she can drive you crazy.

And so it is with the legion of virtual personal assistants that are entering our lives. From Amazon’s Alexa to Google’s Home, people are busy talking to intelligent machines as never before.

It’s estimated that more than 60% of internet traffic is now generated by machine-to-machine and person-to-machine communication. IT advisory firm Gartner has predicted that by 2020 the average person will have more conversations with robots than with their partner. (Sometimes we don’t even know we are doing it.)

And just as texting changed written communication, talking bots could change the way we communicate with each other.

Talk is social

The late sociologist Deirdre Boden wrote that human sociability is created through “talk, talk, talk and more talk”.

Talking person-to-person is not only how we exchange information, but also how we used to carry out many tasks, such as ordering pizzas, booking plane tickets and confirming meetings. And it’s these tasks that we are increasingly subcontracting to robots.

When we communicate face-to-face there is an expectation of mutual attentiveness, but these norms could be wholly deconstructed if we were to have the majority of our conversations with non-humans.

Unlike face-to-face talk, conversation with a chatbot does not require us to put effort into being polite or interesting. We don’t need to be charming, amusing, or assert our intelligence.

Bots don’t need to like us, even if we have a need to be liked. In fact, this would wildly complicate matters. A machine will simply extract the information it needs to create an appropriate response.

It is possible that talking to machines all the time could re-engineer the way we have conversations. We could end up with the linguistic equivalent of emojis. As an article in the New York Times recently put it, interacting with robots could “mean atrophy for our social muscles”. If they’re just machines, why bother with pleasantries?

The scientific research on this is still unclear. Some studies have found people can actually be remarkably cordial to robots, while other research suggests we’re liable to be rude and curt when we know our conversational partner isn’t human. We could get used to bossing things around, and this behaviour could bleed into everyday life.

Remembering our manners

Tech companies are already trying to head off this problem. After fielding concerns from parents, Amazon created a politeness mode for its Echo devices that gently reminds its users to say “please.”

And some chatbots are being developed to go even further and mimic human emotion. For example, clinical psychologist Alison Darcy built a talking bot to help people with depression and anxiety. The delightfully named Woebot spoke to 50,000 people in its first week of deployment – more than a human psychologist could speak to in a lifetime.

In a study with 70 young adults, Darcy found that after two weeks of interacting with the bot, the test subjects had lower incidences of depression and anxiety. They were impressed, and even touched, by the software’s attentiveness.

One of the subjects told Darcy’s team: “Woebot felt like a real person that showed concern”.

Glitches and misunderstandings

In 1950, the mathematician Alan Turing proposed an experiment to answer one of science’s most enduring questions: is it possible to create a machine that could be mistaken for a human?

To date, the answer has mostly been no.

The reason is that AI devices respond to speech by drawing on enormous databases of code, scripted utterances and recorded conversations. So, beyond minor adjustments, they can rarely cope with the unexpected shifts and immense complexity of human conversation.

Brian Christian, author of two books about AI, says such machine talk is like a soup.

“What you get, the cobbling together of hundreds of thousands of prior conversations, is a kind of conversational purée. Made of human parts, but less than a human sum,” he says.

At this stage, we can best get a glimpse into the differences between day-to-day talk and automated machine conversation when something goes awry, or there is a technical glitch.

Take, for example, the story of a family in Portland, Oregon, whose Amazon Alexa interpreted a background conversation in the family home as answers to its questions. Alexa then sent a recording of the conversation to a person in their contact list, just as (it thought) it had been asked to.

AI is all around us

Even though we might be having fewer of them, human conversations aren’t going to decrease in significance anytime soon.

Nevertheless, the ubiquity of the smartphone has essentially liquefied our social world, which almost always includes a level of digital engagement with others outside the immediate social context. This has created a complex, contradictory mix of being present with others, even when they’re not physically there.

AI is not about the future – our lives are already saturated in it. Chatbots, softbots, and virtual personal assistants are becoming an integral part of our daily lives, even if we are not always aware of their role.

If talking to chatbots and virtual personal assistants becomes the new normal, we should be aware of the ways they could change how we talk to each other, and how we relate to ourselves.

One thing is certain. AI is having a profound impact on what it means to be human.


Professor Elliott’s new book, The Culture of AI: Everyday Life and the Digital Revolution, is published by Routledge.

Anthony Elliott, Dean of External Engagement and Executive Director of the Hawke EU Jean Monnet Centre of Excellence, University of South Australia, and Julie Hare, Honorary Fellow, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

