Hey Robot, Just be Yourself
At the start of this new decade, AI is the front-runner of emerging trends in digital communications. But why has it taken so long?
Sonja Jones, our newest ally in conversational design, recently penned some of her experiences and thoughts about how to think about the tasks ahead of us all.
I’ve just finished my first commission writing for a chatbot, and it has got me thinking: what do we really want from our artificially intelligent friends?
When I asked Charlie Cadbury at Say it Now, an agency specialising in chatbot and voice services, he said, “Over the next decade customers will expect to be able to seamlessly communicate with the brands and services they buy through messaging and voice.”
As expectations grow, I think, in a typically human and non-linear way, two inter-related issues continue to impact what we want from AI. The first is that we want a practical tool that is simple to use and understands us. This practical side wants to interact with a robot (for ease let’s use this term for any AI function) that reacts like a human to our requests. We want logic, we want subtle levels of understanding, but most of all we just want to complete the task without frustration.
Being Scottish, I am aware of many extremely amusing videos of my countrymen struggling with voice technology that illustrate this point of frustration well. From the now-infamous voice-recognition lift sketch to a Scotsman arguing with Alexa, if you need a laugh I do encourage you to watch them; they are both hilarious and insightful.
But in all seriousness, these videos throw up the very issues I think are most important. They are amusing because there is escalating frustration and emotional reaction. Thankfully, the technology is constantly developing, and conversation designers are finding clever ways to avoid locking users into a continuous loop of “I’m sorry, I didn’t understand that.” For example, a witty comeback might kick in after a cheeky question. Likewise, many voice technologies are programmed to learn the unique cadence of your voice, so over time you get better results. But it’s not foolproof: recently, for example, I used Google Assistant to ask for a route to Brooke Road and was told Brick Road didn’t exist.
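For the curious, the “escalating fallback” pattern designers use to break that loop can be sketched in a few lines. This is a minimal, hypothetical illustration (the replies and class names are my own invention, not any particular platform’s API): each consecutive miss moves to a differently worded reply, ending in a hand-off to a human, while any successful turn resets the count.

```python
# Hypothetical sketch of an "escalating fallback" pattern: instead of
# repeating the same "I didn't understand" line, the bot varies its reply
# and eventually offers a human hand-off. All names are illustrative.

FALLBACK_REPLIES = [
    "Sorry, I didn't catch that. Could you rephrase?",
    "I'm still not sure what you mean. Try asking another way?",
    "Let me connect you with a human who can help.",
]

class FallbackHandler:
    def __init__(self):
        self.misses = 0  # consecutive unrecognised inputs

    def on_recognised(self):
        # A successful turn resets the counter, so one slip
        # doesn't push the user straight toward a hand-off.
        self.misses = 0

    def on_unrecognised(self):
        # Each consecutive miss escalates to the next reply,
        # capping at the final human hand-off message.
        reply = FALLBACK_REPLIES[min(self.misses, len(FALLBACK_REPLIES) - 1)]
        self.misses += 1
        return reply

handler = FallbackHandler()
print(handler.on_unrecognised())  # gentle rephrase prompt
print(handler.on_unrecognised())  # different wording
print(handler.on_unrecognised())  # hand off to a human
```

Real platforms implement variations of this idea, but the principle is the same: never show the user the identical dead end twice.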
Bring your authentic self
The second issue is arguably more complex — it is one of authenticity. How human-like should robots be? And if we notice it’s a robot, per the Turing Test, is it automatically useless?
While we like the idea of a streamlined tool that helps complete daily tasks, are we prepared to feel duped by a robot? When Google CEO Sundar Pichai demoed Google Duplex to the world in 2018 it was a hit. The assistant made a hair appointment by telephone using human vocal cues like “mmm-hmm” and offered up information in human-like parcels of speech. The result: the salon receptionist had no idea she had spoken to a robot. While this made for a very seamless and comfortable conversation, is it entirely ethical?
Natasha Lomas explores this in an article for TechCrunch. Interestingly, she homes in on Pichai’s description of the assistant as graceful in its ability to dance around the interaction without giving itself away. I’d say the issue here is not the technology’s ability to complete the interaction gracefully (graceful is acceptable, likely preferred); the issue is one of deception. The goal to go undetected and present as a human is what raises the question of morality.
Don’t take it personally
As a personal example I have been known to ask in a live chat if I am speaking to a robot or human, mostly to better manage my input and expectations. For me, it would help if I categorically knew whether I was dealing with either an actual human or a chatbot. Importantly, I would not try to argue with a robot or ‘make the robot’ see sense. I still might say thank you, because it costs nothing to be polite, even to a robot. Or does it?
Back to our Scotsman friend who was arguing with Alexa. He asks for John Lennon yet adds a superfluous, “Please, if you don’t mind.” Of course Alexa doesn’t mind, she’s a robot, but this is still relatively new technology and politeness is an ingrained behaviour. In fact, there is a whole paradigm around this point — Computers As Social Actors (CASA) — based on evidence that people apply social norms and cues when interacting with computers.
Our Scotsman goes on to try flattering Alexa into submission: “Alexa, could you do me a favour please? Turn this to John Lennon, please. If you don’t mind.” When she doesn’t play John Lennon, he tries another tack: “Is it just me darlin’? Listen! Listen!” He fails to say the wake word “Alexa,” so she really isn’t listening.
We could put this failed interaction down to user error, but it’s interesting to note how different generations react to new technology. Kate Moran of Nielsen Norman Group discovered in her research on digital natives that “Millennials rarely blame themselves — unlike older users. Millennials are quick to criticize the interface, its organization, or its designers.”
So Baby Boomers tend to blame themselves when technology doesn’t work the way they expect it to, but Millennials just don’t take it personally. And what about the next generation, who will grow up with voice and chatbots as the norm? Perhaps over time the tendency to be polite to robots will wane too. After all, shouldn’t a robot just be a robot? It really is nothing personal; they literally have no capacity for it.
Michael Sheen as Arthur the Robot Bartender in Passengers (2016)
Does anthropomorphizing technology help us?
If we want more seamless interactions, surely we need these hallmarks of humanity, but do we want (or even need) to know they are robots? The existential potential of AI, machines that really think like humans, scares us. This fear has spawned thousands of dystopian movies and novels, yet it is important to remember that, while AI can imitate empathetic and ethical behaviour, its decision-making is not underpinned by morality. Decisions are simply based on programmed efficiency and opaque logic.
Have you ever received an invite from Amy Ingram? She’s a virtual assistant who has her own LinkedIn profile and four recommendations from (I assume) actual human beings. We also have two mutual connections…we’re practically friends. Apparently people have been so enamoured with her they have sent her flowers. But she is not real; she does not exist. Amy Ingram is simply the name of the tool, but because it’s a human name we start to think of her as human and put our trust in her. It’s why our Scotsman friend called Alexa ‘Darlin’’ and asked her to listen to him.
“Trust in using voice assistants and chatbots will rise as I believe that back and forth communication through conversation is the most effective way to create a trusted relationship.” Charlie Cadbury, Say it Now
Voice technology is based on natural language processing, which aims to reduce human frustration when interacting with AI, but can it really convince us we are talking to a human?
Michael Sheen ‘brought to life’ Arthur the robot bartender in the 2016 film Passengers. Arthur is humanoid up top, but obviously a machine down below. He glides around his bar gracefully, offering perfect drinks but he knows his limitations. “Jim, these are not robot questions,” he tells the protagonist when he is faced with a question of morality. He offers this advice gently and kindly, and Sheen admirably achieves his goal of portraying Arthur with, “the economy of the machine, but the warmth and vulnerability of a human.”
Sheen said he was striving to make Arthur more human-like while never forgetting that he was a machine. And that seems to sum up the potential for AI: finding the right tension between human likeness and machine efficiency.
I have merely dipped my toe in this huge and fascinating topic, and if the last decade has been its nascent period, this one is surely AI’s coming of age.