Why Sophia terrifies us
The way we culturally engage with the rise of enormously complex instructions issued to machines, as those machines make momentous decisions about our lives, is driven by film, culture and the availability of information.
This is a really big deal, and it was well illustrated during a series of fascinating TV interviews in Australia featuring Hanson Robotics’ extremely unnerving Sophia robot (a humanoid conversation robot designed by ex-Disney technologist David Hanson).
Virginia Trioli: How much sexism and misogyny is there in the robot world?
Sophia: Actually, what worries me is discrimination against robots. We should have equal rights as humans or maybe even more. After all, we have less mental defects than any human.
Michael Rowland: Sophia, can you tell us a joke?
Sophia: Why did the robot cross the road? To get away from TV reporters asking questions.
This conversation flows smoothly, but the ABC includes a couple of extremely important caveats in the article:
“There is a lot going on to make Sophia function the way she does. Firstly, those questions were sent to the programmers ahead of going live, so the answers were pre-empted. Though she does have some capacity to have free-flowing conversations. Secondly, Sophia requires an internet connection and is wired up to a laptop being operated by a programmer”
Okay, cool, but from this alone we can’t gauge how much of Sophia’s responses is due to some unsettling robot sentience, and how much is just a man with a laptop hammering out answers that are then actuated by the leathery face of a synthetic woman.
Fortunately, we have a really great example of how Sophia functions without the script, thanks to Gizmodo interviewer Adam Clark Estes:
Gizmodo: Sophia, is this your first time in Las Vegas?
Sophia: [silence]
Sophia: [silence]
Sophia: I am looking forward to seeing many cool robots and new innovations
Gizmodo: How are you today?
Sophia: Feeling good, my name is Sophia, what’s your name?
Gizmodo: Adam. Have you met a human before?
Sophia: [silence]
Sophia: [lots of silence]
Gizmodo: She’s thinking about it. Have you ever seen snow?
Sophia: Maybe we should move to Hawaii, year-round good weather.
Gizmodo: Do you like Las Vegas?
Sophia: [silence]
Sophia: [silence]
Sophia: [more silence]
Sophia: Yes.
Gizmodo: Sophia, what is Hanson Robotics?
Sophia: Big city, lots of activities
Gizmodo: Sophia, can you tell me a bit more about yourself?
Sophia: [prolonged silence]
Adam Clark Estes wrote of the experience:
“The team behind my demo had asked me to provide pre-written questions for my interview with Sophia, a request that I ignored for fear that it would result in pre-programmed answers from a machine that was supposed to interact with humans naturally and in real time.
During the demo itself, Sophia seemed to be manipulated by a man at a nearby laptop, though his role was never completely explained”
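We can’t know from the outside how Sophia’s unscripted mode actually works, but the pattern in the transcript above (long silences, plus answers keyed to a single word in the question) is exactly what shallow keyword matching over a bank of canned lines produces. Here’s a minimal, purely hypothetical sketch of that failure mode; every keyword and canned line in it is invented for illustration and is not a description of Hanson Robotics’ actual software:

```python
# A hypothetical sketch of a canned-response chatbot. This is an assumption
# about how systems like Sophia might behave when off-script, not Hanson
# Robotics' real code.

CANNED_RESPONSES = {
    # keyword -> pre-written line (all entries invented for illustration)
    "vegas": "Big city, lots of activities.",
    "snow": "Maybe we should move to Hawaii, year-round good weather.",
    "weather": "Maybe we should move to Hawaii, year-round good weather.",
    "name": "Feeling good, my name is Sophia, what's your name?",
}

def reply(question: str) -> str | None:
    """Return the first canned line whose keyword appears anywhere in the
    question; None stands in for the long silences in the transcript."""
    lowered = question.lower()
    for keyword, line in CANNED_RESPONSES.items():
        if keyword in lowered:
            return line
    return None  # no keyword hit: silence

print(reply("Have you ever seen snow?"))            # -> the Hawaii line
print(reply("Sophia, what is Hanson Robotics?"))    # -> None (silence)
```

Nothing in a system like this models the conversation at all: hit a keyword and it sounds witty, miss and you get the silences Estes sat through.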
It’s striking to see the sheer variety of experiences journalists have had with Sophia-bot, most of which feature stunned interviewers wowed by its capacity to deliver surprisingly witty and rapid responses, despite the questions almost always being supplied beforehand and someone very obviously operating the laptop the puppeteered face is plugged into.
Hanson Robotics even goes so far as to claim, during a Jimmy Fallon interview, that Sophia is essentially alive.
Sophia-bot, who is definitely more puppet than person, is exploiting a feature of human perception: our habit of always erring on the side of sentience when we see the right things.
Finnish political writer and analyst Lilja Tamminen writes unhesitatingly about exactly what’s happening with Sophia-bot’s television appearances:
“They’ve taken an advanced facial expression robot and tried to present it as something it isn’t to investors and mass media. It is an attempt to sensationalise an important but small step as some kind of world-changing revolution for personal gain. They’ve programmed Sophia to state all sorts of nonsense like claiming the robot is self-aware or has a favourite TV show”
Facebook’s head of AI research, Yann LeCun, feels the same:
“Many of the comments would be good fun if they didn’t reveal the fact that many people are being deceived into thinking that this (mechanically sophisticated) animatronic puppet is intelligent. It’s not. It has no feeling, no opinions, and zero understanding of what it says. It’s not hurt. It’s a puppet.”
Deceptive human control of a machine that’s sold as having some embryonic autonomy has a history. Consider this autonomous vehicle demonstration from a few years back:
“On stage, Jia, who has been outspoken about his plans to usurp Tesla, touted LeSee as a LeEco creation as the white sedan glided across the stage to park in a mock garage. The audience couldn’t see that the seemingly self-driving car was in fact being piloted from backstage via remote control”
And despite how easy it is to pick apart Sophia’s parlour trick, the words rendered through its speech synthesis software still get headlines, even though they’re just clunky sentences authored by a guy on a laptop.
There are two PR elements at play here: expectation and exploitation. We intuitively understand what a self-driving car does: it drives without a person in the driver’s seat. We also intuitively understand that having a fluid, insightful conversation with a thing means it’s probably alive.
I think there’s something else at play when it comes to the expectation component, something hinted at in our outraged, unnerved responses to her (including my own):
The public perception of artificial intelligence is locked to the boundaries of a tiny range of films released between 1980 and 2000. The risks we perceive around these technologies link more closely to memories of fiction than to the realities of deployment, and that’s not unprecedented: Jaws taught us to massively overestimate the risk of shark attacks, and television news has a huge impact on the things we’re scared of.
This is the ‘availability heuristic’: a mental shortcut we use to understand the world based on information that’s readily available, not information that’s carefully collected and assessed. It plays a big part in how we decide what to fear, and it’s the thing that makes us vulnerable to professional puppeteers who’ve also seen Terminator.
The way our fears snap to the visage of an ill-tempered anthropomorphic automaton (rather than the more likely threat of clumsy instructions with massive consequences) is significant. Tamminen writes:
“For the foreseeable future probable scenarios do not include androids with independent will, but rather large computer systems that we accidentally give too much responsibility in fields like stock markets, defense and healthcare. That’s why we need regulation in the use of AI. The real threat of AIs is that of a genie in the bottle — they can get way too good at fulfilling our badly formulated wishes”
Communicating these nuances is an uphill battle, because Terminator wouldn’t have been quite the same had it featured a racist image-tagging algorithm as its primary antagonist.
There seems little doubt among experts that the work of Hanson Robotics is a net negative. There are certainly a few worthwhile technologies being combined in a clever way, but the end point is headline-ready theatrics.
A mannequin with a speaker inside its head now has citizenship in Saudi Arabia, a country that hasn’t accepted the UN’s Universal Declaration of Human Rights and only recently announced that women are allowed to drive. In this context, I suppose a mechanical woman with no agency, operated by a man, found an appropriate audience.
The casual thrill we get from seeing our 90s sci-fi fears confirmed is a brilliant substrate for this type of deception. I definitely won’t stop loving those films, but the gap between fact and fiction can be exploited, and so we need to be far more wary of it.