Human impersonation technology: why post-release ethical retrofitting isn’t good enough


If a machine needs to deceive us to work well, should it be allowed to work at all? Because it’s 2018, this is a thing we need to ask now.

Google’s WaveNet technology, which learns to generate speech using neural networks trained on examples of recorded speech, has been around since 2016. This blog post from that year, on their site, includes some unsettling examples of the technology babbling, breathing and stalling.

Google put this technology into practice at its 2018 developer conference, Google I/O, where CEO Sundar Pichai played recordings of Google Assistant independently booking haircuts and meals.

The almost endearing moments in those phone calls, where Duplex hesitates, mutters, and even seems to get impatient, drew the strongest reactions, both positive and negative. The audience at Pichai’s talk seemed inordinately delighted, cheering, clapping and hollering.

Much of the broader reaction has been somewhere between amused, unnerved and outraged.

An important subplot in this reaction was whether Google’s Duplex developers intentionally set out to deceive the people it calls, by omitting any mention of its robotic nature during normal operation.

It definitely seems, from reading Google’s WaveNet and Duplex blog posts, that the intention was to create a smooth, naturalistic conversational experience, and that the element of deception (through omission and mimicry) was an incidental side-effect of that end goal:

“The system also sounds more natural thanks to the incorporation of speech disfluencies (e.g. “hmm”s and “uh”s). These are added when combining widely differing sound units in the concatenative TTS or adding synthetic waits, which allows the system to signal in a natural way that it is still processing. (This is what people often do when they are gathering their thoughts.) In user studies, we found that conversations using these disfluencies sound more familiar and natural”
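The mechanism in that quote is simple enough to sketch. Below is a minimal, entirely hypothetical illustration in Python; the function, scores and threshold are my own inventions rather than anything from Google’s code. It shows the general idea: where two widely differing sound units meet, cover the seam with a filler or a synthetic pause, so the system sounds like it’s still “thinking”.

```python
import random

# A hypothetical sketch of disfluency insertion in a concatenative TTS
# pipeline. None of these names, scores or thresholds come from Google's
# WaveNet or Duplex code; they only illustrate the idea in the quote above:
# where two widely differing sound units meet, cover the seam with an
# "uh"/"hmm" or a short synthetic wait so the join sounds less abrupt.

FILLERS = ["hmm", "uh", "um"]
PAUSE = "<pause:300ms>"  # stand-in for a synthetic wait

def insert_disfluencies(units, mismatch_threshold=0.5):
    """Return the unit texts with fillers/pauses added at awkward joins.

    `units` is a list of (text, style_score) pairs, where style_score is
    a made-up scalar standing in for whatever acoustic features a real
    system would compare between adjacent units.
    """
    out = [units[0][0]]
    for (_, prev_score), (text, score) in zip(units, units[1:]):
        if abs(score - prev_score) > mismatch_threshold:
            # Widely differing units: signal "still thinking" at the seam.
            out.append(random.choice(FILLERS + [PAUSE]))
        out.append(text)
    return out

if __name__ == "__main__":
    segments = [("I'd like to book a table", 0.2),
                ("for four people", 0.3),
                ("at seven pm on Friday", 0.9)]
    print(" ".join(insert_disfluencies(segments)))
```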

They’d clearly observed that we’re more guarded and hostile when we’re fully aware of the computerised nature of a conversation partner.

So what’s more important? Natural conversation with a misinformed human, or stilted conversation with an informed human? Is deception a worthy price to pay to spare a person the cost of making a 45-second phone call to their local Chinese restaurant?

Did anyone ask these questions before the dazzling demonstration? If they didn’t, what does it say about their methods, their audience, and their industry?

In response to the backlash, Google made some changes. Though the demos aired at I/O featured small business owners who were most likely unaware of the nature of the caller (and possibly unaware their voices were going to be recorded and broadcast to millions), Duplex will now inform the people it calls:

“We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important. We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”

Natasha Lomas at TechCrunch (whose full article is well worth your time) points out something really important:

“For Duplex the transparency that Pichai said Google now intends to think about, at this late stage in the AI development process, would have been trivially easy to achieve: It could just have programmed the assistant to say up front: ‘Hi, I’m a robot calling on behalf of Google — are you happy to talk to me?’”

The admission here is, essentially, that ‘early tech’ demos don’t feature ethical or impact considerations, even when they’re deployed on members of the public and subsequently broadcast. To me, this brings to mind Uber’s recent autonomous vehicle fatality, in which a car being tested on public roads failed to stop for a pedestrian its sensors had detected, because the software had been tuned to ignore objects it took to be false positives, like small plastic bags.

It also brings to mind the inverse story of Sophia: a rubbery puppet operated by a human, pretending to be an intelligent machine, and getting a lot of uncritical press coverage in the process.

Pointedly excluding simple considerations, like declaring the nature of a caller or the purpose of the call, reveals something important: too many creators are happy to essentially outsource the ethical and societal shaping of their technology to third parties, and to reactively reshape their creations around whatever problems get noticed, tweeted and written about.

With this approach, the ethical considerations of a technology become a liquid that simply fills the shape of whatever is deemed permissible through the mechanism of reaction and outrage.

This raises a few major problems, in my mind:

  • Any ethical considerations that do get implemented are essentially determined by those of us with big platforms (blue-tick tech-Twitter celebrities, journalists, loud-yellers), and exclude those without the same platform we enjoy (almost everyone outside our cloud of content).
  • The burden of impact-noticing is placed on those who also suffer the impacts.
  • Ethics-by-outrage sometimes means there’s a big delay between the deployment and the outrage and the subsequent ethics, and plenty of terrible impacts happen in those gaps (see: Facebook).
  • If a company genuinely wants to be ethical (it’s clear that developers at Google really do: DeepMind, where WaveNet was born, has a new ethics unit, and many of the issues raised here are deeply considered there), ethics-by-outrage is a far more expensive and illogical way to seek out important societal considerations (remember Google Glass?). Why not bake them in from the moment embryonic ideas take shape?

I really doubt the supportive cooing from the audience would have been any quieter had Pichai set aside 25 seconds of his demonstration to elaborate on where they dropped their ethics-pin in their decision-making processes.

Duplex has clear benefits for people with accessibility needs, disabilities and disadvantages. The potential for good is massive, but that good will be squandered if impact considerations aren’t baked in from the get-go.

It’s very likely that Google will tread carefully with Duplex now, implementing careful controls to avoid abuse. But the fact that a tech colossus with the size and influence of Google still treats reactive ethics-by-outrage, rather than proactive, baked-in societal consideration, as the default setting for impactful new capabilities is really bad. That this happened in a year already filled with a series of high-grade scandals involving poor consideration of tech’s societal impacts is also somewhat stunning.

I hope the response to the backlash goes far beyond tweaks to their phone call machine. I hope it results in the inclusion of early, prominent and unapologetic thoughts on how these capabilities soak into the fabric of society.
