Clever Hans

Sounding the shenanigans klaxon on AI fakery

In 1907, Oskar Pfungst decided to investigate the curious case of a weirdly intelligent horse — a creature performing tasks we’d previously assumed were the domain of human intelligence. The horse, Hans, could tap out the answer to questions in the dirt, amazing onlookers.

Pfungst figured out that Hans was detecting physical cues in the humans watching his feats — as he tapped, tension would build, and that tension was released when he reached the point at which he should stop tapping. So he stopped.

Hans was clever, but not human clever. There were humans in this loop, humans who were unaware of their involvement.

This still happens. “Sophia” is a humanoid puppet, operated in real-time by a man, but this rubbery machine is presented as an artificially intelligent being with agency and identity.

Most recently, Burger King released a slew of ads that they’re claiming are fully written by an artificial intelligence, but are almost certainly dumb and organic.

There were quite a few articles celebrating Burger King’s ‘innovative’ ad campaign, with varying degrees of skepticism about the claim that the ads were written by an AI. The press release is garbled buzzword salad:

“With the data provided, the algorithm was able to discover intricate patterns and gather insights for strategic and effective communication. The artificial neural network is a complex software architecture that essentially simulates how a human brain operates, allowing the A.I. to not only recognise patterns, but also identify which of these patterns are more successful towards a given goal”

It reminded me of a bunch of tweets written by a New York comedian, Keaton Patti, that went properly viral, purporting to show scripts written by a bot ‘trained’ with content from commercials:

They’re relatively funny and they mimic the distracted and insightful nature of machine-generated text, but a big part of the humour comes from a lie about their origin — actual scripts written by AI would look very different. If their origin is meant to be a joke, the majority of people miss it, reacting with horror and amusement that computers can be so incisive and funny.

There’s something vaguely annoying about this. These jokes would fall flat if the tweet text read ‘I wrote a script that sounds like it’s written by a robot, and it made a bunch of social media managers at big corporations wet themselves with joy’.

The humour is fully reliant on people sincerely and innocently accepting they’re not being lied to, which is also the fallback — if you point out the deception the joke trades on, you’ve apparently missed the joke.

Not only is Burger King performing exactly the same manoeuvre; their global head of marketing thanked Patti for his involvement. It’s fake AI humour as a service.

Every blue-tick F-grade Twitter celebrity has found opportunities and work through viral tweets (me included). But the ethics of relying on public misunderstanding of one of the most significant technological tools of our generation are hazy at best.

A few responses acknowledged how likely it was the ads were faked, but put that down to some grandiose commentary on AI hype from the creative agency, ‘David Miami’. There’s a decent chance that this awkward logic is correct — that the creators will leap from behind the curtain, scoffing proudly at those who fell for their ruse, but completely oblivious to how ludicrously circular this reasoning is.

What does the lie prove, other than their willingness to exploit the goodwill of an audience and a general unfamiliarity with new technology?

This *happens* on Twitter. Some idiot runs a “parody” Paul Keating (a former Australian Prime Minister) account that relies on people assuming that he’s the real Paul Keating, as revealed in the replies to his tweets.

More ambiguously, SBS presenter Lee Lin Chin’s irreverent and somewhat horny Twitter persona is mostly written by a white male Sydney comedian, which still feels creepy. I once told my friend about this, and she was heartbroken, nearly to tears. I never tell anyone about this in person, now.

Something irks me when a piece of humour can’t exist without the layering of a lie. It’s lazy and irritating.

As a few people pointed out to me when I tweeted about this, there are *so* many examples of genuinely hilarious collaborations between the creative novelty of machine generated text and human curation — all done openly, without relying on any deception.

Often it’s difficult and laborious, done as hobby work or crowd-funded. An example below, created using a platform similar to the predictive text on your phone, but curated by human writers — “Harry Potter and the Portrait of What Looked Like a Large Pile of Ash”.
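Botnik hasn’t published the internals of its tools, but the basic idea of predictive text is simple enough to sketch. As a rough, hypothetical illustration: a bigram model counts which words follow which in a source corpus, then offers the most common continuations for a human writer to pick from — the machine supplies options, the human supplies taste.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which words follow which word in the source text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest(following, word, k=3):
    """Offer the k most common next words, for a human to choose from."""
    return [w for w, _ in following[word.lower()].most_common(k)]

# Toy corpus standing in for (say) a pile of Harry Potter chapters.
corpus = (
    "the wand chose the wizard and the wizard chose the castle "
    "and the castle was silent and the wand was warm"
)
model = train_bigrams(corpus)
print(suggest(model, "the"))   # words most often seen after "the"
print(suggest(model, "wand"))  # words most often seen after "wand"
```

Real tools use far larger models, but the writing loop is the same: the software proposes, the human disposes.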

I’m a big fan of Botnik, and I’m pretty much constantly in tears at their work. I challenge you to read their reviews of Jurassic World 2 without being physically destroyed by laughter.

Like so many parts of the AI space, the real stories seem to emerge when art and science hit the right notes — when humanity and machine tools mix at the right levels. The CEO of Botnik says:

“From the start, the idea has been something like ‘find techniques from computer science that would be fun to write with, and build tools that implement them in the most fun way’

“I expect we’ll continue to see A.I. tech that tries to help people do things as fast as they can while making the fewest possible decisions. I think one of Botnik’s roles is to resist that impulse and ask: ‘What decisions don’t we want to give away? What decisions do humans actually enjoy making?’”

Every time you think you’re seeing the realisation of some captivating form of non-human intelligence, consider whether you’re simply observing the reflection of humanity. Sometimes, it’s accidental, like the case of Hans the clever horse, soaking up the body language of onlookers. Increasingly, it’s intentional and cynical. Either way, we need to be wary.

The way we consume and understand the risks and benefits of a tool like artificial intelligence matters — both in terms of overestimation and underestimation of risk. Cheap tricks break something that’s already fragile. It might seem like a petty thing to moan about, but god, it’s annoying.




Ketan Joshi

Anecdata analysis, research, writing, caffeine. Science, tech and data communications professional in Sydney.