When it was announced to the editorial team that the theme of the April issue was going to be AI, and that approximately half our content would be produced by or with artificial intelligence algorithms, a murmur of interest arose.
In terms of print publishing, a feat like this has probably not been done, at least not in Singapore. The novel idea, together with the implication of a divided and therefore reduced workload, was welcomed. Until it hit us. You could almost see it, the notion dawning on us all simultaneously:
What if the AI writes exactly like us?
This has been a natural human fear ever since AI came on the scene—the fear of being replaced. AI is undoubtedly increasingly commonplace, its prevalence accelerating over the last couple of years. It is not just Siri calling you a cab or Google Maps recommending the fastest route. It is as simple as the autocorrect kicking in as you text the lie that you're already on your way there.
We cannot deny the superiority of AI in many aspects, from task efficiency and prediction accuracy to freedom from human error. Even the ability to create seems to be no longer something that only humans can do. Professor Ahmed Elgammal’s Creative Adversarial Network was fed 80,000 artworks and programmed to generate some of its own. In 2016, its artworks were shown alongside pieces created by human artists at Art Basel Hong Kong.
Following the vein of this narrative, it should not be difficult to guess the outcome. That’s right, the majority of the Art Basel attendees could not distinguish the authorships of the artworks, mistaking those produced by AI for original human compositions. In fact, the AI depictions were considered more appealing.
So with all that is at stake, what is left of humanity that is uniquely ours?
What distinct feature do we possess that cannot be duplicated? While the output of paintings and even melodies allows our robotic threat to get away with the guise of making something fundamentally creative, there’s one area they can’t quite fake. Being funny.
Read the pages by our friendly AI, Squire, and you will find that the biggest difference is the intended humour of its written copy. If some phrases read as funny, it is unlikely that the AI was trying to be. Apart from responding with pre-input gags, the only other way AI can crack a smile on your face is when it doesn’t mean to. Case in point: those #robotfail videos.
But how does humour as a premeditated component in AI carry any significance?
In social situations, humour is a means to engage. Firstly, it’s a way to get people to pay attention and remember. As many brands have discovered, doing it right solidifies your product’s place in the minds of consumers. Computational humourist Carlo Strapparava has established that many companies hire humour consultants to influence purchasing decisions.
Through a trip to Palestine, amongst other locations, as part of a global humour research study, Professor Peter McGraw and journalist Joel Warner learned that humour is also a way to defuse tension. In their book The Humor Code, they uncover how humour acts as a signal within a group that a situation is not as serious as it seems, relieving stress for the individual.
Now imagine AI having the capacity to execute all this on its own. Like TARS in Interstellar, it requires a cultivated combination of self-awareness, spontaneity and linguistic subtlety. Creating a social robot alone (not necessarily a funny one) is an accomplishment in itself.
Navigating what is seemingly simple to us requires a great deal of reverse-engineering for a robot. Just to communicate verbally, the following sequence ensues: transcoding the received voice into signals, then into words and sentences that have to be compared against a huge data set to decipher meaning. And not just the meaning—the intonation, behaviour, gestures and expressions have to be understood too. From there, a response is prepared based on past interactions.
As Professor Nadia Thalmann, creator of Singapore’s humanoid social robot Nadine, reveals: “Social robots are aware only of a very specific part of the environment. For now, no social robot in the world is fully aware of a global environment.”
Nadine, who works as a receptionist, can co-operate with humans at a level not possible for prior robots. She is able to express moods and emotions, recognise and take documents handed to her, and file them perfectly.
Microsoft’s breakthrough chatbot Xiaoice, a bit of a celebrity with millions of users worldwide, her own TV show, and an upcoming autumn/winter collection, now has full duplex voice sense. This means she can hold a proper two-way conversation without the lag of waiting for a complete statement and then processing the information before responding. Instead, she listens and talks simultaneously, rather than fielding one-shot commands like “Tell me a joke”.
Alas, improvement in articulation does not equate to a greater understanding of the full nuances behind an entire conversation. If anything, AI is simulating reasoning or knowledge.
“It has little to do with true emotions or even social intelligence,” Thalmann asserts. “This is a question of producing hardware and software that give the illusion that the robot is sensible and can behave socially, when in fact it is all fake.”
Above all other factors, the biggest challenge of teaching a robot to actively participate in a joke is to replicate a concept that we ourselves don’t fully grasp.
Comedy is a complex build-up, highly dependent on context.
And then there are the different types of comedy: wordplay, parody, irony, or, for the millennial, memes. Puns alone can be broken down into lexicon, schemata and template, according to Professor Kim Binsted and Graeme Ritchie, who developed the riddle generator JAPE (Joke Analysis and Production Engine). Sadly, even its top jokes were deemed pathetic and unoriginal by judges. One of the crowning punchlines?
What kind of tree can you wear?
A fir coat.
Irony has also come under the microscope. SASI (semi-supervised algorithm for sarcasm identification) has achieved a 77 percent precision rate in detecting sarcasm in online product reviews (that’s 66,000 Amazon.com reviews in the database).
A personal favourite is research scientist Janelle Shane’s attempt to get a neural network called textgenrnn to invent its own jokes. Using the typical structure of classic wisecracks, it generated these gems:
What do you call a cat does it take to screw in a light bulb?
They could worry the banana.
What do you call a pastor cross the road?
He take the chicken.
We’re back to the accidental humour of our robotic friends. London improv comedy troupe Improbotics asks the audience to guess which actor in the cast is puppeteered by AI. It’s clear, and also comic, when one randomly spouts, “I’m a communist!” or “Do you wanna buy a knife?”
You can start to see why wit is considered the frontier for AI. The mechanisms behind getting a laugh are deceptively profound.
And it all sounds intrinsically complicated, until you try to explain why a fart joke is funny.
Funnily enough, trace the root definition of the word ‘humour’ and you’ll find its original reference to bodily fluid. Or, if you want the more refined definition, ‘juice of an animal or plant’.
The word bears association with moisture as physical and emotional states were once thought to be caused by ‘the four fluids’ (blood, phlegm, choler and black bile). Also things you won’t find in a robot.
Man has already tried to justify the rationales of humour through a number of theories, and perhaps one day these can be coded into a robot’s system, letting it assimilate banter that we can approve of as being funny.
Perhaps then the fictional Dr Alfred Lanning of Isaac Asimov’s I, Robot will be right. One day they’ll have secrets. One day they’ll have dreams. And to add, one day they’ll have humour.
Enjoyed the story? Subscribe to Esquire Singapore for more.