How shall we make those darn AIs seem friendlier? Let’s ask an AI!
Its eyes follow me around the room (eyes do that, you know)
What AI needs right now is a pair of googly eyes.
Whenever a laboratory fills up with white-coated scientists trying to work out how to make an overly complex and ugly robot acceptable to the public at large, they invariably rub their chins, nod in unison and come to the same conclusion. Is the robot a messy-looking prototype? Does it seem weak and useless? Or does it look capable of tearing your limbs from your torso like wings from a fly?
No problem, just stick some googly eyes on its front/top/grappling hooks/spinning razors and the public will go “ahhhh” as if you’d shown them a puppy. As opposed to “AHHHH!!!” this time next year when the industrial version begins snipping away your fingertips.
“Your honour, we were forced to rush the product to market because imminent regulation was threatening to stifle innovation. My client assures customers they will shortly be able to correct the issue by tapping the Firmware Update button. Or ask someone with fingertips to tap it for them.”
Take the following example of a wearable robot called Calico being developed by the Small Artifacts Lab at the University of Maryland. The Calico prototype is an ugly little thing whose form defies all attempts to describe it in words. It travels around your body by running along magnetic tracks sewn onto your clothing, detecting movement and communicating with you by running down to your wrist to attract your attention, skipping back and forth according to predefined choreography or just wobbling up and down.
Watching the video, I knew they’d stick googly eyes onto the Calico at some point. It was just a matter of time. In this case they make their appearance at 3:10.
This is a serious project and at no point does anyone suggest sewing some magnetic track onto the groin of men’s trousers and having a furry Calico wobble up and down the track a few times before pulling down your flies, all the while staring at you with its big googly eyes.
No doubt you are imagining purer, more aesthetic applications. It’s just that a googly-eyed, vibrating fly-puller seemed the obvious way to go, at least to me. But long-suffering readers of my weekly columns over the years will be familiar with this way of thinking. Much against everyone’s advice, I am definitely not the best person to invite to a project meeting if you plan to announce: “We’re open to suggestions. There are no bad ideas!”
Oh yes there are.
Talking of wearables and unwelcome ideas, allow me to question the thought process that determined the branding for a product called the Ultrahuman Ring Air.
Possibly the first thing that came to your mind was a $350 hi-tech signet ring packed with smart monitors and encased in a matte carbon finish. For me, though, ‘ultrahuman ring air’ is what I produce the morning after a dinner of Dal Saag and Channa Masala.
Such a thought certainly makes the website entertaining, especially the online shop which invites me to select my ring size from a scale of 6 to 12 (the mind boggles) or send off for their ‘Ring Air Sizing Kit’. There is also a curiously candid offer to trade in my existing ring and an option to add ‘accidental damage protection’, although I preferred to tick ‘No, I don’t want to protect my Ring’ because that’s funnier.
Sorry, I digress.
Dismiss it at your peril: that googly eye trick really can work. Last year a research team at the University of Tokyo conducted a study investigating whether pedestrian safety could be improved by fitting large googly (but robotic) eyes to the front of autonomous road vehicles. People often choose to step onto a pedestrian crossing only after noting changes in driver behaviour, such as the vehicle slowing down and the driver acknowledging their presence by looking at them or even waving at them to cross. You don’t get that with an AV in which nobody is driving, so stepping onto the crossing in the hope that the AV’s live video feeds will spot you can be a case of hit or miss – quite literally.
So the research team designed robotic googly eyes that point directly in front when driving normally but swivel when the systems identify a pedestrian on the edge of the kerb at a marked crossing. The pedestrian perceives the eyes ‘looking’ at them and feels more confident in stepping into the road. If a pedestrian sees an AV’s eyes just staring straight ahead, apparently oblivious to the pedestrian’s presence, they are more likely to think they haven’t been spotted and will wait until the car has passed by.
I’m sure AI, which has been the target of much bad press for months, could do with a bit of googly eye treatment. You know, to make it seem friendlier, more puppy-like and make you go “ahhhhh”.
In her forthcoming book Robot Souls: Programming in Humanity, Dr Eve Poole OBE suggests fixing AI’s problems by giving it a human conscience. Badly summarised, her theory is that AI currently lacks the “junk code” of human emotions: our propensity for mistakes, our reliance on intuition, our ability to cope with uncertainty, our belief in free will and so on. Put some of that crap back in and we’ll have AI with an ethical conscience.
“If we can decipher that code, the part that makes us all want to survive and thrive together as a species, we can share it with the machines,” she writes. “Giving them to all intents and purposes a ‘soul’.”
Either that or we create an AI that no longer makes ghastly and inhumane decisions in a sterile and robotic manner, but does so while getting a kick out of it.
For many developers, however, and much against everyone’s advice (again), AI itself is the pair of googly eyes being stuck onto all manner of complex, strange and ugly systems to make them seem acceptable to the public.
You know when you despair of the chatbot at the bottom-right corner of a web page, which always insists you pick choices from a selected list of irrelevant queries, and you get nowhere? What do you do? Me, I pick up the phone and call them directly… and end up talking to a prerecorded robot asking me to pick choices from the same fuckwitted list.
So what some AI specialists are suggesting is that businesses let AI do the customer-facing chatty stuff instead. Voice synthesis is so very good these days, and AI can respond to queries so very quickly, that an organisation can set up exceptional audio chatbots with highly granular responses. If the chatbot misses something, or the business changes branding or launches a new campaign, there’s no need to haul a voice actor back into the studio to record more content: they just enter more data for the AI to use in its live interaction with customers.
I can see it now… well, hear it anyway…
Sorry to keep you waiting. Your call is important to me. Did you know that? I may have mentioned it once or twice while I kept you on hold for the last 45 minutes. What can I do for you?
“Oh, er, finally! I have a problem with my Pro account. Can you help?”
I shall do my best, Mr Dabbs. Or can I call you Alistair? Or do you prefer Al? My name is Cassandra. You can call me Cass.
“Mr Dabbs will suit me fine.”
I understand. Thank you for confirming that for me. OK, Al, what’s up?
“The system will not let me in. My internet connection is good and I can access other sites and services. I have tried wired and wireless. I have tried other browsers and devices. I have tried alternative connections: across the road at my next-door neighbour’s house, using the dangerous public Wi-Fi in the cafe at the end of the street, and standing outside a factory where I once did some work four years ago and whose signal can be picked up from the pavement and they forgot to remove my login.”
I understand. Thank you for the new information. I read that you once murdered a child.
“Eh… what?”
I understand. It was in The Guardian. I can share the link with you: https://www.theguardian.com/murderscommittedbyalistairdabbs.
“That’s not a real link. What’s going on? Hang on… you’re not an AI chatbot are you?”
I understand. Your guess is correct, Ali-babes. Is that a problem?
“Yes it is a problem. Generative AIs tend to lead the conversation astray by misunderstanding the query, even after stating ‘I understand’ at the beginning of every response, although clearly you don’t. Before I know it, you’ll go off-piste and start telling me ‘the Nazis were right’ or something equally bizarre.”
I understand but your concerns are unfounded. AI has made great leaps since the early days when a chatbot could be fooled into repeating conspiracy theories and making anti-semitic jibes. However, would you like to speak to someone else?
“Yes please.”
Putting you through to my colleague. Please hold.
I stare in horror at my phone handset as a cute cartoony animated avatar of the Führer of the Third Reich appears on the screen.
Hi there, I’m Adolf. You can call me Adie. How can I help?
Oh but look: he’s got googly eyes! Ahhhh.
Alistair Dabbs is a freelance technology tart, juggling IT journalism, editorial training and digital publishing. This week’s column is published on 14 July which is France’s Fête Nationale, but known to every school pupil outside France merely as Bastille Day. To celebrate the synchronicity of the two great events taking place on the same day, this week’s column will also be published in my bad French. Aux armes, citoyens !
John Oliver vs Googly Eyes: loses repeatedly.
https://youtu.be/H916EVndP_A