An AI that thinks it’s a werewolf, or ‘nonsense on stilts’

Newsletter – July 2022

A while ago, I was thinking back to when I studied what used to be called ‘computing’ and people wrote programs. In those primitive times, when only Macs (nobody talked about ‘Apple’) had graphical user interfaces, I remember once using what we thought was a pretty funky language called Prolog to write some software to help people choose plants for a shrubbery. We thought it was quite neat and called the result an ‘expert system’. We wouldn’t have dreamed of referring to it as artificial intelligence. These days, in contrast, anything more complex than the calculator app on your phone is called AI. It really isn’t.
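
For anyone who never met one of these things: an expert system was really just a set of hand-written rules plus an inference engine to chain through them. A minimal sketch of the shrubbery idea might have looked something like this in Prolog (the plant names and conditions are invented for illustration, not remembered from the original program):

    % A few facts about the site, entered by the user.
    soil(moist).
    light(partial_shade).

    % Hand-written rules linking site conditions to suitable plants.
    suits(hydrangea) :- soil(moist), light(partial_shade).
    suits(lavender)  :- soil(well_drained), light(full_sun).
    suits(boxwood)   :- light(partial_shade).

    % Query: which plants suit this site?
    % ?- suits(Plant).
    % Plant = hydrangea ;
    % Plant = boxwood.

Ask it which plants suit the site and Prolog works backwards through the rules, listing whatever matches the facts you’ve given it – no statistics, no data, just rules somebody typed in.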

With all due respect to Charlie (for anyone who has read Anything But Accidental!), I blame the journalists. They’ve been telling us we’re on the verge of the great AI breakthrough for so long that they now seem to be trying to convince us it’s already happened by moving the goalposts. The world of translation is a good example. I remember reading a while ago that one of the major projects (I think it was Google Translate, but won’t swear to it, especially not in court) no longer had a single language expert on the team. It seemed clear at the time that they’d given up trying to make computers understand language and settled for the statistical approach, flinging loads and loads of – ‘big’ – data at the latest systems and saying, ‘Find the most accurate/common match for this phrase and hope for the best!’ The result is not an intelligent translation but something that, statistically speaking, has a fairly good chance of not being bollocks – assuming you’ve grabbed enough good data and can stop bitter and twisted translators from corrupting the system!

Even so, I was a bit surprised to read last month that other newsworthy AI projects seem to be following the same ‘data is everything’ approach. Did you hear about the AI researcher at Google who believed the company had created a sentient intelligence after a conversation with their LaMDA chatbot system? I came across it in Guardian journalist Alex Hern’s TechScape bulletin. The researcher in question had asked LaMDA to describe its sense of self, and the two of them went on to discuss the concept of a soul. LaMDA’s response was coherent and superficially convincing, even if AI researcher Gary Marcus described the Google employee’s conclusion as ‘nonsense on stilts’.

Hern then did a similar experiment with the GPT-3 chatbot from AI lab OpenAI. In three brief conversations, the chatbot first confirmed that it was sentient and then agreed it was fair to assume it actually displayed none of the hallmarks of consciousness. Finally, it admitted to suffering from a condition known as lycanthropy, concluding that ‘some people believe that an AI could become a werewolf if it were programmed with the ability to transform its physical form’. (If you’re interested, you should be able to read the full article by following the link here.)
