
A little bit of AI magic

Article by James Flint

I started my career in technology in the 1990s, as an editor on the first, short-lived incarnation of Wired UK. It was a crazy, vertiginous experience, coming into the industry just as the first dotcom boom was taking off. Back then the internet was still young, a promised land for all kinds of wide-eyed ideologues, and at Wired we spent a lot of time tracing the utopian heritage of this new digital frontier to the hippy hinterlands of California as captured in the songs of the Grateful Dead, the poetry of the Beat Generation, and books such as ‘The Electric Kool-Aid Acid Test’, Tom Wolfe’s chronicle of Ken Kesey’s attempt to create a new religion of transcendence through the widespread use of psychedelic drugs.

“He drank the Kool-Aid” became Wired shorthand for someone who’d swallowed so much of the company line that they started not just to believe it but to embody it, a development signalled by tech marketeers beginning to call themselves ‘evangelists’. I still remember the incredulity that struck me when I first saw the term on someone’s business card, and the double take I had to make when I realised, a few moments later, that they meant it seriously. Of course, the term is in common, un-ironic use today.

For all my knowing distance, however, I quaffed a fair bit of the Kool-Aid too. It was hard not to – it was an exciting time, a time of change, of new possibilities, new ideas, new paradigms, and I was young: I drank a lot of things that today I’d consider thoroughly unpalatable, if not downright dangerous. I, too, rode the rollercoaster up, and by the time the millennium arrived and the boom had turned to bust, taking with it both Wired UK and my job, I, like a lot of other people, was feeling pretty disillusioned with the whole leaky ship of technology and all who sailed in her.

In the lull before things got going again with the launch of Facebook in 2004 and the iPhone in 2007, I became a novelist and wrote several books about what was loosely called, at the time, cyberculture. One of them, 52 Ways to Magic America, was about a magician who starts an internet company – a story that gave me an excuse both to do a lot of research into stage illusionism (which I love) and to throw some shade at some of the more outrageous valuation-boosting behaviours I’d witnessed while working the dotcom beat.

It seemed to me then, as it still does now, that many of these tricks had a lot in common with the kinds of misdirection used by conjurers to beguile an audience. Unfortunately not many people understood what I was on about, 52 Ways didn’t sell particularly well, and the book was soon largely forgotten by everyone, including me. In the mid-2010s, however, I started to think about it again, as the theme of misdirection and illusion in the tech industry was taken up by others such as the technology ethicist Tristan Harris.

Harris practised magic himself as a child, studied at Stanford’s Persuasive Technology Lab and became a tech entrepreneur before working as a Design Ethicist at Google. He eventually left Google to co-found the Center for Humane Technology, from which vantage point he has become an astute analyst of the manner in which big tech, particularly social media, uses techniques familiar from magic to capture our attention or prompt certain beliefs, and an extremely vocal critic of the detrimental effects this is having on our democratic way of life.

Now the crank of time has turned again, a new tech boom is underway – that of generative AI – and the question of magical manipulation by technology companies has come around once more. The most cogent statements of it that I’ve come across are in blog posts by Terence Eden and Baldur Bjarnason.

These writers explore the notion that many of the claimed capabilities of generative AI systems have much in common with classic psychic scams such as mind reading, faith healing and prophecy. Rather than any kind of genuine intuition or deductive reasoning, these models use purely statistical projection (a kind of inductive reasoning) to extrapolate patterns from the vast number of correlations encoded in their neural nets.

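To make ‘statistical projection’ a little more concrete, here is a minimal sketch in Python of the idea at its very crudest: a bigram model that counts which words tend to follow which, then generates text by sampling from those counts. The toy corpus is invented purely for illustration, and a real large language model uses a neural network over tokens rather than a lookup table of word counts, but the basic move – continue whatever pattern the statistics suggest – is the same.

```python
import random
from collections import Counter, defaultdict

# A toy corpus, invented purely for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word: a bigram model,
# standing in (very crudely) for the correlations a neural net encodes.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it has followed `prev`."""
    counts = follows[prev]
    if not counts:  # dead end: fall back to any word at random
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# 'Generate' text by repeatedly projecting a statistically plausible continuation.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Nothing in this process knows what a cat or a mat is; it only knows which words have tended to follow which – which is exactly why fluency and truthfulness can come apart.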
 

Not that this technique isn’t powerful and potentially useful; it is. But hallucination isn’t something that can be eradicated from this kind of process – it’s a fundamental artefact of the way the systems work. The systems are, in other words, untruthful by default. Not because they lie, but because they have no concept of truthfulness at all.

They are, however, very convincing, and make it easy for humans to read more into their generated results than is in fact contained within them; to think, in other words, that a generic statement is somehow saying something specific. The category mistake we make when we do this is known as the Barnum or Forer effect, and it is at play when a fortune teller seems to tell you something meaningful about your life, or when you read a horoscope and believe it accurately applies to you.

As Eden writes: “You can find dozens of videos online of people taking ‘personality tests’ which give them ‘intensely personal’ results. People read a series of bland and generic statements and feel like they have truly been understood. Some of them become emotional at having their personality revealed to them. Only then to be told that everyone gets the same results.”

When not making Forer statements that seem more meaningful than they in fact are, generative AI is often reproducing the material it has been trained on with very minor variation and no attribution. Again, this ability isn’t altogether useless in itself – much of human culture is profoundly mimetic, primarily concerned with reproducing variations on familiar themes – but we tend to frown upon it when it’s done without acknowledgement of its antecedents. And when it’s done deliberately and without acknowledgement, we penalise it and call it plagiarism.

What generative AI is doing here is less the creation of new knowledge and more a form of search – something that the main engines (ChatGPT, Claude, Gemini, Copilot, etc.) have started tacitly admitting since Eden and Bjarnason wrote their posts around this time last year, by adding attribution links to their models’ outputs.

The human brain has both evolved and learned in a physical environment, and the structure and regularity of that environment have provided it with a deep logic that we call common sense: an understanding of causality, of how things relate to other things, of the basic ways in which the world works. The brain is subject to hallucinations too (see below), but these structures help constrain our thoughts and encourage them to cohere into some kind of form.

Generative AI, which is generally trained on text or images alone, lacks that experiential structure and needs to be provided with some kind of substitute for it, using techniques such as fine-tuning, retrieval-augmented generation (RAG) and retrieval over knowledge graphs (GraphRAG). A key part of doing good AI governance – what at aiEthix we call “ethical AI by design” – is understanding these different options for constraining the behaviour of generative models and being able to recommend how best to apply them.

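By way of illustration, here is a minimal sketch in Python of the RAG idea: retrieve the passages most relevant to a question and fold them into the prompt, so that the model is instructed to ground its answer in them. The document store, the naive keyword-overlap scoring and the generate() stub are all invented for this example; a real deployment would use embeddings, a vector index and a call to an actual model.

```python
# A minimal retrieval-augmented generation (RAG) sketch.

DOCUMENTS = [
    "Refund requests must be submitted within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm GMT.",
    "Enterprise customers are assigned a dedicated account manager.",
]

def retrieve(query, k=2):
    """Rank stored documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        DOCUMENTS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(prompt):
    """Placeholder for a call to whichever generative model is being governed."""
    return f"[model output constrained by a prompt of {len(prompt)} characters]"

def answer(question):
    """Ground the answer in retrieved context rather than free association."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say that you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

print(answer("How long do I have to request a refund?"))
```

Fine-tuning and GraphRAG constrain the model by other means – roughly speaking, the former bakes domain examples into the model’s weights, while the latter retrieves from an explicitly structured knowledge graph rather than a flat document store – but the governance question is the same in each case: understanding what is constraining the output and whether that constraint fits the use case.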
 

Rather than human-like artificial general intelligence (AGI), this approach is, for the foreseeable future at least, likely to lead to the creation of lots of use-case-specific ANIs (artificial narrow intelligences) that are very focussed on the discipline in which they’ve been trained but not able to generalise effectively outside it.

This runs counter to the relentless focus within the industry on the supposed existential risks of superhuman AGI, which influential figures insist we’re on the verge of building and need to protect ourselves against at all costs. Should we read this as a cynical (and very effective) strategy for regulatory capture, a way of getting governments to produce legislation and funding aimed at big, abstract, future risks rather than at the humdrum, tedious and very present ones of the vast copyright and data protection violations that these models have built into their digital DNA? Or is it evidence that the people marketing this technology have fallen for the Forer effect themselves, and that human minds, too, are susceptible to hallucination – in this case an almost religious fever dream?

Arthur Mensch, the CEO of the French AI company Mistral, one of the leading open-source challengers to the closed-source models of the Silicon Valley giants, certainly thinks the latter. In a philosophically intriguing attempt to set the European AI industry apart from the American one, he recently told the New York Times that “the whole AGI rhetoric is about creating God. I don’t believe in God. I’m a strong atheist. So I don’t believe in AGI.”

Others, such as the vocal deep-learning critic Gary Marcus, warn of the former. Personally, given the vagaries of the human mind, I don’t think it’s impossible that the cynical and the religious impulses exist side by side in the same AI company and even, by turns, in the same individual (Sam Altman and Elon Musk spring to mind). Either way, there’s no doubt that the tech industry is well practised at believing its own hype.

Sorry, I mean drinking its own Kool-Aid. Ken Kesey would have been proud. His spirit lives on.
