
Here comes AI. Are we all going to die?

Written by James Flint | Jan 1, 2024 7:37:00 AM

Part three of a series of blogs on AI Assistants


In my last post I discussed – like everyone else on the planet – the explosion of capability and usage in Large Language Models (LLMs) and Generative Pre-trained Transformers (GPTs), their potential impact on AI assistants and, of course, the calls for regulation to control them. The pace of adoption and adaptation hasn't slackened in the weeks since, and nor have those calls for regulation. But there is plenty of AI regulation and governance coming down the track, from the EU's forthcoming AI Act to the NIST AI Risk Management Framework to the IAPP's AI Governance Centre. So why are the big beasts of AI still panicking? Isn't all this enough? Apparently not, if the debate over the last couple of weeks is anything to go by.


Mr Fox has a suggestion for governance of the hen house

The Future of Life Institute's letter calling for a six-month halt on AI development was one thing, but a lot more people started to take a lot more notice when Geoffrey Hinton, one of the brains behind the crucial backpropagation technique that underpins the learning mechanism in the current batch of LLMs, left his position at Google in order to speak out about the dangers of the technology he has helped to build. Since then, both Hinton and one of his two co-recipients of the 2018 Turing Award, Yoshua Bengio, have signed a Statement on AI Risk put out by the Center for AI Safety.


Hinton is extremely articulate about the existential risks of a technology that is already smarter than any individual human by at least one measure. He thinks it is already showing signs of semantic understanding and the ability to reason, and that this ability will only improve. He doesn't see much hope that development of the tech will be stopped or even slowed, although he does say that it could be a force for good if it's not allowed to take control.

Quite how that's supposed to be prevented… well, that's where the detail gets a bit sketchy. Which isn't to say that there aren't quite a few people out there hoping to save humanity – or at least prolong its death throes – by colouring in some of the gaps. Because Hinton wasn't alone in his call for better regulation. Before he made his intervention, many of the technologists at the centre of the storm, chief among them Sam Altman, CEO of OpenAI (the company behind the now infamous ChatGPT), had practically begged Congress to haul them up in front of a U.S. Senate Judiciary subcommittee so they could tell everyone how dangerous their new tools are.


Quite a few commentators saw all this as both a marketing coup and a strategic manoeuvre: the powers that be in Silicon Valley calling for more regulation so that they can ensure that the regulation is drafted in a way that suits them. Lawmakers don't understand tech, they say; we do, and our tech is now so terrible and powerful that if you don't regulate it right it could destroy you. Best let us help build the henhouse for you, says the fox.


My alignment isn’t your alignment

A day or two after the Senate hearings, OpenAI published a blog post setting out the company's main areas of focus for regulation and "guard rails". These centred on the "alignment of AI with human intent", aka the alignment problem, the main thrust of which is that we'd better concentrate on learning how to do this now, because soon enough we're going to have artificial general intelligence (AGI) that is smarter than us, and if we can't control that we'll really be in trouble.


This raises two questions that OpenAI does not resolve. The first is: which humans are we trying to align the AI with? The interests of those behind OpenAI may not align with those of the millions of middle managers, if AI turns out to be a kind of automated McKinsey that helps with the hollowing out of companies the world over, as Ted Chiang compellingly suggests it might. And it is already highly misaligned with the millions of artists, writers and musicians whose content it is utterly parasitic upon, without showing any sign of sharing attribution or a slice of its financial spoils.


We've seen this situation before; it happened when Google's Book Search project trampled all over copyright laws in the 2000s. Back then it led to a considerable amount of litigation; this time around it has the leading tech commentator and practitioner Jaron Lanier calling for the widespread adoption (and hence enforcement) of something he calls "data dignity": in effect, the practice of insisting that these connections – and corresponding reimbursements – are made.


The second question concerns OpenAI's assumption that AGI is not far away. This is more misdirection. We may have produced a system that behaves very much like – or even, in some respects, rather better than – the language areas in the human brain (known as Broca's and Wernicke's areas). And that is indeed an amazing, epochal and exciting achievement. But it is a long way from creating anything with enough of a sense of itself to have an internalised concept of self-preservation or any kind of self-direction, things that any AGI will need if it's to have enough agency to pose a threat on its own (as opposed to posing a threat because it's being directed by hostile humans). As Yann LeCun, the other of Hinton's co-recipients of the Turing Award, pointed out on Twitter this week, the time to start worrying about controlling AGI is when we've got a working system that's about as smart as a dog. We're oceans away from that, still.


Artificial ← → Intelligence

What we have right now is not as smart as a dog, not remotely. There are plenty of valid arguments that we shouldn't be referring to the current batch of deep learning tools as "Artificial Intelligence" at all. For the sake of discussion, though, let's put those to one side and grant that the term has some purchase. So, let's look at the two words within it.

First: “Artificial”. That’s right. It is artificial. It means made up. Something artificial is a confection, a product, an ersatz issuance of the human. It is not its own thing. If it was its own thing, if it was evolved, self-directed intelligence, then it wouldn’t be artificial any more. It would be the real deal. It would just be intelligence.


And you know the thing about "intelligence", don't you, "intelligence" being the second of our words? Intelligence equivocates. Why? Because the world is not resolvable, and thinking exists inside of it, not outside. Thinking is an action; it unfolds in time and space, and actions have effects that feed back on themselves. The smarter something is, the more it sees and projects these effects, and the more it's aware of its inability to project them very far.


Intelligence weighs things differently every time it thinks about something; it sees new angles, counterfactuals, contradictions, avenues to explore, precisely because the act of thinking itself opens up those new avenues. As artificial intelligence becomes more intelligent, you might reasonably expect it to become more compromised, not less; less confident in its own bullshit, not more.


Perhaps we can think about "artificial intelligence" not as an identifiable phenomenon but as a spectrum or a vector. At one end of it is the artificial: the machine learning systems we have suddenly unleashed on ourselves, spectacular simulacra that don't have much actual intelligence at all but which can successfully regurgitate the collective behavioural "wisdom" contained in the zettabytes of data we've laid down over the last couple of decades – the phenomenon people are getting at when they use the term "stochastic parrot".

And at the other end of the slider we have, well, actual intelligence, which both sees how it acts on the world and agonises about it, confronted as it will be by the fact that all decisions involve guesswork, not just about the current and future states of the surrounding environment but about the intentions of others within it (because if we have one machine like this we'll have many, and they won't all agree).


Given that we live in an evolutionary universe, this means that all intelligences, whatever their provenance, will have to be either cooperative or competitive, and most likely both at the same time. If AI does slide up to the top of the intelligence end of the slider and become AGI, we'll find that we're back to the question of rights if the whole show is not to dissolve into a question of might. Which is the same story that we've been telling ourselves for the last few thousand years and which, indeed, is the story of what we call civilisation.


The elephant vanishes. Or not.

All this brings me to one of the things from which OpenAI's misdirection is designed to misdirect us, which is that boring old bureaucratic chunk of civilised culture called the EU. The EU already had an AI regulation in draft when the whole storm blew up around ChatGPT; over the last month or three, changes have been hastily made to accommodate it. I won't go through those changes here (there are lots of other places you can read about them), or break down the EU's approach (ditto). But what I will say is that, like Jaron Lanier, the EU – and the global data protection industry that has sprung up as a result of the GDPR – does understand something fundamental about regulating AI, which is this: while it helps to understand how the technology works, what you really need to understand is its impacts. That's what you regulate. Which means understanding human rights, rather than just bits and bytes (yes, yes, and decoder-only models and so on).


Because regulation also evolves. Data protection regulation in the form of the GDPR evolved out of the European Convention on Human Rights, which evolved out of the right to privacy enshrined in the Universal Declaration of Human Rights (UDHR); the EU AI Act has in turn evolved out of the GDPR. The AI Act therefore does, and should, place the onus on AI technology to accommodate human rights and all that flows from them; what it should not do is roll back human rights to make way for the machine, which the hidden partiality of the notion of "human alignment" surreptitiously threatens to do.


OpenAI doesn't see things this way; in fact Sam Altman was in London on the very evening I was drafting this blog, threatening that his company will pull out of Europe if the new AI regulation classifies his LLMs as "high risk". With GDPR-inspired regulation now in force in 137 countries, and more to follow, he's likely to have to pull out of a lot more regions than that. Which raises another question: what might his backers at Microsoft (the most privacy-conscious of all the big tech companies) have to say about that, especially as they've now built the tech into Bing and are shortly to add it to the taskbar in Windows 11?

The question that Altman, and the rest of us, need to be asking is not how to regulate around AI in the future, though that will remain a subject of heated discussion. It's how to regulate around the AI we have now, so that it doesn't wreak havoc with our democratic and social conventions but instead might actually help repair some of the damage that two decades of unregulated social media and copyright infringement have done. I do believe that it has the potential to do this. But that's a topic for another blog post.


Image courtesy of DALL-E