
The cathedral of ChatGPT

Article by James Flint

Part two of a series of blogs on AI Assistants

Back in March I began a short series of posts on AI assistants; since I published the first post, however, there has been an explosion of interest in the subject caused by the arrival of ChatGPT & GPT-4 in the public consciousness.

 

What’s happened is, by anyone’s standards, pretty extraordinary. Suddenly we have systems that are outperforming humans on a whole range of hitherto human-specific tasks, from writing essays and software code to passing exams, while communicating in a way that is credibly capable of passing the Turing Test. As a result, ChatGPT became the fastest-adopted new technology in history, reaching 100 million users within the first two months of its release in November of last year.

 

Reams of material have already been written about the impact this will/might have on jobs and productivity, as well as on the future of humanity more generally. Staid researchers at Microsoft have published a serious paper claiming to identify “sparks of artificial general intelligence” in the output of GPT-4 – the kind of claim that brought Google engineer Blake Lemoine widespread ridicule and got him sacked from his job when he made it about another system just a few short months ago.

 

Thousands of tech industry luminaries have been so alarmed by these developments that they signed an open letter calling for a six-month pause in AI development while the world takes stock (or, more cynically, while they try to catch up - signatory Elon Musk has, since signing, founded his own AI company, X.AI, with the aim of producing his own “TruthGPT” chatbot, and put in an order for as many GPUs - the chips required to train and run neural networks - as Nvidia will sell him, which suggests that his commitment to the pause-cause was somewhat disingenuous).

 

Reactions against the suggestion of a pause were equally strident, however. Yann LeCun, Chief AI Scientist at Meta, tweeted that the pause letter reminded him of the Catholic Church’s opposition to the printing press in the Middle Ages.

 

This rang a bell. Back in the 1990s, when I was a tyro editor at Wired magazine and digital technology was still young, the internet was constantly being compared to the printing press. It was a trope that, at the time, worked well on several levels. But does it still? Are large language models (LLMs) such as ChatGPT and their visual cousins – image generators such as MidJourney, Dall-E and Stable Diffusion – really like the printing press? Are they really about the reproduction and dissemination of information, like the Internet before them (bearing in mind that the Internet still exists and is still doing a fine job of just that)? It strikes me that they’re not; that in fact they’re more about the agglomeration and reification of knowledge, the putting of knowledge to work so that it can output new knowledge, or at least new experiences, if new knowledge is too grand a term (though some deep learning tools, notably those built by DeepMind, have produced exactly that).

 

If we’re going to stick with the Catholic Church in the Middle Ages as a source for our similes, then haven’t the parallels been reversed since the 90s? Isn’t it the tech industry itself that’s more like the Church now, rather than the objectors? Aren’t LLMs rather more like cathedrals or mosques, hugely powerful socio-economic engines that draw in information, expertise and resources on a vast scale in order to further a particular agenda, than printing presses threatening to undermine an entrenched hierarchy and corresponding system of belief?

In this context, the pause letter was more like a petition from a worried caucus of worthies and nobles who are not at all sure they want this new behemoth built slap in the centre of town, given the disruptive effect that’s likely to have on their own interests and livelihoods. And even if they weren’t just trying to slow things down so they could start rival AI operations of their own, did they really think they had any realistic chance of putting the power back in its box? Given that this is software we’re talking about, the essence of which (unlike cathedrals) is to be replicable, shutting it down is not going to be easy. Multiple systems (e.g. Google Bard) and domain-specific versions like BloombergGPT are already on the scene.

 

And bear in mind, too, that although it takes tens or even hundreds of millions of dollars in server time to train these models, once trained the weights can be exported and run on much smaller systems, even on laptop computers and mobile phones, as is the case with Facebook's LLaMA.
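
To make that concrete, here is a minimal, illustrative sketch of what running exported weights on an ordinary laptop can look like, using the open-source llama-cpp-python bindings; the model file name is a hypothetical placeholder for whatever quantised weights you happen to have to hand.

```python
# Minimal sketch: running exported, quantised LLaMA-style weights locally
# with the llama-cpp-python bindings. The model path below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-7b-q4.gguf",  # hypothetical path to quantised weights
    n_ctx=2048,                       # context window size
    n_threads=4,                      # a handful of laptop CPU cores is enough
)

response = llm(
    "Summarise the argument for regulating large language models.",
    max_tokens=200,
)
print(response["choices"][0]["text"])
```

Nothing in that requires a data centre once the training is done, which is precisely why a pause would be so hard to police.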

 

If a ban, or even a pause, is impractical, how about some damage limitation? Another of the pause letter luminaries, Gary Marcus, tweeted this on April 3rd [Gary Marcus tweet].

 

In my TED Talk, I may add 3-6 quotes asking pointed rhetorical questions about AI policy, like this fab quote from @CarissaVeliz

“Shouldn't all #tech companies be obligated to be more transparent about the #data they are using to train their systems?”

 

I’m a big fan of Gary’s and have been following his sceptical-but-constructive writing about AI and machine learning for some time now, but I did have to tweet back to point out that there is already quite a large body of data protection (aka privacy) regulation out there that insists on just this. It might be weak enough in the US for OpenAI to have been able to ignore it to date, but the Garante, the Italian data protection authority, has already banned the chatbot Replika from processing the personal data of Italians and flagged concerns about the production of incorrect data and the lack of age restrictions to the creators of ChatGPT.

 

The UK’s ICO has spoken out about the applicability of GDPR both to machine learning training data sets and to personally identifiable data entered in prompts or supplied as context for the LLM to work with (a long document that’s been uploaded for it to summarise, for example), and the European Data Protection Board (EDPB) has already set up a task force dedicated to the matter.
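
To give a flavour of what that means in practice, the sketch below is a minimal, hypothetical example of stripping obvious personally identifiable data out of a document before it is passed to an LLM as context; it uses nothing more than a couple of regular expressions, whereas a real compliance workflow would rely on far more robust PII detection.

```python
import re

# Hypothetical sketch: redact obvious personal data (emails and phone numbers)
# from text before it is sent to an LLM as a prompt or as supporting context.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{8,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[REDACTED EMAIL]", text)
    text = PHONE.sub("[REDACTED PHONE]", text)
    return text

document = "Contact Jane Doe on jane.doe@example.com or +44 20 7946 0000."
prompt = f"Summarise the following document:\n\n{redact(document)}"
print(prompt)
```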

 

On top of that, there’s the forthcoming EU Artificial Intelligence Act to consider. Already published in draft form, the Act has a relatively sensible regulatory framework for AI pretty well worked out and is well on its way to becoming law, a process that will no doubt be hastened by the current kerfuffle – something I’ll discuss in more detail in my next post.

In the meantime it’s worth considering, amidst all the hype (and a lot of it is hype, designed to help these tools find a business model that isn’t currently particularly clear; remember that Google Assistant and Alexa have never made any money), that regulation of LLMs is a perfectly realistic possibility. Looking back at the progress of data protection legislation over the last few years, it’s fairly easy to see a future in which LLMs that abide by regulatory guidelines are given a green light to operate and those that don’t, aren’t, at least in jurisdictions where privacy is taken seriously. It’s also worth mentioning that the ones that take data protection seriously tend to outperform those that don’t: not only do the companies that run them avoid being fined all the time, but the quality of their training data, and thus of their outputs, is much better.

 
