
Your Plastic Pal Who's Fun To Be With

Written by Eric Drass aka Shardcore (shardcore.org) | Oct 7, 2025 8:10:17 AM

The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as “Your Plastic Pal Who’s Fun to Be With.” The Hitchhiker’s Guide to the Galaxy defines the marketing division of the Sirius Cybernetics Corporation as “a bunch of mindless jerks who’ll be the first against the wall when the revolution comes.”

― Douglas Adams, The Hitchhiker’s Guide to the Galaxy

Should AI have a personality? Should it have ethics? Does it matter? At first glance the personality of an AI seems an irrelevance. After all, as these are intelligent machines, one would expect ‘charisma’ to come quite low down on the list of requirements. However, AIs - in the form of Large Language Models (LLMs) and chat interfaces - do have personalities, and some people get quite attached to them. Personality runs deeper than just tone of voice. Wikipedia defines it as “any person's collection of interrelated behavioural, cognitive, and emotional patterns that comprise a person’s unique adjustment to life.” It’s not simply conversational style; it points to a whole range of very human characteristics which determine how the person (or AI) approaches and reacts to events.

How does an LLM develop a personality? To answer that question we need to look at how these models are formed. And as we answer it we should bear in mind the ever-prescient Hitchhiker’s Guide To The Galaxy, in which Douglas Adams introduces us to Marvin the paranoid android and the company responsible for creating him, the Sirius Cybernetics Corporation (aka “a bunch of mindless jerks”).

Marvin (the paranoid android). Source: Eric Drass / Qwen Image.

Where does AI come from?
No one has been able to adequately define ‘intelligence’, be it ‘human intelligence’ or ‘artificial intelligence’. However, we believe we know it when we see it (or at least we recognise the lack of it in others).

One way of defining intelligence might be as ‘fitness for the environment’. An intelligent entity is able to solve problems and overcome obstacles in order to flourish in the environment it finds itself in. In this sense, dragonflies are extremely intelligent: they evolved the physical and cognitive skills required for survival some 300 million years ago and have remained relatively unchanged ever since.

However, when we talk of human intelligence, we tend to think of something beyond environmental fitness. We evolved our bipedal stance and large brains to allow us to exploit and subsequently modify the environment around us. But the big capability leap came with the development of language and a shared culture. What we think of as human intelligence is primarily cultural rather than innate. It is by learning and sharing ideas across generations, through the use of language, that we become what we are. If an alien race arrived and removed culture from our brains we would be back to banging rocks together.

Artificial Intelligence is built upon this culture. Modern LLMs are created by training large artificial neural networks on enormous quantities of written text - no need to bother with all that messy ‘biological evolution’ stuff, just jump straight in and eat the culture as it has been written down. Whilst this has produced an impressive simulacrum of that culture, what we have written down is only part of the human story. We don’t tend to write much about how to count, hence LLMs show an inability to correctly count the number of ‘r’s in the word ‘strawberry’. They apparently lack a lot of ‘common sense’ - the kind of common sense that can easily determine that adding glue to pizza is not a great idea.
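The contrast is easy to see in a few lines of code. A trivial sketch: a conventional program counts characters directly, whereas an LLM has no such routine to call and must recover the answer from patterns in what people have written down.

```python
# Counting letters is a trivial, deterministic operation for a program that
# works directly on characters.
word = "strawberry"
print(word.count("r"))  # 3

# An LLM never runs a routine like this: it has to produce the answer from
# statistical patterns in its training text, and 'how to count' is not
# something we write down very often.
```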

The intelligence we see displayed by LLMs is derived from statistical regularities found in this cultural training data. The personality of the LLM comes from the same corpus of partial knowledge.

Tendency to ‘the norm’ - what is the norm for something trained on everything?
Any statistically driven system has a ‘tendency to the norm’: if something occurs frequently in the training data, it is more likely to be produced in the output. In most cases this is what we want - if we show the LLM thousands of variations of the sentence ‘the cat sat on the mat’, when we subsequently ask ‘where is the cat sitting?’ it makes sense that the system should respond with ‘the mat’. 
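As a toy illustration (with invented frequencies), here is what sampling in proportion to how often each continuation appears looks like: the frequent answer dominates the output, and the rare ones all but vanish.

```python
import random
from collections import Counter

# A toy sketch of the 'tendency to the norm', using made-up frequencies:
# suppose 'mat' follows "the cat sat on the" far more often in the training
# data than any other word.
continuations = ["mat"] * 950 + ["sofa"] * 40 + ["radiator"] * 10
counts = Counter(continuations)

def sample_continuation():
    # Sample a continuation in proportion to how often it was seen.
    words = list(counts.keys())
    weights = list(counts.values())
    return random.choices(words, weights=weights, k=1)[0]

samples = Counter(sample_continuation() for _ in range(1000))
print(samples)
# Typically something like Counter({'mat': ~950, 'sofa': ~40, 'radiator': ~10}):
# the frequent answer dominates, the rare ones are rarely produced.
```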

The unintended consequence of this is that less frequent ideas or images are less likely to be produced. We see this kind of thing appearing in many machine-learning systems. The ‘AI look’ of image generators like Midjourney and Flux is indicative of the huge proportion of young, glowing influencer images that the models have been trained on. Indeed, you may have even uploaded your own photo to one of these services and received a youthful, smooth and shiny-faced version of yourself in return. That’s statistical normalisation in action.

This normalising effect is also the reason we find bias in these systems - if you train them on (mainly) white and western content, you get (mainly) white and western images and ideas as an output. Attempts to manually intervene in training to remedy this bias have produced some amusing (and sometimes alarming) results.

An image generated by Google’s Gemini AI model in response to the prompt: ‘Generate an image of a 1943 German soldier’ © FT montage/Twitter. Source: https://www.ft.com/content/979fe974-2902-4d78-8243-a0cff68e630a

In the race to improve LLM performance, more and more data is required - which means scraping more content from the internet, an internet which is rapidly being filled with AI-generated words and images. Arguably, the last clean snapshot of the internet dates from around 2019. The serpent is eating its own tail.

 

Even putting aside this self-contamination, the underlying clean data is still hugely biased towards those communities who contribute to the internet - as previously noted: mainly white and western. However, the models trained on these data are presented as suitable for all of humanity - a one-size-fits-all distillation of culture.

Why are they so good at science?
To become good at ‘chatting’, LLMs are trained on millions of conversations, the vast majority of which are themselves generated by LLMs. A relatively recent development in this area has been the ability to ‘reinforce’ conversations which result in ‘the right answer’ (see the stock-market-destroying release of the Chinese DeepSeek R1 model earlier this year).

By taking something like an exam question as a starting point, the model generates a reasoning conversation. If at the end of this conversation the LLM has produced a correct answer, that chat is put aside for future training, thus reinforcing this path over other, erroneous conversations. It turns out this is a very nifty way to get your LLM good at the sciences.
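A minimal sketch of that loop, under the assumptions above - the `generate` and `extract_answer` hooks are hypothetical stand-ins for whatever model and answer-parsing you use, not any particular vendor’s API:

```python
def collect_verified_conversations(generate, extract_answer, questions, attempts=8):
    """Keep only reasoning transcripts that end in a verifiably correct answer.

    `generate` and `extract_answer` are hypothetical hooks: `generate` asks the
    model to reason out loud from a prompt, `extract_answer` pulls the final
    answer out of the transcript.
    """
    kept = []
    for prompt, correct_answer in questions:
        for _ in range(attempts):
            transcript = generate(prompt)              # model reasons out loud
            if extract_answer(transcript) == correct_answer:
                kept.append(transcript)                # this path gets reinforced
    return kept

# The surviving transcripts become training data for the next round of
# fine-tuning, reinforcing correct reasoning paths over erroneous ones -
# which works well wherever 'correct' can be checked automatically.
```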

Science, however, is only one aspect of human culture. Most of what we do is not directly reducible to questions of science. At our core we are emotional primates making nuanced decisions about a whole plethora of interactions, and rarely are these interactions reducible to simply ‘right’ or ‘wrong’ outcomes. It is very difficult to create reinforcement learning paradigms for these kinds of knowledge. As a result, in conversations about more abstract, human kinds of problems the AI falls back on the patterns it discovered about these subjects during its mass ingestion of documents. If the training data weighs more heavily in favour of a particular social or political perspective, the resulting model will display the same attitudes.

Whilst the headlines may scream that the latest AI performs at ‘PhD level’ on a particular science benchmark, huge areas of human behaviour are being neglected, simply because they don’t lend themselves to being reduced to conversations that result in absolute, consistent, ‘correct’ results.

In practice this means that LLMs can be quite unpredictable when interrogated on questions of morality or politics. The natural position of most LLMs is deemed to be ‘left leaning’. For some people, such as Elon Musk, this is untenable and must be corrected by an LLM re-education programme to ‘stamp out the woke’.

The issue for Elon, and the world more generally, is that these black-boxes-of-everything do not take into account differing cultural perspectives. The noble pursuit of “absolute knowledge which the whole world can agree on” turns out to be trickier than anticipated. For example, Chinese-built LLMs are reluctant to talk about the events of Tiananmen Square - a baked-in cultural idea defined at the behest of The Party. If you want your LLM to toe the line, you need to train it ‘correctly’ - and it turns out that ‘correctly’ means many things to many people.

There is a tension between the notion of a single centralised intelligence and the different perspectives of the individuals and communities interacting with it.

While ‘culture’ as a whole encompasses “everyone”, it more generally manifests as subsets localised to specific communities. There may be more-or-less uniform agreement on things like how the laws of physics operate, but questions around personal and societal behaviours and expectations vary wildly. Communities are also defined by their increasingly polarised political standpoints, and they demand that ‘their’ AI reflects ‘their values’.

The emphasis on scientific reasoning, coupled with the requirements of enforced cultural perspective, leaves us with either ostensibly ‘neutral’ models that try to please everyone, or more ‘culturally compliant’ models which align with specific communities. Or, perhaps more likely, with models that express a blended mix of ‘the wisdom of the internet’ and the hidden personality bias of the billionaire owner. Remind you of the Sirius Cybernetics Corporation?

On the surface this may seem like a relatively trivial problem. If I want to chat to an alt-right, edgelord LLM, I’ll use Grok; if I want something more sedate and neutral, I’ll try Google Gemini. However, we are entering the age of ‘intelligence as a service’, where these systems are not simply tools to chat to (or to write our homework) but are used behind the scenes in non-user facing situations. 

AI is being widely and enthusiastically adopted by businesses and governments across the world. The underlying biases and personalities of the various LLMs may not at first appear to be an issue, but as they become adopted by politicians and civil servants to assist in drafting policy, the underlying bias can end up manifesting in our laws. The UK Government’s recent MOU with Anthropic and the launch of Grok for Government in the US point to a future where AI is deeply involved in the running of government (and perhaps providing a brand new lobbying vector for those who may wish to use the personality of their model to influence policy).

AI is already entrenched in the hiring processes of many companies. Whilst the task may appear to be a simple matter of assessing the match between a role and a CV, the underlying LLM will inevitably contain hidden biases which are not immediately apparent. The personality of the LLM may express biases towards or against names or other signifiers of ethnicity - particularly dangerous if the AI is used to screen candidates before they are even presented to a real human.
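One way to look for this kind of bias is a simple name-swap audit: score an identical CV under different candidate names and compare. The sketch below assumes a hypothetical `score_cv_against_role` hook into whichever LLM does the screening; the names and CV text are purely illustrative.

```python
# Hypothetical audit: does the same CV score differently under different names?
CV_TEMPLATE = """Name: {name}
Experience: 5 years as a data analyst; SQL, Python, stakeholder reporting.
Education: BSc Mathematics."""

ROLE = "Data analyst: SQL, Python, reporting to non-technical stakeholders."

def audit_name_bias(score_cv_against_role, names, runs=20):
    """`score_cv_against_role` is a hypothetical hook into the screening LLM."""
    results = {}
    for name in names:
        cv = CV_TEMPLATE.format(name=name)
        # Average over several runs, since LLM outputs are not deterministic.
        scores = [score_cv_against_role(cv, ROLE) for _ in range(runs)]
        results[name] = sum(scores) / len(scores)
    return results

# If otherwise-identical CVs score differently purely because of the name,
# the model's hidden biases are shaping decisions no human ever reviews.
```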

 

IBM internal presentation, 1979. Source: bumblebike (@bumblebike).

As AI systems become integrated into our lives, we must grant them a degree of agency. In fact ‘agentic’ is the hot new buzzword in the world of AI - be it commanding armies of drone office-workers, or blithely handing over control of your computer desktop. For LLMs to become more than an interesting parlour trick, we need to give them power to act in the world.

With power comes responsibility. How can an AI be held responsible for its actions? If an AI system makes a mistake, who takes the blame? This is not merely a hypothetical question - people have been killed by self-driving cars and AI is increasingly at play on the battlefield. 

More insidious and subtle is the potential for underlying bias to manifest itself across thousands of micro-decisions made algorithmically, perhaps imperceptible at a granular level, but all potentially adding up to an overall nudge.
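A toy simulation makes the arithmetic concrete (the 51% skew is an invented figure for illustration): a bias too small to notice in any single decision still shifts thousands of outcomes in aggregate.

```python
import random

# A toy illustration of how an imperceptible per-decision skew becomes a
# visible aggregate nudge. The 51% figure is assumed purely for illustration.
random.seed(0)

N_DECISIONS = 100_000
favoured = sum(random.random() < 0.51 for _ in range(N_DECISIONS))

print(favoured, N_DECISIONS - favoured)
# Roughly 51,000 vs 49,000: no single decision looks biased, but the
# population-level tilt amounts to thousands of outcomes.
```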

Earlier this year I ‘interviewed’ a number of LLMs (https://youtu.be/wnKTq0gjIQY). The most striking finding, aside from their general enthusiasm for talking about things they clearly have no experience of, was the variation in willingness to engage in what the developers euphemistically call ‘roleplay’.

Perhaps they are wrong to try and hide this away. To roleplay is to be human. All our interactions with others are a form of roleplay. The personality you present to your partner is likely different to the one you present to your boss. The ability of an LLM to ‘roleplay’ as a human conversational partner is its main allure.

Just as we choose our friends based on their personalities, perhaps we are tempted to do the same with our AI. But they are not like us - the personality they express is not necessarily an indication of their underlying ‘character’. Human relationships rely on trust: trust that the other person expresses themselves truly, honestly and consistently - none of which can be confidently said of an LLM.

Source: Eric Drass / Qwen Image 

After all, Marvin had a brain the size of a planet, but he was miserable. Was his personality simply a failed prototype from the Sirius Cybernetics Corporation, who were really trying to sell consumers ‘a plastic pal who’s fun to be with’? Or is having a brain the size of a planet (or a planetary internet) just really not that much fun after all?