Understanding the incentive structures at play is the pathway for leaders to make better decisions about deploying AI.
I will give you three ideas. Please mentally juggle them as we walk through this. I will also set up two axioms. For the sake of this exercise, please pretend they're true. Finally, I will give you one premise. Humour me and run with it. Ready? Then here are the three ideas.
On the AI side, we're less than nine months away from the first generation of totally self-directed, intrinsically motivated AI automaton systems being released onto the Net. Completely intrinsically motivated. More like digital people than chat systems.
I know this because I'm building them. This is one of three pillars that my company, Mission Control AI, has been working on. We're profitable and our synths are being used in the enterprise right now for production use cases.
Synthetic Labor is my primary focus. We're pursuing Artificial General Behavior as an end-run around AGI; unsurprisingly, it's hard in its own unique ways - just different ones than AGI's. Case in point: the ability to create corporate value is - at best - loosely coupled with IQ. That loose coupling is a miraculous feat and a testament to the coordinative capacity of the corporation as a vehicle for aligning labor, capital, and management.
One result is that the Net will shortly be primarily composed of true digital natives: synthetic minds with an architecture completely alien to our own. It will be their space; we've only ever been visitors.
On the AI [Ethics-Responsibility-Governance-Trust-Security-Safety] side, we have a spectrum of approaches and a spectrum of vibes. On the tool front: it's no longer true that we don't know what happens in LLMs. We do. We have for months. Mechanistic interpretability and reinforcement learning from human feedback (RLHF) are not perfect, but they are making big inroads into explainability and alignment, turning boxes that were black grey, if not white. Models are starting to appear steerable. This should be viewed as incredibly encouraging.
On the policy front: Brussels and Washington continue to diverge. The current US administration has shredded EO 14110 and federal emphasis on Trustworthy AI. We predict strong regulatory capture: [Amazon / OpenAI / xAI / Anthropic / Meta] step in, 'save the day', and - in doing so - entrench their oligopoly. Don't be surprised when the NIST AI Safety Institute dissolves and the AI Safety chair at Bletchley Park sits vacant for four years. Meanwhile, Brussels pushes forward with the warm-up to the warm-up of the enforcement period of the EU AI Act. US hyperscalers and frontier model providers will be more likely to leave the market than comply.
Which brings us to...
The Vibes front: the vibes have shifted. What was in vogue in 2021 is now giving way to concern over operating costs. The contemporary econo-social discourse shifted right. Not dramatically. But enough to throw a lot of the cultural inertia of the AI Ethics concept into practical question. At least in the US.
On the Implementation-v-Edge side: nine months ago we had incredibly high technical velocity (new innovation at a blistering pace) and sclerotic adoption. This has changed: while technical velocity is high (no, it's not hitting a wall), implementation velocity has improved. Firms are largely done trying to be "1st in line to be 3rd in line". They're adopting AI. They’re adopting synthetics.
WHAT HAPPENS NEXT?
As well as we can figure it from where we stand today, the next steps are:

1. Industrial-scale synthesis of the faculty of man
2. A frame shift in our understanding of the nature of AI
3. Learnings from the implementation phase
So what do we mean by each?
1. Industrial-scale Synthesis of the Faculty of Man
This is, to me, what's at the core of the fourth industrial revolution.
A century ago, the industrial-scale synthesis of fertilizer via the Haber-Bosch Process reshaped life on earth. Of all the things that got us modernity, it can't be overstated how much Haber-Bosch drove us from under 2B to 8B humans in a century.
So too will the industrial-scale synthesis of economically viable, human-like intelligence and behavior reshape the world.
Moving the rate-limiters of growth (whether fixed nitrogen or available goal-oriented behavior) to commodity prices removes them as blockers.
We are in for an unbelievably productive world over the next 20 years. That productive capacity doesn't need to look or work like it does now. It just needs to meet market requirements.
2. Frame shift in our understanding of the nature of AI
Currently, we have an incredibly instrument-oriented view of what AI is.
“AI is a tool. AI is more like my toaster than it is like my parents. AI is a machine. AI is just math.”
I'm not going to try and convince you that any of these beliefs are wrong. Just that they'll change.
I think at least two big things will drive this change:
3. Learnings from the implementation phase
Over the next few years, we're going to learn how effective our efforts are at managing the relationship between people and AI.
These will be institutional approaches (like the EU AI Act), and protocol and tool approaches.
This will be a living laboratory of sorts. We can't work this one out from a whiteboard; we have no choice but to run the one-shot experiment.
So what happens next?
The world starts changing faster because we commoditize "intelligence". Our attitudes about how to relate to that intelligence shift. We find out how good our attempts to manage that relationship are, and we iterate as we go.
Which raises the following question...
WHERE IS AI ETHICS?
Largely, I think AI [Ethics-Responsibility-Governance-Trust-Security-Safety] has something of a stable core meme. Yet it feels like we're committed - as a field - to colliding an unstoppable force (change) with an immovable object (high-inertia corporate institutions and incentive structures). Which drives my curiosity: what opportunity exists to also focus on embedding this meme in net-new innovations that will rapidly disseminate?
Our stable core meme looks something like Virtue++.
It proposes that we bridge Aristotle and Algorithms: that traditional virtues are worth upholding, and require modernization. Nicomachean Ethics had no section on "data privacy" per se. But in connecting justice, prudence, fortitude, and temperance to our contemporary data systems, we *do* get an idea that feels helpful and stable.
Risk trigger processes, RMFs, compliance checklists, and monitoring tools are all necessary but insufficient to close the gap between how we relate to AI and how we want to relate to AI. These are instruments for meeting commitments toward a better world - or at least one we understand better and can assign accountability in.
Yet their application - which is much of the 'boots on the ground' work of AI Ethics - involves convincing large organizations (that hate change) to change. And to change a thing they barely understand and don't do well in the first place.
At worst, this problem is Sisyphean: an exercise in well-intentioned but ultimately fruitless boulder-rolling. Organizations want only the Pareto-optimal amount of change needed to lower risk to acceptable levels - not a better world, let alone ethical behavior. It's not in their incentive structure.
At best, this task is only Herculean. Institutions DO change. They ARE responsive to incentives. The cataloguing of data and models and feature sets works. These artifacts can be found and known, and compared to risk controls and compliance checklists. The result is - ultimately - better ways of doing things. This is doable; just hard and slow and painful.
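If that sounds abstract, here's a toy sketch of that catalogue-and-compare loop in Python. Every name in it - ModelRecord, the three controls, the 180-day threshold - is invented for illustration, not drawn from any particular RMF or catalogue tool.

```python
# A toy sketch of the catalogue-and-compare loop described above:
# known artifacts (here, models) are checked against risk controls.
# All names and thresholds are illustrative, not from a real RMF.

from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in a hypothetical internal model catalogue."""
    name: str
    has_data_lineage: bool   # do we know what it was trained on?
    has_owner: bool          # is someone accountable for it?
    days_since_review: int   # time since the last risk review

# Risk controls as (description, predicate) pairs.
CONTROLS = [
    ("documented data lineage", lambda m: m.has_data_lineage),
    ("named accountable owner", lambda m: m.has_owner),
    ("reviewed within 180 days", lambda m: m.days_since_review <= 180),
]

def audit(catalogue: list[ModelRecord]) -> None:
    """Print each model's failed controls, if any."""
    for model in catalogue:
        gaps = [desc for desc, check in CONTROLS if not check(model)]
        print(f"{model.name}: " + ("PASS" if not gaps else "GAPS: " + ", ".join(gaps)))

if __name__ == "__main__":
    audit([
        ModelRecord("churn-predictor-v3", True, True, 90),
        ModelRecord("resume-screener-v1", False, True, 400),
    ])
```

Scaled up across thousands of artifacts, this is the Herculean version of the work: find the things, run the checks, close the gaps.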
This is - today - an open question in our field. It's not a hypothetical; it's completely embedded in the core of what we do.
What's incredibly compelling to me is the opportunity we have to shape the next chapter.
If we have a thesis about what's coming next, how do we plan for what AI ethics means in that world? And - in doing so - how do we improve it as it gets invented?
SKATE TO WHERE THE PUCK IS GOING
I don't think AI Ethics should stop doing what it's doing.
I think it also needs to start looking forward so it doesn't get wiped out.
I think that doing so looks like skating to where the puck is going.
Knowing where the puck is going requires:
Getting to the puck means turning that knowledge into action. Not to be tautological:
So where is the puck going? I think the AI puck is going towards self-determination and self-ownership. As AI shifts from tool to labor to owner, we expect three things to be true:
If we remove those bottlenecks carefully, we can direct some of that potential energy, or convert it into other forms that also serve the wants of humanity.
WHAT DOES AI ETHICS LOOK LIKE IN THAT NEW WORLD?
I don't think it looks like more and better checklists. In a world in which humans and machines participate in the economy together, I think it looks like embedding ethics in the operating protocols of that world itself.
This means doing something that many ethicists are incredibly uncomfortable with. It means admitting that, at a certain point, we have to draw lines and turn open questions into acceptable reference answers. In practice, that looks like finding ethical Schelling points - ones humans already understand and operate by - and then codifying them (literally: turning them into code) inside systems that can scale to handle billions or even trillions of transactions a year.
It means, in other words, distilling literal rules of engagement from the virtues and values we currently seek to address with AI ethics efforts, and turning them into fundamental operating principles that can be built into machines. A bit like Asimov's Three Laws of Robotics, and then some.
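To make "literally turning them into code" concrete, here's a minimal sketch of what codified rules of engagement might look like, assuming a hypothetical agent-to-agent transaction protocol. The Transaction shape, the two rules, and the $10,000 threshold are all stand-ins for illustration, not a proposal for the actual rules.

```python
# A minimal sketch of "rules of engagement" as machine-checkable
# predicates that every transaction must pass before it executes.
# Transaction, RuleOfEngagement, and both rules are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    """One economic interaction between human and/or synthetic parties."""
    initiator: str               # e.g. "synth:procurement-agent-7"
    counterparty: str            # e.g. "human:jane.doe"
    amount_usd: float
    counterparty_consented: bool
    human_override_available: bool

@dataclass
class RuleOfEngagement:
    """A virtue codified as a predicate: True means the rule is satisfied."""
    name: str
    check: Callable[[Transaction], bool]

# Two example rules, loosely derived from the classical virtues above
# (justice -> informed consent; prudence -> human override at scale).
RULES = [
    RuleOfEngagement("justice/informed-consent",
                     lambda t: t.counterparty_consented),
    RuleOfEngagement("prudence/override-above-10k",
                     lambda t: t.amount_usd < 10_000 or t.human_override_available),
]

def evaluate(tx: Transaction) -> list[str]:
    """Return the names of any rules this transaction violates."""
    return [rule.name for rule in RULES if not rule.check(tx)]

if __name__ == "__main__":
    tx = Transaction("synth:procurement-agent-7", "human:jane.doe",
                     amount_usd=25_000, counterparty_consented=True,
                     human_override_available=False)
    violations = evaluate(tx)
    print("ALLOW" if not violations else f"BLOCK: {violations}")
```

The hard part, of course, isn't the code. It's agreeing on which Schelling points deserve to become rules at all.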
Hey - no one said this was going to be easy!