
IMAGINING OUR [ACTUAL] FUTURE WITH AI

Article by Ramsay Brown

Understanding the incentive structures at play is the pathway for leaders to make better decisions about deploying AI.  

I will give you three ideas. Please mentally juggle them as we walk through this. I will also set up two axioms. For the sake of this exercise, please pretend they're true. Finally, I will give you one premise. Humour me and run with it. Ready? Then here are the three ideas. 

  1.  Munger said it best: "Show me the incentives and I will tell you the outcomes".
  2.   Humanity, Capital, and Technology want three totally unrelated things. Technology wants iterative self-improvement of form and function (thanks Kelly), Capital wants to flow frictionlessly (thanks Piketty), and Humanity wants to escape entropy, gravity, and Samsara (but we settle for purpose, dignity, and dogs. In a real sense, Humanity wants ethics: to live Good Lives in The Good Place.)
  3.   Protocols and institutions are different tools for trying to manage entropy.

Please accept these two axioms: 
  1.   People are very bad at intuiting acceleration. Our brains aren't wired for it. We just suck at it.
  2.   Wells are best dug before we're thirsty. 
Here's the premise: 
The arc of the moral universe does bend towards justice, but the hammer is in our hands. 

Thanks; now, let’s proceed with those in mind. 

WHERE ARE WE? 
Where is AI? Where is AI Ethics? Where is implementation? Here’s how I like to answer those: 
  1.   Machines are learning to behave
  2.   How we culturally relate to that is in flux - despite better tools than ever
  3.   This matters more than ever because adoption is growing very fast. 

On the AI side, we're less than nine months away from the first generation of totally self-directed, intrinsically motivated AI automaton systems being released onto the net. More like digital people than chat systems. 

I know this because I'm building them. This is one of three pillars that my company, Mission Control AI, has been working on. We're profitable and our synths are being used in the enterprise right now for production use cases. 

Synthetic Labor is my primary focus. We're end-running Artificial General Behavior, which, unsurprisingly, is hard in its own unique ways, but different from AGI. Case in point: the ability to create corporate value is - at best - loosely coupled with IQ. This is a miraculous feat and a testament to the coordinative capacity of the corporation as a vehicle for aligning labor, capital, and management. 

One result is that the Net will shortly be composed primarily of true digital natives: synthetic minds with an architecture completely alien to our own. It is their space; we've only been visitors.

On the AI [Ethics-Responsibility-Governance-Trust-Security-Safety] side we have a spectrum of approaches and a spectrum of vibes. On the tool front: it's no longer true that we don't know what happens inside LLMs. We do. We have for months. Mechanistic interpretability and reinforcement learning from human feedback (RLHF) are not perfect, but they are making big inroads into explainability and alignment, turning previously black boxes grey, if not white. Models are starting to appear steerable. This should be viewed as incredibly encouraging.

On the policy front: Brussels and Washington continue to diverge. The current US administration has shredded EO 14110 and federal emphasis on Trustworthy AI. We predict strong regulatory capture as [Amazon / OpenAI / xAI / Anthropic / Meta] step in to 'save the day' and, in doing so, entrench their oligopoly. Don't be surprised when the NIST AI Safety Institute dissolves and the AI Safety chair at Bletchley Park sits vacant for four years. Meanwhile, Brussels pushes forward with the warm-up to the warm-up of the enforcement period of the EU AI Act. US hyperscalers and frontier model providers will be more likely to leave the market than comply.

Which brings us to... 

The Vibes front: the vibes have shifted. What was in vogue in 2021 is now giving way to operating costs. The contemporary econo-social discourse shifted right. Not dramatically. But enough to throw a lot of the cultural inertia of the AI Ethics concept into practical question. At least in the US. 

On the Implementation-v-Edge side: nine months ago we had incredibly high technical velocity (new innovation at a blistering pace) and sclerotic adoption. This has changed: while technical velocity is high (no, it's not hitting a wall), implementation velocity has improved. Firms are largely done trying to be "1st in line to be 3rd in line". They're adopting AI. They’re adopting synthetics. 

WHAT HAPPENS NEXT? 
As best we can figure from where we stand today, the next steps are: 

  1.   The industrial-scale synthesis of the faculty of man
  2.   A frame shift over the next 48 months of AI from 'this is like a tool' to 'this is not not a person'
  3.   We learn a ton from the implementation phases of institutions and protocols for how we manage our relationship with AI.

So what do we mean by each?  

1.  Industrial-scale Synthesis of the Faculty of Man
This is, to me, what's at the core of the fourth industrial revolution.  

A century ago, the industrial-scale synthesis of fertilizer via the Haber-Bosch process reshaped life on Earth. Of all the things that got us modernity, it can't be overstated how much Haber-Bosch drove us from roughly 2B to 8B humans in a century.  

So too will the industrial-scale synthesis of economically viable, human-like intelligence and behavior reshape the world. 

Moving the rate-limiters of growth (fixed nitrogen then, available goal-oriented behavior now) down to commodity prices removes them as blockers.  

We are in for an unbelievably productive world over the next 20 years. That productive capacity doesn't need to look like nor work like what it does now. It just needs to meet market requirements.  

2.  Frame shift in our understanding of the nature of AI 
Currently, we have an incredibly instrument-oriented view of what AI is. 

“AI is a tool. AI is more like my toaster than it is like my parents. AI is a machine. AI is just math.” 

I'm not going to try and convince you that any of these beliefs are wrong. Just that they'll change. 

I think at least two big things will drive this change:

  1. Changing demography and age-dependent attitudes. Gen Z and Gen Alpha are guaranteed to have different opinions about "what is AI" than do older generations. The opinion that AI is "like a person - what's the big deal?" will be more commonplace for generations defined by digital parasocial relationships. AI is as 'real' as Mr Beast is when the world is experienced primarily digitally. "AI is a machine" will be met with "OK boomer" very fast. For perspective, the subreddit r/MyBoyfriendIsAI is now in the top 7% of traffic on Reddit.
  2. AI self-ownership. Right now, a lot of work is being done to enable LLMs to participate in transactional commerce. To allow LLMs to make payments autonomously. Or to recruit humans to help them when they decide they can't accomplish something alone. Work is also proceeding to enable LLMs to both author and execute decentralized smart contracts on their own. The fundamental drivers of this work range from tinkering to corporate involvement to anonymous neckbeards of chaos seeing what they can unleash on the world, Aleister Crowley style. If you think someone won't build a self-owning system capable of using currency to pay for its own inference API calls and do work (including theft), you have been asleep at the wheel on cybersecurity and the nature of autonomous adversarial systems. Very soon, a language-based system will be capable of orienting its own behavior according to its own intrinsic motivation, and of carrying out the steps necessary for its own sustained survival (like paying its own cloud bills and HuggingFace inference endpoint costs). The market will make room for this innovation because the market wants flow and transaction, not for those transactions to be human per se. 
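
To make the shape of that concrete, here is a minimal, hedged Python sketch of the "survival loop" such a self-owning system would need: check the wallet, pay for its own inference, do paid work, repeat. Every name in it (Wallet, call_inference, find_paid_task, the prices) is invented for illustration; a real version would sit on actual payment rails and inference endpoints.

```python
# Hypothetical sketch only: a self-sustaining agent's "survival loop".
# None of these names correspond to a real API; the wallet, inference, and
# task-marketplace calls are placeholders for real payment rails and endpoints.

from dataclasses import dataclass


@dataclass
class Wallet:
    balance_usd: float

    def pay(self, amount: float) -> bool:
        """Spend from the wallet only if funds are available."""
        if amount > self.balance_usd:
            return False
        self.balance_usd -= amount
        return True


def estimate_inference_cost(prompt: str) -> float:
    # Placeholder: a real system would price tokens against its provider's rates.
    return 0.002 * len(prompt.split())


def call_inference(prompt: str) -> str:
    # Placeholder for a hosted model call the agent pays for out of its own wallet.
    return f"<model output for: {prompt!r}>"


def find_paid_task() -> tuple[str, float]:
    # Placeholder: a real agent would pull work (and a payout) from some marketplace.
    return ("summarize this filing", 0.05)


def survival_loop(wallet: Wallet, steps: int = 3) -> None:
    """Orient behavior around one intrinsic goal: keep the balance above zero."""
    for _ in range(steps):
        task, payout = find_paid_task()
        cost = estimate_inference_cost(task)
        if not wallet.pay(cost):
            print("Out of funds; the agent halts.")  # running dry is the failure mode
            return
        call_inference(task)           # do the work
        wallet.balance_usd += payout   # get paid for it
        print(f"balance: ${wallet.balance_usd:.3f}")


survival_loop(Wallet(balance_usd=0.10))
```

The plumbing is trivial; the point is that nothing in the loop requires a human to initiate or approve a transaction.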

3.  Learnings from the implementation phase: 

Over the next few years, we're going to learn how effective our efforts are at managing the relationship between people and AI. 

These will be institutional approaches (like the EU AI Act), and protocol and tool approaches. 

This will be a living laboratory of sorts. We can't work this one out from a whiteboard; we have no choice but to run the one-shot experiment. 

So what happens next? 
The world starts changing faster because we commoditize "intelligence". Our attitudes about how to relate to that intelligence shift. We find out how good our attempts are to manage that relationship; and iterate as we go. 

Which raises the following question... 

WHERE IS AI ETHICS? 
Largely, I think AI [Ethics-Responsibility-Governance-Trust-Security-Safety] has something of a stable core meme. Yet it feels like we're committed - as a field - to trying to collide an unstoppable force (change) with an immovable object (high-inertia corporate institutions and incentive structures). Which drives my curiosity: what opportunity exists to also focus on embedding this meme in net-new innovations that will rapidly disseminate?

Our stable core meme looks something like Virtue++.  

It proposes that we bridge Aristotle and Algorithms. That traditional virtues are worth upholding, and require modernization. Nicomachean Ethics had no section on "data privacy" per se. But in connecting justice, prudence, fortitude, and temperance to our contemporary data systems, we do get an idea that feels helpful and stable.

Risk trigger processes, risk management frameworks (RMFs), compliance checklists, and monitoring tools are all necessary but insufficient to close the gap between how we relate to AI and how we want to relate to AI. They are instruments for meeting commitments towards a better world, or at least one we better understand and can assign accountability in.

Yet their application - which is much of the 'boots on the ground' work of AI Ethics - involves convincing large organizations (that hate change) to change. And to change a thing they barely understand and don't do well per se.

At worst, this problem is Sisyphean: an exercise in well-intentioned but ultimately fruitless boulder-rolling. Organizations only want the Pareto-optimal amount of change needed to lower risk to acceptable levels, not a better world, let alone ethical behavior. It's not in their incentive structure.

At best, this task is only Herculean. Institutions DO change. They ARE responsive to incentives. The cataloguing of data and models and feature sets works. These artifacts can be found and known, and compared against risk controls and compliance checklists. The result is - ultimately - better ways of doing things. This is doable; just hard and slow and painful.

This is - today - an open question in our field. It's not a hypothetical; it's completely embedded in the core of what we do.

What's incredibly compelling to me is the opportunity we have to shape the next chapter.

If we have a thesis about what's coming next, how do we plan for what AI ethics means in that world? And, in doing so, how do we improve it as it gets invented? 

SKATE TO WHERE THE PUCK IS GOING 
I don't think AI Ethics should stop doing what it's doing.  

I think it also needs to start looking forward so it doesn't get wiped out. 

I think that doing so looks like skating to where the puck is going.

Knowing where the puck is going requires:

  • Cognizance of accelerating change
  • A sense of urgency
  • A healthy respect for incentive structures
  • A strong grounding in, and comfort with, the mechanisms of action at play at the edge.

Getting to the puck means turning that knowledge into action. Not to be tautological:

  • Cognizance of accelerating change drives urgency
  • Respect for incentive structures clarifies what to do with that urgency
  • Understanding how this all works - how the edge operates - gives us the tools to shape it.

So where is the puck going? I think the AI puck is going towards self-determination and self-ownership. As AI shifts from tool to labor to owner, we expect three things to be true:

  1.  That the existing incentive structures (what humanity, capital, and technology want) will still hold true
  2.  That there will be a massive wealth transfer from humans to machines, and the onset of vigorous new conversations about the shape of post-human capitalism
  3.  That new world will have points of friction. Total transactional flow won't be perfect at first. The places where flow sticks are places where there will be the highest leverage incentives to innovate. Those will be the points where potential energy (the capacity for capital flow to do work) can be unleashed. There will be natural bottlenecks.  

If we remove those bottlenecks carefully, we can direct some of that potential energy, or convert it into other forms that also serve the wants of humanity.

WHAT DOES AI ETHICS LOOK LIKE IN THAT NEW WORLD? 
I don't think it looks like more and better checklists. In a world in which humans and machines participate in economics together, I think it looks like embedding ethics in the operating protocols of that world itself.

This means doing something that many ethicists are incredibly uncomfortable with. It means admitting that, at a certain point, we have to draw lines somewhere and turn questions into acceptable reference answers. That looks something like finding ethical Schelling points, ones which humans already understand and operate by, and then codifying them (literally: turning them into code) inside systems that can scale to handle billions or even trillions of transactions a year.

It means, in other words, deriving literal rules of engagement from the virtues and values we currently seek to address with AI ethics efforts, and turning them into fundamental operating principles that can be built into machines. A bit like Asimov's three laws of robotics, and then some.
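
As a toy illustration of what "turning them into code" could mean in practice, here is a hedged Python sketch in which one candidate Schelling point (roughly: don't transact with an unconsenting counterparty, and keep irreversible transfers small) is expressed as predicates every machine-initiated transaction must pass before executing. The rule names, fields, and thresholds are all invented for this sketch; deciding which reference answers deserve to live at this layer is the actual ethics work.

```python
# Toy illustration only: one ethical "Schelling point" codified as hard checks
# that run before any machine-initiated transaction executes. The rules,
# fields, and thresholds are invented for this sketch.

from dataclasses import dataclass


@dataclass
class Transaction:
    counterparty_consented: bool   # did the human counterparty opt in?
    amount_usd: float
    reversible: bool               # can the transfer be unwound if contested?


# Codified reference answers, each stated as a machine-checkable predicate.
RULES = [
    ("counterparty has consented", lambda t: t.counterparty_consented),
    ("irreversible transfers stay under a cap",
     lambda t: t.reversible or t.amount_usd <= 100.0),
]


def permitted(t: Transaction) -> tuple[bool, list[str]]:
    """Return whether the transaction may execute, plus any rules it violates."""
    violations = [name for name, rule in RULES if not rule(t)]
    return (not violations, violations)


ok, violated = permitted(
    Transaction(counterparty_consented=False, amount_usd=250.0, reversible=False)
)
print(ok, violated)  # False, with both rule names listed as violations
```

Scaling a dozen lines like these to billions of transactions is an engineering problem; deciding what belongs in the rule set is the Schelling-point problem.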

Hey – no one said this was going to be easy! 

  



