HR & AI across borders
Article by James Flint
The removal from President Trump’s One Big, Beautiful Bill of the proposal to stop US states from implementing their own AI regulations has big implications for business. Despite all the emphasis on AI safety at the highest levels in Washington over the last year or two, active AI regulation at the federal level was never really on the cards, even if Trump had lost the election. Both parties have failed to introduce a federal privacy law; neither was likely to grasp the far knottier and more controversial issue of AI governance and get anything on the subject through Congress this term.
But the states are likely to act: many of them already have privacy laws in force, and many are no doubt keen to extend their regulatory reach into artificial intelligence, if only to register a Democratic objection to all things MAGA. If and when that happens, it will make it harder (or at least less worthwhile) for the Trump administration to pressure the Europeans into rolling back the EU AI Act or diluting the GDPR: the impact of a successful attempt would be much reduced if a third of US states had just enacted much the same legislation.
Since Trump’s victory last November, very few businesses have thought hard about adding serious AI governance to their existing data protection regimes. Not only is the majority of those implementing AI still at the experimental, proof-of-concept stage with a technology that is evolving too fast to keep up with, let alone commit to long term, but the mood music from the US has been one of discouraging regulation and thereby (supposedly) encouraging innovation at all costs. Who wants to bother regulating their own efforts if the big players like OpenAI, Microsoft, xAI and Meta all get to operate rule- and consequence-free?
But if the states start bringing in rules, that will change. California, where most of the companies in question are based, already has over 25 AI-related laws on the books; New York has four, Washington has six, Texas has four, and many other states are following suit. Even in the USA, therefore, the EU AI Act soon won’t seem like such an outlier, whatever noises are coming out of the White House.
The picture is similar around the world: the Brazilian Senate approved an AI bill last year that sets out rights and obligations for developers, deployers and distributors of AI systems; Australia has legislated against AI-related misinformation and deepfakes; Canada and Jamaica are actively working on national AI policy frameworks; and China has several laws in place that directly regulate AI.
And if your organisation is using AI in ways that involve sensitive data, or deploying it to make decisions that have a material effect on people – using AI to evaluate candidates’ body language, facial expressions and speech patterns in video interviews, for example – then you don’t need to wait for AI-specific regulations at all: those use cases are already captured by existing data protection law and need to be properly risk-assessed and documented by your data privacy team.
All of which is to say that, despite a period of pushback, AI regulation is coming. If you’re experimenting with AI in your business, especially if you’re handling data across different countries, the time to think about governance is now, not after you’ve put your new agentic AI workflows in place.