Using AI in HR? It’s time to get your act together

Written by James Flint | Jan 23, 2025 4:42:40 PM

The excitement around the breakthroughs in AI over the last eighteen months or so, combined with the uncertain economic climate, has driven HR departments in organisations of all stripes and scales to adopt the technology at pace.

It’s an area to which the new tools are particularly suited – but one in which they also introduce significant risks if not properly aligned with regulatory guidelines. Those guidelines used to be defined principally by the data protection principles of the GDPR, with particular weight given to the rules on “automated decision-making” in Article 22.

But as of 2 February 2025, there’s a new kid on the regulatory block: the EU AI Act, which – rather ominously for the productivity mavens in Personnel – classifies AI systems used in recruitment, HR and worker management as “high-risk”. And for “high-risk” systems it appears to have reserved a seat or three in a special circle of legislative hell.

AI in the HR lifecycle

Although AI technology is evolving fast, even in its current incarnation it’s a really good fit for tasks throughout the HR lifecycle. In recruitment, chatbots can answer first-touch candidate queries, help schedule interviews and provide progress updates on applications in a more personable way than a traditional website. When CVs come in, an AI can screen and rank them in – in theory – a less biased way than an overstretched human team.
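To make that concrete, here’s a minimal sketch of what LLM-based CV screening might look like, assuming an OpenAI-style client and API key. The model name, rubric and JSON output format are illustrative, not a recommendation:

```python
# Minimal sketch of LLM-based CV screening. Assumes the OpenAI Python SDK
# and an OPENAI_API_KEY in the environment; the model name, rubric and JSON
# output format are illustrative, not any vendor's recommended approach.
import json
from openai import OpenAI

client = OpenAI()

def score_cv(job_description: str, cv_text: str) -> dict:
    """Ask the model for a structured 0-100 fit score with a short rationale."""
    prompt = (
        "You are screening CVs. Score the candidate 0-100 for fit against the "
        "job description, using only evidence about skills and experience. "
        "Ignore names, gender, age and nationality.\n\n"
        f"JOB DESCRIPTION:\n{job_description}\n\nCV:\n{cv_text}\n\n"
        'Reply with JSON only: {"score": <int>, "rationale": "<one sentence>"}'
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # minimise run-to-run variance, which helps auditability
    )
    # A production system would validate this output rather than trust it blindly.
    return json.loads(resp.choices[0].message.content)

# Ranking a pile of CVs then becomes a one-liner:
# ranked = sorted(cvs, key=lambda cv: score_cv(jd, cv)["score"], reverse=True)
```

The point is how little code it takes – which, as we’ll see below, is exactly why the regulators are paying attention.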

On the outreach side, AIs can crawl job boards, social and professional networking sites and internal databases to find candidates with particular skill sets. And when those candidates are through to the final stages, AI can help assess their fit for the role and the culture by conducting psychometric testing.

Once someone has been given a job, AI can automate onboarding administration: verifying documents, designing and running training, even setting up devices and work schedules.

While they’re at work, AI can analyse internal surveys and feedback data to monitor employee sentiment and spot trends, both for individuals and for whole cohorts. It can also prompt and provide ongoing education and career development resources, which can significantly improve employee retention.
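Here’s a minimal sketch of that kind of sentiment trend-spotting, assuming the Hugging Face transformers library. The default English sentiment model it downloads is generic and would need validating against real workplace language before any decisions rested on it:

```python
# Minimal sketch of sentiment trend-spotting on free-text survey answers.
# Assumes the Hugging Face `transformers` library; the default sentiment
# model it downloads is generic English, not tuned for workplace language.
from collections import defaultdict
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def sentiment_by_team(responses):
    """responses: iterable of (team, free_text) pairs -> mean signed score per team."""
    scores = defaultdict(list)
    for team, text in responses:
        result = classifier(text)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
        signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
        scores[team].append(signed)
    return {team: sum(vals) / len(vals) for team, vals in scores.items()}

survey = [
    ("ops", "I feel supported by my manager."),
    ("ops", "Workload has been crushing since the restructure."),
    ("sales", "Great quarter, great team."),
]
print(sentiment_by_team(survey))  # e.g. {'ops': -0.0..., 'sales': 0.9...}
```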

Evaluating performance data, whether that’s around productivity or health and safety, is another area where the technology is already in widespread – and often controversial – use. Less controversially, for all the talk of the risks of AI bias, AI is very useful for removing the kind of unconscious human bias that often creeps into performance reviews. It can do this with job postings and hiring and retention data too, actively contributing to diversity and inclusion efforts.
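As a flavour of what checking hiring data for bias can look like in practice, here’s a minimal sketch of the “four-fifths rule”, a common first-pass screen for adverse impact in a hiring funnel. The group labels and numbers are invented for illustration; real monitoring would pair this with proper statistical testing and legal advice:

```python
# Minimal sketch of the "four-fifths rule", a common first-pass screen for
# adverse impact in a hiring funnel. Group labels and numbers are invented;
# real monitoring needs proper statistical testing and legal input.
def selection_rates(funnel):
    """funnel: dict mapping group -> (hired, applicants)."""
    return {g: hired / applied for g, (hired, applied) in funnel.items()}

def four_fifths_flags(funnel, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the best group's."""
    rates = selection_rates(funnel)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

funnel = {"group_a": (24, 120), "group_b": (9, 90)}
print(four_fifths_flags(funnel))  # {'group_b': 0.5} -> potential adverse impact
```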

The big but

With all that on offer, it’s not surprising that people people are positive about getting these new tools deployed. But there is, of course, a but, and that but comes as the price of this success. Because if AI can be used so readily in all these areas, what happens when it starts underperforming, or gets things flat-out wrong?

A big impact on people’s lives and careers. That’s what.

If the AI adds bias instead of removing it, hallucinates candidates’ qualities instead of assessing them, or mis-analyses cohort data instead of correctly identifying patterns within it, the results can be catastrophic for employer and employee alike. The wrong people can be hired, the wrong teams reprimanded, the wrong ads posted – all of which damages company culture at best, and can end in expensive litigation, PR disasters and worse once those effects filter down into people’s day-to-day lives.

Acting up

This is why the EU AI Act has added recruitment, HR and worker-management applications of AI to its “high-risk” category, where they sit alongside things like AI in medical devices, autonomous vehicles, law enforcement, biometric identification and critical infrastructure. That can seem a bit surprising at first – if you haven’t gone through the thought process we’ve just gone through above and listed out the sheer breadth of ways that AI in HR can impinge on the progress of people’s careers and, therefore, the viability of their livelihoods.

And the thing about being classified as “high-risk” is that it comes with responsibilities. If you’re creating AI systems for high-risk use cases, this makes you an “AI provider”, and AI providers are subject to a whole list of regulatory obligations, including (but not limited to):

- implementing an appropriate risk management process
- using data sets that are fit for purpose and free of bias
- maintaining technical documentation and human oversight
- completing something called a conformity assessment, which is a bit like a data protection impact assessment (DPIA) for AI
- registering your model in the official EU database of high-risk AI systems
- monitoring and correcting your system for performance and safety after it’s been deployed.

If that sounds onerous, it is. What’s even more onerous is that you will be classified as an AI provider and charged with all these responsibilities even if you’re just taking an existing model – say, an open-source large language model (LLM) like Mistral or Llama – and fine-tuning it with your own datasets or adding some retrieval-augmented generation (RAG). So if you’ve got a proactive IM department that’s been building you a few fancy tools with all this new tech, beware – you might be in for more paperwork than you bargained for.
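For a sense of how thin that layer can be, here’s a minimal sketch of the retrieval step in a RAG pipeline over internal HR documents. It assumes the sentence-transformers library; the model name and the policy snippets are illustrative:

```python
# Minimal sketch of the retrieval step in a RAG pipeline over HR documents.
# Assumes the `sentence-transformers` library; the model name and the policy
# snippets are illustrative. The point: this thin layer over an off-the-shelf
# model can be enough to make you a "provider" for a high-risk use case.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Annual leave allowance is 25 days plus public holidays.",
    "Performance reviews run twice a year, in June and December.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 1):
    """Return the k policy snippets most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since the vectors are normalised
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# The top hits would then be pasted into the LLM prompt as grounding context:
print(retrieve("when is my performance review?"))
```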

If you’re just deploying a system that you’ve bought in from elsewhere, you don’t escape scot-free. Much like the GDPR before it, the Act will still hold you to account for a list of requirements that includes: purpose limitation, human oversight, monitoring of input data, record-keeping, incident reporting, transparency to affected parties, and mitigation of bias and risks.

And all this is before we even get to the ethical considerations that both providers and users must adhere to, and with which they must ensure their systems align.

Good AI governance is not a bug, it’s a feature

If all this sounds like a drag, it is. But it’s worth it. AI is not like traditional software. Traditional software is deterministic: it does what it’s told, and when it doesn’t do what you want, that’s a problem with the programmer, not the program. AI systems, by contrast, are inherently probabilistic. That makes them much more robust than traditional software when dealing with real-world uncertainty, but it also means that their outputs are inherently uncertain.
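A toy example makes the distinction concrete. The “model” below is only a stand-in – weighted random sampling, in the spirit of the temperature-based token sampling an LLM performs internally – but it shows why the same input can legitimately produce different outputs:

```python
# Toy illustration only: the "model" here is weighted random sampling, a
# stand-in for the temperature-based token sampling an LLM does internally.
import random

def classic_software(x):
    return x * 2  # deterministic: same input, same output, every time

def toy_model(x):
    # Probabilistic: the same input can yield different outputs on different runs.
    options = [x * 2, x * 2 + 1, x * 2 - 1]
    return random.choices(options, weights=[3, 1, 1])[0]

print([classic_software(5) for _ in range(5)])  # [10, 10, 10, 10, 10]
print([toy_model(5) for _ in range(5)])         # e.g. [10, 11, 10, 10, 9]
```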

Compliance with the EU AI Act should be seen as a way of containing that uncertainty and keeping it permanently under review. The focus should be on achieving full oversight of data, fostering transparency, and designing processes that serve both the business and its candidates. This involves conducting thorough audits of existing AI systems, putting rigorous conformity-by-design practices in place for new ones, and training teams on the ethical use of AI.

By taking a proactive approach and implementing good AI governance from the outset, HR departments can avoid the pitfalls of rushed implementation and ensure that this exciting technology proves a boon for their organisations, not a menace.