AI agents are coming to recruitment. Are you ready?

Article by James Flint

In March, Securys sponsored the HR Vision conference at the impressive Courthouse Hotel in Shoreditch. We ran a workshop called “AI in recruitment: the pitfalls and potential”. From the discussions with the delegates who kindly attended the workshop, and from the other presentations and the general conference chat, it seems there’s a lot of buzz about the potential.

Many of the HR professionals we spoke to feel all but overwhelmed by the sheer quantity of CVs they receive for any given job opening, as well as by the increasingly onerous demands of personnel management more generally. They were very alert to the potential of AI to reduce their workloads… but perhaps less alert to the implications of using this exciting new technology in this area.

They’re not the only ones. By chance, Manus AI launched its AI agent product at the same time as the conference, and within a couple of days its impressive capabilities had set the Internet alight. Manus uses a virtual PC, hosted in the cloud, to access the web and run multiple open-source AI tools in parallel, so it can plan and carry out complex tasks with a bespoke combination of web search, AI analysis, content generation, coding and website deployment.

It is truly impressive, and analysts were quick to label it “the next DeepSeek” after the Chinese open-source model that shook the complacency of industry leaders in January (and rocked the markets accordingly). But the very first example of its utility given by one of the founders in his promotional video was that of using the tool to analyse a collection of job applications and choose “the best three”.

There was no acknowledgement at all, however, that this activity is about to become subject to significant regulation, in Europe at least. The EU AI Act classifies the use of AI in recruitment and employee management as a “high-risk” activity. Even if you’re just deploying someone else’s AI solution (as opposed to building your own) there are requirements to put in place:  

  • clear risk assessment and mitigation plans;
  • audits of datasets and/or outcomes to prevent bias;
  • logging and documentation for accountability;
  • transparency and explainability documentation;
  • human oversight to prevent harmful automated decisions.

Monitoring the outcomes of an agentic system like Manus, which avails itself of multiple open-source AIs (and some closed-source too) in carrying out instructions, is no simple task. But if you don’t put the listed safeguards in place, how are you going to make sure that it doesn’t start choosing women over men, or people who live in one postcode over another, or any other number of possible (and oft-experienced) instances of machine learning bias or model drift? And how are you going to protect yourself – by reducing your liability – if it does?

This is what good data and AI governance is about: making sure you have these processes and practices in place, making you more likely to get good and dependable results out of whatever AI you’re deploying, and making it more likely that your overstretched HR team is able to see some productivity gains from using it.

Given that these days they’re in an arms race with candidates using AI to generate and send out CVs, that’s not to be sniffed at. 
