
Tackling AI bias in hiring: tools and techniques for HR

Article by James Flint

AI is reaching every area of business, but with their longstanding and widespread use of applicant tracking systems (ATS), HR departments are already well ahead of the curve. More talent assessment and talent management tools powered by machine learning are being deployed by overstretched personnel teams every day, especially now that candidates are themselves using AI tools to prepare CVs and assist in interviews at scale. But how do companies know that the systems they’re using are giving them genuinely unbiased insights into candidate capabilities and fit?

It’s not an academic question. Regulators around the world are increasingly demanding that the use of AI in HR be auditable and explainable – and threatening hefty fines if it’s not. The EU AI Act is leading the charge, classifying this use case as “high-risk” and requiring conformity assessments, usage logs and human oversight for any deployments from August 2026. The GDPR’s Article 22, which gives candidates the right to an explanation and human review of automated decisions that materially affect them, also has a bearing here. Even in the US, where a federal data protection law is still not a reality and AI regulation generally is being rolled back by the Trump administration, state- and city-level laws and statutes can still apply: New York City’s Local Law 144, for example, requires annual independent bias audits and candidate notices when automated employment decision tools (AEDTs) are used. South Africa’s POPIA and India’s DPDP Act give candidates similar rights to contest automated HR decisions, too.

There are three key areas to consider to make sure you’re on the right side of these regulations – and that your AI tools deliver the right candidates for the right positions, every time.

Pre-deployment

  • If you’re building a model, provide transparent documentation of the provenance of the model’s training data and prepare appropriate model cards. The Datasheets for Datasets guidelines, drawn up by such influential industry figures as Timnit Gebru, Jamie Morgenstern and Kate Crawford, provide a structure for this, and a platform like Securiti (which comes with a built-in library of templates for common services) will help you create and track your model cards.
  • If you’re buying a model or using open source, choose models and/or vendors that support interpretable outputs, and use SHAP values, LIME explanations or similar techniques to help trace which features of a CV drove which parts of a recommendation or evaluation. This will give you a feel for how the model is weighing the evidence (there’s a short SHAP sketch after this list).
  • If you’re providing training data from your own business records, whether to build a model from scratch, fine-tune an existing one, or just to supply RAG reference data, make sure to collect a dataset that’s representative of the people you’d like to hire. If the dataset is skewed, rebalance it with synthetic data or data from another source (a simple rebalancing sketch follows this list). Put together a review panel with members drawn from HR, legal, data science and the wider workforce to oversee this process, so that you get a balanced view.
  • Run a bias audit on any vendor model or internally built algorithm before it’s used with real applicants. Several open-source toolkits, such as IBM’s AI Fairness 360, Google’s What-If Tool and Microsoft’s Fairlearn, will let your data team measure disparate impact across gender, ethnicity, age and more. These tools also let you change key input variables (e.g. age or gender) for a given data point and see how the model’s output shifts; running counterfactual experiments like this shows you how robust and objective the model is (see the Fairlearn sketch below).
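
To make the interpretability point concrete, here is a minimal sketch of the kind of check a data team might run with SHAP. Everything in it is illustrative: the feature names, the synthetic “candidates” and the scikit-learn model are stand-ins rather than the output of any real screening tool.

    import pandas as pd
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Stand-in screening data: 1,000 synthetic "candidates" with invented CV features
    X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
    X = pd.DataFrame(X, columns=["years_experience", "skills_match",
                                 "education_level", "employment_gap"])

    model = GradientBoostingClassifier().fit(X, y)

    # SHAP values estimate how much each feature pushed each candidate's score up or down
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # One row per candidate, one column per feature: large absolute values flag
    # the features driving the recommendation
    print(pd.DataFrame(shap_values, columns=X.columns).head())

In a real deployment you would run the same breakdown on the production model and feature set, and look at individual borderline candidates as well as the aggregate picture.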
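
And here is a hedged sketch of the rebalancing step, using simple oversampling as a stand-in for the synthetic-data or extra-sourcing approaches mentioned above; the “gender” column and group labels are assumptions for illustration only.

    import pandas as pd
    from sklearn.utils import resample

    def rebalance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
        """Oversample every group in `group_col` up to the size of the largest group."""
        target = df[group_col].value_counts().max()
        parts = [
            resample(group, replace=True, n_samples=target, random_state=0)
            for _, group in df.groupby(group_col)
        ]
        return pd.concat(parts).sample(frac=1, random_state=0)  # shuffle the rows

    # Toy example: a training set skewed 9-to-3 towards one group
    records = pd.DataFrame({"gender": ["F"] * 3 + ["M"] * 9,
                            "hired":  [1, 0, 1] * 4})
    print(rebalance(records, "gender")["gender"].value_counts())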
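
Finally, a minimal sketch of the bias audit itself, using Fairlearn: it computes per-group selection rates and a demographic parity ratio, then runs a crude counterfactual check by shifting one input feature. The model, features and protected attribute are placeholders; your data team would substitute the real candidate data and the attributes relevant to your jurisdiction.

    import numpy as np
    import pandas as pd
    from fairlearn.metrics import MetricFrame, demographic_parity_ratio, selection_rate
    from sklearn.linear_model import LogisticRegression

    # Placeholder candidate data and a stand-in screening model
    rng = np.random.default_rng(0)
    X = pd.DataFrame({"skills_match": rng.random(500),
                      "years_experience": rng.integers(0, 20, size=500)})
    gender = rng.choice(["F", "M"], size=500)      # protected attribute (not a model input)
    y = (X["skills_match"] > 0.5).astype(int)      # stand-in hiring label

    model = LogisticRegression().fit(X, y)
    preds = model.predict(X)

    # Disparate impact: how the selection rate differs between groups
    rates = MetricFrame(metrics=selection_rate, y_true=y, y_pred=preds,
                        sensitive_features=gender)
    print(rates.by_group)
    print("Demographic parity ratio:",
          demographic_parity_ratio(y, preds, sensitive_features=gender))

    # Counterfactual check: nudge one input and see how far the scores move
    # (in practice you would also flip the protected attribute or its known proxies)
    X_shifted = X.assign(years_experience=X["years_experience"] + 5)
    delta = model.predict_proba(X_shifted)[:, 1] - model.predict_proba(X)[:, 1]
    print("Mean score shift after +5 years' experience:", delta.mean())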

Deployment

  • Set up your workflow so that an experienced recruiter reviews the AI’s scoring, validates any hiring shortlists and is able to override the AI if necessary. And document every decision: such logs will be essential for both internal and external audits (a sketch of a minimal decision log follows).
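
As a rough illustration of what “document every decision” can mean in practice, here is a minimal sketch of an append-only decision log. The fields are assumptions rather than a regulatory template, and a CSV file stands in for whatever audit store your ATS or data team actually provides.

    import csv
    from datetime import datetime, timezone

    def log_decision(path, candidate_id, ai_score, ai_recommendation,
                     reviewer, final_decision, override_reason=""):
        """Append one screening decision, including any human override, to the log."""
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.now(timezone.utc).isoformat(),
                candidate_id, ai_score, ai_recommendation,
                reviewer, final_decision, override_reason,
            ])

    # Example: a recruiter overrides a borderline AI rejection
    log_decision("screening_log.csv", "cand-0042", 0.48, "reject",
                 "j.smith", "advance", "relevant portfolio work not captured in CV")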

Monitoring & Governance

  • Schedule periodic re-audits (this may be a regulatory requirement: NYC Local Law 144 mandates annual bias audits, for example), and track fairness metrics alongside accuracy in production dashboards (a small monitoring sketch follows this list). When drift appears, retrain or retire the model. And don’t forget to publish a summary of your findings in your ESG or DEI report and on your website, to show the care you’ve taken and to help build trust with customers and stakeholders.
  • And lastly, the easy bit (or it will be, if you’ve done all the previous steps): conduct a data protection impact assessment (DPIA) and, if necessary, an AI conformity assessment; we can advise on these in more detail here at Securys, if you get in touch.
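
For the monitoring point above, here is a hedged sketch of how a fairness metric could be tracked alongside accuracy on each batch of production decisions. The column names, the “gender” attribute and the 0.1 drift threshold are illustrative assumptions to be replaced with your own.

    import pandas as pd
    from fairlearn.metrics import demographic_parity_difference
    from sklearn.metrics import accuracy_score

    def audit_batch(batch: pd.DataFrame) -> dict:
        """Compute accuracy and a fairness gap for one batch of screening outcomes."""
        metrics = {
            "accuracy": accuracy_score(batch["final_decision"],
                                       batch["ai_recommendation"]),
            "dp_difference": demographic_parity_difference(
                batch["final_decision"], batch["ai_recommendation"],
                sensitive_features=batch["gender"]),
        }
        # Flag drift for the dashboard when the parity gap exceeds the chosen threshold
        metrics["needs_review"] = metrics["dp_difference"] > 0.1
        return metrics

    # Toy batch: the AI's recommendations against the final human decisions
    batch = pd.DataFrame({
        "final_decision":    [1, 0, 1, 0, 1, 0],
        "ai_recommendation": [1, 0, 1, 1, 0, 0],
        "gender":            ["F", "F", "F", "M", "M", "M"],
    })
    print(audit_batch(batch))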

And if all this sounds complicated and not really that relevant to you because you’re using an off-the-shelf platform like Eightfold, Harver or HireVue, don’t forget that, as the data controller, you’re still responsible for the behaviour of those platforms and for any adverse effects they have on the people you’re hiring. So use the above as a checklist of things to ask your point of contact; they should be able to answer all these points, evidence them, and do so happily. If they can’t, it may be a sign that you need to look elsewhere.
