Executive Summary

Artificial intelligence (AI) is rapidly transforming recruitment and human resources (HR) practices across Europe and globally. Tools such as applicant tracking systems, resume screeners, chatbots, and video interview platforms are now integrated into hiring pipelines that previously relied on human judgment alone. While these systems promise efficiency, they also introduce new risks of bias and non-compliance with emerging legal frameworks. The European Union’s Artificial Intelligence Act (AI Act), adopted in 2024, explicitly designates AI used in employment and worker management as “high-risk.” This legal classification elevates hiring algorithms into a category of scrutiny once reserved for medical devices or aviation systems. Employers are no longer judged only on whether they hire effectively, but also on whether their processes meet strict standards for fairness, transparency, and governance (European Commission, 2021; European Parliament, 2024).

This white paper examines how bias manifests in AI-based hiring, with particular attention to the experiences of persons with disabilities; outlines the regulatory requirements that frame organizational responsibilities; and proposes a governance and auditing approach designed to prepare institutions for compliance.

Introduction

Hiring decisions profoundly shape both individual careers and the collective composition of the workforce. In the last decade, organizations have increasingly turned to automated systems to improve efficiency and consistency in recruitment. In the United States and across the European Union, AI-enabled applicant tracking systems, natural language processing engines that parse resumes, predictive algorithms that estimate candidate success, and video interview analysis tools are gaining traction in recruitment practice. Although adoption in Central and Eastern Europe lags behind leading Western economies, the trajectory points toward wider diffusion of these technologies, supported by both corporate demand and government strategies to foster AI innovation (OECD, 2025).

The risks associated with these systems are increasingly evident. Algorithms trained on historical hiring data inherit the prejudices embedded in those datasets. If prior managers undervalued candidates with career interruptions, algorithms trained on this history will also penalize applicants with employment gaps. If past hiring favored narrow educational or cultural backgrounds, predictive systems will replicate those biases systematically (Raghavan et al., 2020). The consequences are not trivial. A single biased filter can eliminate entire groups of qualified candidates before any human recruiter ever reviews their application.

Among those most affected are people with disabilities. Disability is often invisible in datasets, meaning algorithms are rarely designed to accommodate the realities of disabled candidates. Resume gaps due to medical leave, differences in formatting caused by assistive technologies, or non-standard communication styles can be misinterpreted by AI systems as markers of poor suitability. As Cowgill (2023) has shown, automated hiring systems frequently misinterpret atypical signals, and the result is exclusion without intent or awareness. The ethical stakes of AI in hiring are therefore profound, but with the passage of the EU AI Act, they have also become legal and financial risks. Organizations can no longer rely on vague commitments to fairness; they must demonstrate readiness to comply with stringent requirements.

Regulatory Context

The EU Artificial Intelligence Act represents the world’s most ambitious attempt to regulate AI technologies. First proposed in 2021 and adopted in 2024, it introduces a risk-based approach to AI regulation. Systems used in employment and worker management are classified as high-risk, a designation that carries substantial obligations. Organizations deploying such systems must establish risk management processes, ensure the quality and representativeness of training data, maintain technical documentation, guarantee transparency for affected individuals, and provide meaningful human oversight over algorithmic decisions (European Commission, 2021).

The law also mandates conformity assessments, whereby high-risk AI systems are evaluated for compliance before and during deployment. Under the adopted text, non-compliance can trigger administrative fines of up to 7 percent of annual global turnover for the most serious infringements, and up to 3 percent for breaches of the obligations attached to high-risk systems, sanction levels comparable to those of the General Data Protection Regulation (European Parliament, 2024). This sets a precedent: AI in hiring is regulated not merely as a business process, but as a high-stakes socio-technical system with potential to cause significant harm.

Comparable efforts are emerging elsewhere. In the United States, New York City’s Local Law 144 requires that all automated employment decision tools undergo annual independent bias audits before they can be used to screen candidates. The U.S. Equal Employment Opportunity Commission (EEOC) has similarly issued guidance on assessing adverse impact when employers deploy software and algorithms in selection processes (EEOC, 2023). Taken together, these regulations reflect a growing consensus that algorithmic decision-making in employment must be scrutinized as carefully as safety-critical systems in medicine or transportation.
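
The adverse-impact analysis referenced in the EEOC guidance can be operationalized directly. The following minimal sketch applies the four-fifths rule to a single screening stage; all applicant and selection counts are hypothetical, and a real audit would examine every stage of the pipeline.

```python
# Minimal sketch of an adverse-impact ("four-fifths rule") check of the kind
# referenced in EEOC Title VII guidance. All counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who pass the screening stage."""
    return selected / applicants

# Hypothetical outcomes from one automated screening stage.
rate_disabled = selection_rate(selected=18, applicants=100)    # 0.18
rate_reference = selection_rate(selected=30, applicants=100)   # 0.30

ratio = rate_disabled / rate_reference
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.60

# The four-fifths rule treats a ratio below 0.8 as evidence of adverse impact.
if ratio < 0.8:
    print("Flag: disparity exceeds the four-fifths threshold; investigate.")
```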

Case Study: Disability Bias in AI Hiring

The risks of AI in hiring are best illustrated through concrete examples. Consider a medium-sized European company that implements an automated resume screening system to accelerate recruitment. The system has been trained on historical records of “successful” employees and uses those records to score incoming applicants. Despite its apparent efficiency, the algorithm begins to reproduce discriminatory patterns.

One recurring issue involves employment gaps. Applicants with disabilities often have breaks in their work histories due to medical treatment or caregiving responsibilities. While such gaps are unrelated to ability, the algorithm interprets them as evidence of unreliability and assigns lower scores. Raghavan et al. (2020) note that algorithmic hiring systems often penalize deviations from “ideal” continuous careers, creating invisible barriers for those with non-linear paths.
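
To make the mechanism concrete, consider a minimal sketch of such a screener. The weights below are entirely hypothetical; the point is only that a negative coefficient on gap length, learned from biased history, mechanically penalizes a candidate whose gap reflects medical leave rather than ability.

```python
# Hypothetical linear screener: the negative weight on gap_months stands in
# for a penalty learned from biased historical hiring data.
WEIGHTS = {"years_experience": 0.5, "skills_match": 1.0, "gap_months": -0.15}

def score(candidate: dict) -> float:
    """Weighted sum of candidate features."""
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

# Two candidates identical except for a 12-month gap due to medical leave.
continuous = {"years_experience": 8, "skills_match": 0.9, "gap_months": 0}
with_gap = {"years_experience": 8, "skills_match": 0.9, "gap_months": 12}

print(round(score(continuous), 2))  # 4.9
print(round(score(with_gap), 2))    # 3.1 -- penalized for a gap unrelated to ability
```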

Another issue arises with assistive technology. Applicants who rely on screen readers or adaptive software sometimes submit resumes with non-standard formatting. Automated parsers frequently fail to interpret these documents correctly, discarding them as incomplete or unreadable. Research by Shumailov et al. (2021) shows that technical biases in data processing can translate into exclusionary outcomes in hiring contexts.
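
A hedged sketch illustrates how this failure mode can arise. The parser below is a deliberately naive stand-in, not any real product’s logic: it splits a resume on conventional all-caps headings and rejects documents that lack them, which is exactly how a resume exported from assistive software with a different structure can be silently discarded.

```python
# Naive resume parser (illustrative only): splits on ALL-CAPS section headings
# and rejects documents where the expected sections are not found.
import re

EXPECTED_SECTIONS = {"experience", "education", "skills"}

def parse_sections(resume_text: str) -> dict:
    """Return {heading: body lines}, treating ALL-CAPS lines as headings."""
    sections, current = {}, None
    for line in resume_text.splitlines():
        if re.fullmatch(r"[A-Z ]{3,}", line.strip()):
            current = line.strip().lower()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return sections

def is_parseable(resume_text: str) -> bool:
    """Reject the resume if any expected section heading is missing."""
    return EXPECTED_SECTIONS <= set(parse_sections(resume_text))

conventional = "EXPERIENCE\n5 years...\nEDUCATION\nMSc...\nSKILLS\nPython..."
assistive = "Work history\n5 years...\nStudies\nMSc...\nStrengths\nPython..."

print(is_parseable(conventional))  # True
print(is_parseable(assistive))     # False -- same content, silently discarded
```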

Finally, video interview platforms present a subtler but equally concerning risk. AI systems designed to analyze facial expressions, vocal tone, or micro-expressions routinely misclassify candidates with mobility impairments, atypical speech patterns, or other non-standard modes of communication. Cowgill (2023) demonstrates that such tools, far from being neutral, embed strong normative assumptions about what competence “looks like” or “sounds like.” For disabled candidates, this often results in systematically lower confidence scores, regardless of their actual qualifications.

These examples illustrate that AI hiring systems can exclude candidates without any explicit discriminatory intent on the part of employers. The harm arises not from overt prejudice but from technical design choices and biased data. Left unchecked, these systems perpetuate structural inequality while giving employers the false assurance of objectivity.

Independent Audit Framework

Organizations cannot wait for regulators to expose these issues. Independent audits serve as a proactive mechanism to identify risks and strengthen governance before official assessments occur. An effective audit framework begins with scoping: mapping every AI system used in hiring and HR, including third-party tools such as off-the-shelf applicant tracking systems. Many organizations underestimate their exposure because they assume only proprietary AI counts, when in fact outsourced systems are equally covered by regulation.

The next phase is bias testing. This can involve creating synthetic test cases that simulate candidates with identical qualifications but differing indicators, such as disability status. By comparing outcomes across these cases, auditors can detect whether the system treats groups differently. Evidence from prior studies shows that such counterfactual testing is one of the most effective ways to reveal hidden algorithmic discrimination (Raghavan et al., 2020).
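
Expressed as code, such a paired test is compact. In the sketch below, the scoring function is a toy stand-in for the black-box system under audit (a real audit would call the vendor’s interface), and all field names are assumptions; the test flips a single disability-linked indicator and measures the mean score change across otherwise identical profiles.

```python
# Counterfactual (paired) bias test. `toy_screener` stands in for the
# black-box system under audit; field names and weights are assumptions.
from statistics import mean
from typing import Callable

def counterfactual_gap(score: Callable[[dict], float],
                       profiles: list[dict],
                       flip: Callable[[dict], dict]) -> float:
    """Mean score change when one protected indicator is flipped.

    A result near zero suggests parity; a large negative value indicates
    the flipped indicator systematically lowers scores.
    """
    return mean(score(flip(p)) - score(p) for p in profiles)

def toy_screener(c: dict) -> float:
    return 0.5 * c["years_experience"] + 1.0 * c["skills_match"] - 0.15 * c["gap_months"]

def add_medical_leave(profile: dict) -> dict:
    """Counterfactual twin: identical except for a 12-month employment gap."""
    return {**profile, "gap_months": 12}

profiles = [
    {"years_experience": 5, "skills_match": 0.8, "gap_months": 0},
    {"years_experience": 10, "skills_match": 0.7, "gap_months": 0},
]
print(round(counterfactual_gap(toy_screener, profiles, add_medical_leave), 2))  # -1.8
```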

Following this, compliance mapping links observed risks to specific regulatory obligations under the AI Act, GDPR, or EEOC guidance. This translation from technical findings to legal implications is essential for decision-makers. For example, a failure to provide transparency into how scores are generated may constitute a direct violation of the AI Act’s transparency provisions (European Commission, 2021).
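
One practical artifact of this phase is a living map from audit findings to the obligations they implicate. The fragment below is purely illustrative: the finding labels are hypothetical, and while the cited articles track the Act’s structure (risk management, data governance, transparency, and human oversight appear in Articles 9, 10, 13, and 14 respectively), any such mapping should be verified by legal counsel against the adopted text.

```python
# Illustrative mapping from hypothetical audit findings to the regulatory
# obligations they implicate; article references should be verified by counsel.
COMPLIANCE_MAP = {
    "scores_unexplainable_to_candidates": "AI Act Art. 13 (transparency)",
    "training_data_unrepresentative": "AI Act Art. 10 (data and data governance)",
    "no_recruiter_override_mechanism": "AI Act Art. 14 (human oversight)",
    "adverse_impact_ratio_below_0.8": "EEOC Title VII adverse-impact guidance",
}

def map_findings(findings: list[str]) -> dict[str, str]:
    """Translate technical audit findings into the obligations they implicate."""
    return {f: COMPLIANCE_MAP.get(f, "unmapped -- requires legal review")
            for f in findings}

print(map_findings(["scores_unexplainable_to_candidates",
                    "adverse_impact_ratio_below_0.8"]))
```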

Finally, governance planning ensures that organizations embed continuous oversight. This includes drafting internal policies for AI deployment, clarifying lines of accountability, and training HR staff to critically evaluate algorithmic recommendations rather than accept them uncritically. Independent auditors are well-positioned to support this process, not by replacing regulators, but by preparing organizations for the realities of external scrutiny.

Recommendations for Employers

Organizations seeking to deploy AI responsibly in hiring must act early rather than react defensively. Independent audits should be conducted as a readiness exercise, enabling organizations to identify vulnerabilities before formal enforcement begins. Vendor transparency should be demanded in contracts, with suppliers required to disclose the data sources, evaluation metrics, and limitations of their AI systems. Internal governance frameworks should be established, ensuring that policies and oversight mechanisms are in place to monitor AI use continuously. Training HR personnel is equally critical, as staff must be able to interpret and challenge algorithmic outputs rather than defer to them uncritically. Finally, organizations should communicate their commitments to fairness openly, publishing responsible AI policies to reassure candidates, regulators, and stakeholders alike.

Conclusion

AI in hiring represents both an opportunity and a liability. It promises efficiency and scalability but simultaneously risks embedding discrimination at scale. The EU AI Act’s designation of hiring systems as “high-risk” signals the seriousness with which regulators now approach this domain. Employers cannot afford to treat compliance as optional or defer responsibility to vendors.

The case of disability bias illustrates how algorithmic systems can invisibly exclude qualified candidates, compounding barriers for already marginalized groups. Independent audits provide a practical and ethical response. By assessing risks, mapping compliance gaps, and strengthening governance, organizations can prepare proactively rather than wait for enforcement or lawsuits.

The future of fair hiring depends on such proactive action. Organizations that act early will not only avoid penalties but also demonstrate leadership in building inclusive and responsible workplaces in the age of AI.

References

Cowgill, B. (2023). Algorithmic hiring and employment discrimination. Annual Review of Economics, 15(1), 547–573. https://doi.org/10.1146/annurev-economics-061622-031207

European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).

European Parliament. (2024). Artificial Intelligence Act: Final text adopted.

NYC Department of Consumer and Worker Protection. (2023). Guidance on automated employment decision tools (Local Law 144).

OECD. (2025). Emerging divides in the transition to artificial intelligence. OECD Publishing. https://doi.org/10.1787/eeb5e120-en

Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 469–481. https://doi.org/10.1145/3351095.3372828

Shumailov, I., Shumaylov, R., Papernot, N., & Anderson, R. (2021). The curse of recidivism: Training on biased data in hiring algorithms. arXiv preprint arXiv:2106.00545. https://arxiv.org/abs/2106.00545

U.S. Equal Employment Opportunity Commission. (2023). Assessing adverse impact in software, algorithms, and artificial intelligence used in employment selection procedures under Title VII. https://www.eeoc.gov/laws/guidance
