The use of artificial intelligence (AI) in hiring is expanding, but experts warn it is no silver bullet for hiring challenges. According to the 2023 Hiring Benchmark Report by Criteria, just 12% of hiring professionals currently use AI in their recruiting or talent management processes, yet AI solutions for streamlining sourcing, informing selection decisions, and other tasks are being actively marketed. Josh Millet, Founder and CEO of Criteria, stated, “When AI tools are well designed, deployed, and monitored properly, the technology has the potential to mitigate discrimination and bias on a broader scale.”
The introduction of AI in hiring is a delicate matter, given the legal, cultural, and business implications. In light of concerns surrounding AI bias, the Center for Industry Self-Regulation (CISR), in collaboration with companies like Amazon, Unilever, Koch Industries, and Microsoft, has developed principles and protocols for trustworthy AI in hiring. These guidelines focus on transparency, fairness, non-discrimination, technical robustness, safety, governance, and accountability in AI-powered hiring processes. Additionally, they specify criteria for third-party AI vendor certification to ensure accountability beyond the employer.
AI is currently utilized in various aspects of the hiring and recruiting process, including job description development, talent sourcing, assessments, applicant screening, candidate communication, and employee training. AI tools like OpenAI’s ChatGPT and Google’s Bard, along with recruiting chatbots and proprietary solutions, are instrumental in these processes.
While AI holds the potential to make hiring more equitable and efficient, it is not without challenges. Explicit and implicit biases persist in the hiring process despite efforts to eliminate them, and although AI can be a powerful tool for reducing bias, its outcomes aren’t guaranteed. Millet pointed out that in the quest to remove bias, AI systems can sometimes inadvertently amplify it.
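One concrete way employers monitor an AI screening step for amplified bias is the EEOC’s long-standing four-fifths (80%) rule of thumb: the selection rate for any group should be at least 80% of the rate for the group selected most often. A minimal sketch of that check (the group labels and rates below are hypothetical, for illustration only):

```python
def adverse_impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate.

    Under the EEOC four-fifths rule of thumb, a ratio below 0.8 is a
    signal to investigate the screening step for adverse impact.
    """
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Hypothetical selection rates produced by an AI resume screen
rates = {"group_a": 0.40, "group_b": 0.28}
ratios = adverse_impact_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b's ratio is 0.28 / 0.40 = 0.70
print(flagged)  # ['group_b'] — below the 0.8 threshold, worth investigating
```

A check like this doesn’t prove or disprove discrimination on its own, but running it continuously is one form the “monitored properly” condition in Millet’s caveat can take.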
Maintaining human oversight in AI-enabled hiring processes is essential, especially for high-stakes decisions and data-rich applications. Employers need to understand where AI is involved and where candidates may require human engagement, such as for accommodations under the Americans with Disabilities Act. Providing candidates with transparent data collection notices and ensuring informed consent are best practices.
To build trust and transparency, employers and vendors should adopt “glass box algorithms,” which are transparent and can explain their conclusions. This approach can counter the issues associated with “black box algorithms,” which are opaque and can erode trust.
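The glass-box idea can be illustrated with a scorer whose reasoning is inspectable by construction: a simple linear model that reports each input’s contribution alongside the overall score. The feature names and weights below are hypothetical, not drawn from any real hiring system:

```python
# Minimal "glass box" scorer: a linear model whose per-feature
# contributions can be reported with every decision it makes.
# Weights and feature names are illustrative assumptions only.
WEIGHTS = {"years_experience": 0.30, "skills_match": 0.55, "assessment_score": 0.15}

def score_with_explanation(candidate: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the overall score plus each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"years_experience": 0.6, "skills_match": 0.8, "assessment_score": 0.9}
)
# 'why' shows exactly which inputs drove the score, e.g. that
# skills_match contributed more than years_experience here.
```

A black-box model (a large neural network, say) might score candidates more accurately, but it cannot produce this per-decision breakdown directly, which is the trust trade-off the article describes.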
The integration of AI in hiring is a promising development, but it requires ethical and transparent practices to maximize its potential while mitigating risks. Employers must prioritize fairness, transparency, and accountability when implementing AI in their talent acquisition processes.