AI is everywhere in hiring. Nearly all Fortune 500 companies now use it in some form, whether to sift through résumés, recommend candidates, or even run initial interviews. And on the surface, it’s delivering: A recent Gallup study found that 45 percent of HR leaders say AI has already improved efficiency in their organizations.
But efficiency comes with a catch. Only a quarter of candidates say they trust AI to evaluate them fairly. They might be right to worry: A recent investigation revealed that AI-powered salary negotiation tools often advise women and minority candidates to ask for lower pay than their White male peers. And just this summer, a judge allowed a class-action lawsuit alleging bias in Workday’s AI hiring tools to proceed.
The pattern is clear: Without careful oversight, AI risks amplifying the very inequities in recruiting that inclusion leaders are working to eliminate.
How AI Hurts and Helps Recruiting Fairness
AI has transformed how organizations find and evaluate talent. What once required hours of manual screening and résumé sorting can now be done in minutes. Algorithms can scan thousands of applications, flag candidates with relevant experience, and even predict which ones might be the best fit. In theory, this should help broaden the pool, surfacing applicants with unconventional career paths or transferable skills who might otherwise be overlooked.
But the risks are just as real. These same systems can unintentionally filter out qualified candidates: those with career breaks, those with nontraditional résumés, or those who use assistive technologies. And because the process is automated, those exclusions can happen at scale and without human awareness.
Recent headlines show that these risks aren’t hypothetical, and bias shows up not just once but across different tools and contexts. The consistency of these missteps should be a warning sign: Fairness won’t happen by default.
This is where inclusion leaders come in. Your role isn’t to fine-tune search strings but to ensure the systems themselves have checks and balances. That means asking the right questions:
- How is the AI being tested for bias?
- Who is accountable for monitoring its outcomes?
- What human oversight exists to ensure efficiency isn’t coming at the expense of equity?
By shaping these guardrails, inclusion leaders can help organizations harness AI’s promise without letting it hardwire discrimination into the hiring process.
Guardrails for Inclusive AI in Hiring
While most inclusion leaders may not be writing Boolean strings or running LinkedIn searches themselves, they play a critical role in shaping how their organizations use AI in hiring and beyond.
Here are a few ways to lean in:
- Be transparent: Tell candidates when AI is used in your hiring process, and make sure a human always reviews final decisions.
- Audit regularly: Test your AI-driven tools for evidence of bias. Look for trends in the data: Is one group being advanced at a higher rate than others? Are certain résumés consistently screened out? Adjust accordingly. (A simple audit sketch follows this list.)
- Engage critically: Treat AI like a teammate, not a decision-maker. Ask follow-up questions, challenge its recommendations, and compare its outputs against your own judgment. Even go so far as to ask: “What biases may exist within this response?”
- Continue the fundamentals: Even the smartest tech doesn’t replace inclusive hiring practices such as standardized interview processes, clear evaluation rubrics, and strong referral pipelines.
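To make the audit step concrete, here is a minimal sketch of the kind of check a team might run on its own screening logs. It computes the rate at which an AI tool advances candidates from each group and flags any group whose rate falls below four-fifths of the highest group’s rate, a common heuristic drawn from the EEOC’s “four-fifths rule.” The data, column names, and threshold here are illustrative assumptions, not the output of any particular vendor’s tool.

```python
# Minimal bias-audit sketch: compare AI screening pass-through rates by group.
# The data, field names, and 0.8 threshold are illustrative assumptions.
from collections import defaultdict

# Hypothetical screening log: (candidate_group, advanced_by_ai)
screening_log = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
advanced = defaultdict(int)
for group, was_advanced in screening_log:
    totals[group] += 1
    advanced[group] += was_advanced  # True counts as 1

rates = {group: advanced[group] / totals[group] for group in totals}
best_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    impact_ratio = rate / best_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"  # four-fifths heuristic
    print(f"{group}: advanced {rate:.0%} of candidates "
          f"(impact ratio {impact_ratio:.2f}) -> {flag}")
```

A flagged ratio is not proof of bias on its own, but it is exactly the kind of trend that should trigger a human review of how the tool is making its calls.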
An Opportunity: Using AI to Advance Skills-Based Hiring
Back in 2020, skills-based hiring gained momentum when the White House issued an executive order encouraging federal employers to waive degree requirements and open doors to qualified workers without four-year diplomas. The idea was simple but powerful: Reduce unnecessary barriers, expand opportunity, and modernize recruitment.
Yet follow-through has lagged. Research shows that among companies announcing a move to skills-based hiring, nearly half made little meaningful change in practice, even after removing degree requirements from job postings.
Now is a great time to recommit to skills-based hiring. AI can breathe new life into the shift, helping recruiters identify candidates based on the specific capabilities a role requires rather than credentials alone. By directing AI tools to prioritize skills and experience, organizations can minimize the degree bias that still shapes too many hiring decisions.
Not only would this open the door to more candidates, but it can also boost performance: Employees hired for skills over pedigree are 1.9 times more likely to perform effectively.
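As a deliberately simplified illustration of what “prioritizing skills over credentials” can mean in practice, the sketch below scores candidates only on demonstrated skills against a role’s weighted requirements; the degree field exists in the data but is never read. The rubric, weights, and candidate records are hypothetical.

```python
# Skills-first screening sketch: score candidates on role skills, not degrees.
# The rubric, weights, and candidate records below are hypothetical.

ROLE_SKILLS = {"sql": 3, "data_visualization": 2, "stakeholder_communication": 2}

candidates = [
    {"name": "A. Rivera", "degree": None,          "skills": {"sql", "data_visualization"}},
    {"name": "B. Chen",   "degree": "BA, History", "skills": {"stakeholder_communication"}},
]

def skills_score(candidate):
    # Sum the weights of role-relevant skills; the degree field is never consulted.
    return sum(w for skill, w in ROLE_SKILLS.items() if skill in candidate["skills"])

for c in sorted(candidates, key=skills_score, reverse=True):
    print(f"{c['name']}: skills score {skills_score(c)} / {sum(ROLE_SKILLS.values())}")
```

The design choice worth noting is structural, not clever: if the evaluation function never touches the credential fields, degree bias cannot leak into the ranking through them.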
For inclusion leaders, this is a chance to reframe AI not just as a risk to be managed but as a tool to actively advance inclusion when paired with intentional, skills-first hiring strategies.
The Bottom Line
Like any tool, AI reflects how it’s used. In recruiting—a process already vulnerable to inequity—it can accelerate progress or entrench bias. Without thoughtful oversight, organizations risk sidelining the very people they hope to attract.
That’s where inclusion leaders come in. You are uniquely positioned to make sure fairness doesn’t get lost in the pursuit of speed and to champion ways AI can actually expand opportunity, such as recommitting to skills-based hiring.