Few topics dominate leadership conversations right now the way AI does. Business leaders are pushing for scale. Technology teams are building and buying. HR is focused on adoption and skills. But the road is less clear for inclusion leaders, leaving many wondering: What is our role in all of this?
Inclusion leaders don’t need to become AI experts, but they can’t sit this one out either. That’s because AI adoption is far more than a technological shift; it’s a cultural shift too, shaping how people experience change at work, how opportunity is distributed, and how decisions land with employees. Those are areas where inclusion leaders are uniquely qualified to lead. When those perspectives are missing from AI decisions, organizations may move fast, but they do so without fully understanding who is being supported and who is being left behind.
To be effective, inclusion leaders should approach AI the same way they would approach any major shift in the workplace: by examining its human impact, ensuring equitable access, and advocating for responsible use from the start.
1. Start with the Human Impact
Research shows a clear disconnect between how leaders and employees are experiencing this shift. While 76 percent of executives believe employees are excited about AI, only 31 percent of individual contributors say they feel the same. This isn’t just resistance to change but rather uncertainty about what AI adoption will actually mean for people’s roles, workloads, and futures at work.
That uncertainty is already taking a toll on culture and trust. In a 2025 survey conducted in the wake of widespread layoffs, 68 percent of workers said they believe AI will lead to higher unemployment, and nearly half believe their own job will eventually be eliminated by AI.
At the same time, studies show that many employees using AI tools feel more burdened, not less, reporting heavier workloads and unclear expectations around productivity gains. When change feels opaque and emotionally charged, psychological safety erodes long before performance improves.
What Inclusion Leaders Can Do:
Inclusion leaders play a critical role in keeping the human experience visible as AI adoption accelerates. One immediate opportunity is to use existing inclusion infrastructure such as ERGs as listening channels to understand how AI adoption is actually landing with employees—both the benefits and the unintended consequences. Inclusion leaders can also advocate for greater transparency about how AI will be used, what decisions it will inform, and how changes may affect roles over time.
2. Keep Access and Opportunity Top of Mind as Work Changes
By now, most leaders recognize the pattern. Headlines about AI-driven job redesign, shrinking entry-level roles, and rapid skills shifts are everywhere. Many organizations are already planning to replace roles with AI, particularly in operations, back-office functions, and entry-level positions.
What’s becoming clearer is how unevenly those changes are landing. Research from the Algorithmic Justice League and Brookings shows that women, people of color, lower-income earners, and later-career professionals are significantly less likely to receive AI training or to be included in early pilot programs. Without intentional intervention, AI adoption risks reinforcing the very inequities inclusion leaders have spent years working to dismantle.
What Inclusion Leaders Can Do:
Inclusion leaders can influence how AI upskilling is structured and who gets access to it. That might mean reviewing who is invited into AI pilots and training programs, flagging where content skews toward certain roles or teams, and advocating for additional learning pathways for the employees most affected by the disruption, including entry-level employees, later-career workers, and teams whose roles are being redesigned rather than augmented.
3. Help Shape Responsible and Trustworthy AI Use
As AI becomes more deeply embedded into workplace processes and decisions, organizations need a clear watchdog to ensure it is being used responsibly. This role goes by many names—trustworthy AI, accountable AI, responsible AI—but the underlying need is the same: Someone must consistently ask whether AI is being applied fairly, transparently, and with appropriate human judgment.
We are already seeing the consequences of moving too quickly without that scrutiny. Recruiting is one of the most visible examples. Research shows that human decision-makers often mirror the biases embedded in AI tools rather than challenge them, raising concerns about how much oversight these systems receive once they are in use. As recruiters face higher application volumes and increasing pressure to automate, those risks can scale quickly.
But this issue extends far beyond hiring. As AI is applied across performance management, promotions, benefits administration, workforce planning, and even employee monitoring, its influence touches nearly every part of the employee experience. For employees who have historically faced bias in workplace systems, AI can feel less like progress and more like another opaque layer between them and opportunity. Over time, that opacity creates a trust gap that technology alone cannot fix.
What Inclusion Leaders Can Do:
Inclusion leaders can advocate for bias reviews, human oversight in high-impact decisions, and the use of established accountability frameworks when AI is introduced into people processes. They can also help define where responsibility for AI decisions sits and ensure those systems are revisited as tools evolve, not just at launch. By staying involved as AI use expands, inclusion leaders help ensure responsible use remains an ongoing practice rather than a one-time checkpoint.
The Bottom Line
Inclusion leaders have a critical role to play as AI becomes more embedded in how work gets done—not just because their voice belongs in the conversation but also because many organizations are actively reassessing where inclusion efforts add the most value. AI puts inclusion leaders closer to core business priorities at a moment when showing that connection matters more than ever.
That doesn’t mean inclusion leaders need to weigh in on every AI decision. The work now is to be clear about where your perspective is most useful and to lean into that lane. For some, that may be access to opportunity and upskilling. For others, it may be employee trust, bias mitigation, or accountability in people decisions. The strongest impact comes from focusing on the areas where inclusion expertise directly supports business outcomes and being explicit about that value.
If you want to dig deeper into the role inclusion leaders can play in AI, check out a recent conversation between Seramount experts and the team at NVIDIA, where they discuss how responsible AI is being put into practice. Watch now.
Kayla Haskins is an Associate Director, Product Marketing at Seramount. In this role, she supports DEI Practitioners and Talent Leaders in creating more inclusive workplaces by providing valuable insights and resources through webinars, blog posts, guides, infographics, and more.
With nearly a decade of experience in the technology and non-profit sectors, Kayla excels in translating complex ideas into clear, actionable concepts. She is passionate about storytelling and is dedicated to addressing today’s most pressing workplace issues to drive meaningful impact.
Kayla holds a degree in English and Creative Writing from Dickinson College. She lives in Silver Spring, MD with her partner, Nick, and their dog, Zero. In her free time, she enjoys hiking, reading, and spending time with family and friends.