3 Ways Inclusion Leaders Should Shape AI in the Workplace

February 16, 2026

Few topics dominate leadership conversations right now the way AI does. Business leaders are pushing for scale. Technology teams are building and buying. HR is focused on adoption and skills. But the road is less clear for inclusion leaders, leaving many wondering: What is our role in all of this?

Inclusion leaders don’t need to become AI experts, but they can’t sit this one out either. That’s because AI adoption is far more than a technological shift; it’s a cultural shift too, shaping how people experience change at work, how opportunity is distributed, and how decisions land with employees. Those are areas where inclusion leaders are uniquely qualified to lead. When those perspectives are missing from AI decisions, organizations may move fast, but they do so without fully understanding who is being supported and who is being left behind.

To be effective, inclusion leaders should approach AI the same way they would approach any major shift in the workplace: by examining its human impact, ensuring equitable access, and advocating for responsible use from the start.

1. Start with the Human Impact

Research shows a clear disconnect between how leaders and employees are experiencing this shift. While 76 percent of executives believe employees are excited about AI, only 31 percent of individual contributors say they feel the same. This isn’t just resistance to change but rather uncertainty about what AI adoption will actually mean for people’s roles, workloads, and futures at work.

That uncertainty is already taking a toll on culture and trust. In a 2025 survey conducted in the wake of widespread layoffs, 68 percent of workers said they believe AI will lead to higher unemployment, and nearly half believe their own job will eventually be eliminated by AI.

At the same time, studies show that many employees using AI tools feel more burdened, not less, reporting heavier workloads and unclear expectations around productivity gains. When change feels opaque and emotionally charged, psychological safety erodes long before performance improves.

2. Keep Access and Opportunity Top of Mind as Work Changes

By now, most leaders recognize the pattern. Headlines about AI-driven job redesign, shrinking entry-level roles, and rapid skills shifts are everywhere. Many organizations are already planning to replace roles with AI, particularly in operations, back-office functions, and entry-level roles.

The challenge is that the demand for reskilling is outpacing employers’ offerings. The World Economic Forum predicts that nearly 40 percent of core job skills will change by 2030, yet 43 percent of employees lack access to necessary training. It’s no wonder employers consistently cite skills gaps as one of the biggest barriers to transformation.

Explore Seramount’s analysis of how skill demand is shifting in the era of AI.

What’s becoming clearer is how unevenly those changes are landing. Research from the Algorithmic Justice League and Brookings shows that women, people of color, lower-income earners, and later-career professionals are significantly less likely to receive AI training or to be included in early pilot programs. Without intentional intervention, AI adoption risks reinforcing the very inequities inclusion leaders have spent years working to dismantle.

3. Help Shape Responsible and Trustworthy AI Use

As AI becomes more deeply embedded into workplace processes and decisions, organizations need a clear watchdog to ensure it is being used responsibly. This role goes by many names—trustworthy AI, accountable AI, responsible AI—but the underlying need is the same: Someone must consistently ask whether AI is being applied fairly, transparently, and with appropriate human judgment.

We are already seeing the consequences of moving too quickly without that scrutiny. Recruiting is one of the most visible examples. Research shows that human decision-makers often mirror the biases embedded in AI tools rather than challenge them, raising concerns about how much oversight these systems receive once they are in use. As recruiters face higher application volumes and increasing pressure to automate, those risks can scale quickly.

But this issue extends far beyond hiring. As AI is applied across performance management, promotions, benefits administration, workforce planning, and even employee monitoring, its influence touches nearly every part of the employee experience. For employees who have historically faced bias in workplace systems, AI can feel less like progress and more like another opaque layer between them and opportunity. Over time, that opacity creates a trust gap that technology alone cannot fix.

The Bottom Line

Inclusion leaders have a critical role to play as AI becomes more embedded in how work gets done—not just because their voice belongs in the conversation but also because many organizations are actively reassessing where inclusion efforts add the most value. AI puts inclusion leaders closer to core business priorities at a moment when showing that connection matters more than ever.

That doesn’t mean inclusion leaders need to weigh in on every AI decision. The work now is to be clear about where your perspective is most useful and to lean into that lane. For some, that may be access to opportunity and upskilling. For others, it may be employee trust, bias mitigation, or accountability in people decisions. The strongest impact comes from focusing on the areas where inclusion expertise directly supports business outcomes and being explicit about that value.

If you want to dig deeper into the role inclusion leaders can play in AI, check out a recent conversation between Seramount experts and the team at NVIDIA, where they discuss how responsible AI is being put into practice at NVIDIA. Watch now.

AI and Inclusion: A Practical Starting Point for Leaders


Topics

DEI Strategy and Measurement, Future of Work
