Over the past few months, I’ve been interviewing leaders working at the intersection of AI and inclusion. What kept surfacing was not resistance to AI itself but a deeper struggle over trust, judgment, and who gets to remain legible as work changes.
One leader described the tension this way: Employees in his organization keep hearing a now-familiar line: “AI is not going to replace you, but people who know how to use AI are going to replace you.” In a workplace already marked by layoffs and eroding trust, that sentence does not land as encouragement. It lands as a threat. Employees, he said, were already asking whether AI was “the next wave,” while leaders were still talking about rollout, productivity, and momentum. Part of his job had become translating employee fears back to decision-makers who were not close enough to hear how the message actually sounded from below.
That, to me, is the leadership challenge of this moment.
Executives cannot let AI “happen” to the workforce. Whether leaders realize it or not, their adoption strategies are already shaping who is seen as capable of learning new tools, who bears the risk of getting it wrong, and who is expected to adapt or fall behind. In our research at Seramount, a key tension has emerged: As AI moves deeper into work, it is already redistributing power in the workplace, including power over who gets access, whose judgment carries weight, and who benefits first.
The market is finally catching up to the worker story
The broader market conversation is beginning to circle this problem, even if it has not fully named it yet. Harvard Business Review has framed AI as a strategic choice between automation and augmentation, warned that AI often intensifies work rather than lightening it, and raised concerns about how workers, especially junior ones, will build judgment if AI takes over the messy work where judgment used to develop. Brookings has focused on adaptive capacity, including evidence that women are heavily represented among highly AI-exposed workers with low adaptive capacity. McKinsey, meanwhile, is urging leaders to redesign workflows around human-computer interaction, not simply add AI to isolated tasks.
Even so, much of the market conversation still stops short. Executive enthusiasm remains high. Returns are still uneven. Underneath that mismatch sits the employee experience, and at the center of it, trust. Trust is not a technical variable alone. It is shaped by memory, context, and whether workers believe leadership is being honest about what is changing and why. HBR’s broader 2026 work trends coverage reflects that tension directly, noting that CEO expectations for AI-driven growth remain high while employees on the ground remain skeptical, even as many investments fail to deliver meaningful returns.
Who are we imagining when we say “the human”?
I spent nearly fifteen years in higher education as a researcher and professor studying sexual misconduct, power, and organizational climate. One of the questions I regularly asked students when teaching about unconscious bias was deceptively simple: When you picture someone associated with authority, expertise, and legitimacy, who comes to mind first?
Let’s take a CEO.
For many people, the image that surfaces most quickly is still someone white, able-bodied, male, and comfortably aligned with dominant cultural norms. That reflex tells us something important. The default human is never neutral. It is socially constructed, and those constructions shape who gets recognized as credible, trustworthy, competent, and worth designing around.
I think the same exercise applies here. The phrase “human judgment” is everywhere right now. Human oversight. Humans in the loop. Human-centered AI. I understand why. But I also think the phrase can hide as much as it reveals.
Who are we imagining when we say “the human”? Is it the white, Stanford-educated man in his thirties furiously producing code? Or is it:
- the junior employee whose developmental work is disappearing,
- the Black employee whose face the video camera’s defaults are less likely to recognize,
- the woman in an administrative role whose job sits squarely inside AI exposure,
- the worker whose accent or dialect is heard as deviation rather than intelligence,
- the disabled employee whose needs still get treated as excessive accommodations,
- the worker in India, Ghana, or Vietnam whose language, infrastructure, and context are rarely treated as the design center.
One leader I interviewed made the underlying problem unmistakable: “Inclusion is not the default,” and if equity is ignored, AI systems “will only magnify the exclusion that already exists in society.” For her, the stakes are clear: “It’s not just about a poor user experience; it’s about perpetuating systemic inequity.”
She also made the business case with equal force: “More people means more business.” That pairing matters. Inclusion is not window dressing. It is part of how organizations widen trust, build better systems, and avoid mistaking the majority case for the human case.
Adding AI to legacy work redistributes opportunity
What I see in the research and in these interviews is not a lack of motivation. Rather, it’s a lack of workforce redesign. Too many organizations are bolting AI onto legacy work models and calling the resulting friction transformation. McKinsey has begun to name this problem directly: Value comes when leaders redesign work itself, not when they simply tack AI onto preexisting workflows. But that is still where many organizations are. They are layering new tools onto old assumptions about who gets sponsorship, who gets developmental work, who gets second chances, and who is expected to absorb uncertainty quietly.
That is why I keep coming back to power. AI is changing productivity, yes. But it is also changing how opportunity moves. Our forthcoming insight paper argues that trust is becoming the fault line in workplace AI, that the learning curve is already uneven, and that AI is moving into the systems that shape opportunity. Those shifts happen through work design—or lack thereof. When leaders fail to redesign work, the benefits of AI tend to flow first to the people who already have fluency, sponsorship, and room to experiment.
One researcher I interviewed offered a clear distinction: “If you’re designing AI for collaboration, AI becomes a partner; it becomes something that augments, not replaces humans.” That line clarifies what gets lost when organizations move too quickly. Work gets hollowed out when leaders automate the very parts of work through which people build judgment. The repetitive but revealing messy middle disappears. The junior employee no longer has to wrestle with the early version of a problem long enough to build discernment or confidence.
When work gets hollowed out, mistakes do not disappear. They migrate upward. They surface later, in higher-stakes decisions, with more reputational and revenue risk attached. AI does not remove the need for judgment. It concentrates the consequences of weak judgment in places organizations can least afford it.
Three moves inclusion leaders need to make now
The framework emerging from our research comes down to three moves.
First, earn trust.
Employees need more than mandates or usage metrics. They need legible expectations, visible guardrails, and evidence that leaders understand AI can do harm. In one leader’s experience, trust was rebuilt only when responsible AI was embedded into the rollout itself and employees could see that leadership was taking their concerns seriously rather than treating them as resistance.
Second, prioritize judgment.
One of the strongest frames I heard described AI as “human first—AI empowered.” That is a useful standard because it keeps the tool in its place. AI can accelerate analysis. It can widen visibility. It can help surface patterns sooner. But judgment, accountability, and final ownership still have to be designed explicitly, especially where hiring, promotion, pay, and performance are concerned.
Third, track how opportunity is being redistributed by asking:
- Who gets training?
- Who gets manager encouragement?
- Who gets to experiment safely?
- Who loses developmental work?
- Who is being augmented, and who is simply being measured against a tool?
Our research argues that inclusion leaders need clearer visibility into current AI use, early readiness gaps, and places where AI is already shaping opportunity. Those are not peripheral questions. They are where the next phase of inequity is likely to take shape if no one is watching closely enough.
From harm to imagination
Leaders need to take the worker side of this story seriously. Harm is real. Trust is fragile for a reason. But I also do not want to end there.
One leader told me that AI has “a serious marketing problem” because the public story centers almost exclusively on what will be lost. His point was not that harm should be disregarded (harm must be addressed, measured, and governed) but that the conversation cannot stop with damage control. We also need to identify how, as another leader put it, AI can be “a force for good.”
That possibility depends on leadership. It depends on whether leaders are willing to use AI creatively to address inequity, with justice in mind: to surface patterns sooner, widen access, and redesign work without simply inheriting old lines of advantage. That means earning trust, protecting judgment, and taking the redistribution of opportunity seriously.
Want to go deeper?
Join our upcoming webinar, where we’ll unpack the research behind these three moves and explore how inclusion leaders can spot early readiness gaps, define where human judgment still belongs, and identify where AI is already redistributing opportunity across the workforce.