Over the past few weeks, I’ve been in the market with partners and senior leaders, and two themes have remained consistent across conversations.
First, leaders feel behind on AI. Global AI spending is predicted to top $2.5 trillion in 2026, a 44% increase year over year. The technology is moving incredibly fast, investment is accelerating, and the pressure to show progress is real.
Second, trust has not kept pace with adoption. Only 46% of people globally say they are willing to trust AI systems. Some teams are all in; others are waiting for clearer direction. Some employees are experimenting quietly; others are being actively encouraged by their managers. Many are receiving little guidance, and some are resisting altogether.
That unevenness complicates how organizations can and should assess the success of their transformation rollouts, because usage statistics tell us only part of the story.
Our recent research finds that AI investments rarely fail all at once. They fail in the gaps between strategy and adoption, between experimentation and governance, between output and judgment, and between what leaders think is happening and what employees are actually experiencing.
In other words: AI may not be failing. Your organization may be absorbing it wrong.
The pressure to move fast is real. The ROI is less clear.
While statistics surrounding AI largely show widespread adoption, measurable productivity gains have yet to manifest. A recent National Bureau of Economic Research survey of almost 6,000 executives across the United States, United Kingdom, Germany, and Australia found that while 69% of firms actively use AI, nearly 90% of those surveyed reported no impact on productivity or employment.
That gap between adoption and impact underscores what we call the AI productivity paradox. The problem is not that AI cannot make individual tasks faster; it can. Employees can draft, summarize, analyze, and generate more options in less time. What is becoming clearer, however, is that faster output does not by itself create a stronger organization, no matter how seductive speed might be.
AI collapses the cost of production, but it does not collapse the cost of judgment. People still must evaluate quality, interpret context, manage risk, and own the decisions that follow.
That is where many organizations are stuck: They are adding AI to old workflows and expecting new performance. But AI value does not come from grafting new tools onto legacy work. It comes from redesigning work—preserving human judgment, expanding learning, and ensuring accountability guardrails are in place.
What AI is really disrupting
When people talk about AI disruption, they often mean jobs: which roles will change, which tasks will be automated, and where headcount may be reduced. Yes, that conversation matters, but its focus is too narrow.
The bigger disruption is happening inside the systems that determine whether organizations can keep building capability, developing managers, and maintaining trust as the work itself changes.
AI is not only changing what work gets done. It is also changing how people learn, how ownership is defined, what managers must reinforce, and whether employees trust the systems transforming their work.
That is why senior leaders need a more nuanced view of disruption. The risk is not simply that some jobs will change. The risk is that organizations will change the work without redesigning the human systems that make work effective.
1. AI is disrupting leadership pipeline strength.
The least discussed AI risk may also be the most consequential for CHROs: capability development.
AI is drastically changing how employers must assess early-career talent readiness. While hiring committees once asked whether a candidate could perform an entry-level job, they now also need to ask how quickly that person can grow, adapt, and move into more complex roles when parts of that early work are automated. That shift introduces a leadership pipeline risk that many organizations are not yet naming.
Early-career work has never been only about output. It has also been about how people learn, because it has provided the training ground for experimentation, failure, and, eventually, growth. First drafts, research summaries, analyses, review cycles, edits, and feedback loops have historically helped employees understand not only what the answer is but also how stronger thinkers arrived there.
If AI compresses or removes too much of that work without a new development model in place, organizations may gain efficiency at the task level while weakening the capabilities they will need from those same employees later: judgment, discernment, ownership, and leadership readiness.
Our recent research names this risk directly: AI can reduce opportunities to build lasting expertise when humans struggle to keep pace with the speed at which AI can evaluate data. As a result, experienced workers (those with higher levels of confidence and stronger metacognition) may benefit more from AI because they have already acquired the analytical skills needed to validate outputs, while less experienced workers may be more susceptible to error, overreliance, or missed learning opportunities.
For this reason, leaders cannot rely on adoption metrics to determine whether the organization is actually getting stronger. A workforce can use AI more often and still become less prepared to lead complex work over time.
For CHROs, that is the real risk, and one we’re examining closely through our HR Executive Board: What happens to leadership pipelines when AI changes the early work that once built judgment?
The real risk is judgment disruption—not merely job disruption.
2. AI is disrupting manager capability.
AI is shifting the role of managers faster than most organizations are supporting them.
Managers are now expected to evaluate AI-assisted work, coach employees with different levels of AI fluency, reinforce responsible use, explain what is acceptable, and keep teams moving as fast as possible through uncertainty. Many are performing these duties without shared decision rules, clear examples, or updated performance expectations.
That matters, because uneven manager behavior quickly leads to execution volatility: One team experiments safely; another waits. One manager rewards AI fluency; another quietly discourages it. One function builds capability; another falls behind.
These gaps are not distributed evenly across the workforce. Recent Lean In findings show that women are 25% less likely than men to be encouraged by managers to use AI and less likely to be praised or promoted when they do.
That finding should concern CHROs. Manager encouragement is not just a behavioral detail; it can become an access point to future capability. If some employees are invited into the AI learning curve earlier than others, today’s adoption gap can become tomorrow’s advancement gap. Over time, those differences become execution risk, not because people are unwilling to change but because the organization has not made the change manageable.
3. AI is disrupting trust.
Trust may be the most underestimated pitfall of the AI transition.
Employees are trying to determine what AI means for their role, their value, their privacy, their evaluation, and their future. When leaders do not make AI use visible and understandable, adoption does not just slow; it often stalls.
Edelman’s 2026 Trust Barometer shows how fragile this moment is: 54% of low-income respondents and 44% of middle-income respondents believe generative AI will leave them behind rather than create benefits for them.
While change management communication is a critical piece of this puzzle, fragile trust ultimately signals a broader operational risk. People do not adopt what they do not trust. And they do not trust what they cannot see, question, or understand.
Slowing down is not the same as stalling.
These risks do not mean that AI has failed or that it lacks incredible potential. They mean leaders need to stop treating speed as the strategy.
Some stages of AI adoption should move quickly. Employees need room to experiment, build practical fluency, and test low-risk use cases. But other stages require a slower, more intentional pace: redesigning workflows, clarifying decision rights, equipping managers, protecting trust, and rebuilding development pathways when early-career work changes.
That is not stalling. It is sequencing.
To sequence AI in a way that creates enterprise value, leaders must ask harder questions:
- What work should AI eliminate, and what work should remain because it builds judgment capabilities?
- Where are we increasing output without reducing workload?
- Who owns the final decision when AI produces the first answer?
- What standards do managers need to reinforce differently?
- Where is adoption uneven by function, level, role, or team?
- Where could AI weaken trust, compress learning, or widen capability gaps?
These are work design questions. And they belong squarely on the HR agenda.
HR may not own the technology, but HR owns many of the systems that determine whether AI creates durable value: role architecture, learning, manager capability, performance expectations, leadership development, trust, and change adoption.
The next phase: better absorption through governance
Most organizations are already measuring AI activity. They know which tools have been launched, how many pilots are underway, and where usage is increasing.
But usage does not tell leaders whether the organization is absorbing AI well.
Absorption requires a different lens on the problem: governance, which reveals where AI is creating friction beneath the usage metrics. HR leaders need to know where managers are interpreting expectations differently, where trust is fragile, and where capability development is being compressed. Those are the execution signals that Assess360 helps surface. And that is where the AI disruption conversation needs to get more precise: the question is not only which jobs may change but also where AI is reshaping the systems that make work effective.