What Teaching Writing Taught Me About Judgment in the Age of AI

February 19, 2026

Before I started working with organizations on workforce strategy, I spent nearly 15 years teaching writing, most recently as an English professor at Carnegie Mellon University, a place often called the “birthplace of AI.”

In First-Year Composition, a course nearly every college student takes, we don’t expect polished essays on the first try. We teach process. Revision. We teach students to interrogate assumptions, evaluate evidence, and build arguments deliberately. Even on a campus defined by technological acceleration, we were teaching students to slow down and think.

When students write this way, they learn to examine their own reasoning and reconsider their assumptions. And they struggle. But that struggle is formative. Through revision, they build confidence not in being instantly right but in improving their thinking. Revision builds judgment. Judgment builds ownership. Ownership builds performance.

AI changes that experience.

After years of writing extensively and coaching students through draft after draft, I understand the labor of good thinking. That’s precisely why I use generative AI the way I do. It helps me move from a blank page to structure faster than I ever could before. I use it as a thought partner, sharpening my reasoning and reducing the painful inertia of simply starting. It accelerates production.

But while AI collapses the cost of production, it does not collapse the cost of evaluation.

When production becomes instant, evaluation can start to feel optional. That tension—between speed and judgment—keeps me up at night.

The Productivity Paradox We’re Tracking

In our recent research at Seramount, we’re tracking what many call the “AI productivity paradox.”

Across industries, AI is increasing output. Drafts move faster. Analysis scales. Content multiplies. And yet, broad productivity gains remain uneven. Many organizations report abandoned pilots, rising review burdens, and unclear ROI.

When production becomes cheaper, lower-quality work can scale faster than judgment can catch it. If evaluation standards, decision rights, and accountability structures don’t evolve alongside tool adoption, organizations create output without sustained value. The paradox is simple: Work gets faster—and worse.

Our research finds several consequences of this paradox, but from my vantage point, here are two of the most urgent:

1. The Capability Gap

AI will disproportionately reward employees who already know how to think critically and challenge outputs.

Research on the emerging “AI fluency gap” shows that a relatively small group of evaluative users captures most of AI’s upside, while others see marginal benefit—or even increased workload. Researchers warn that this fluency gap is quickly becoming a new digital divide, shaping who advances and who doesn’t.

In other words, AI does not automatically raise performance levels. It amplifies whatever capability already exists inside your organization. At first, the dashboards look strong. Output rises. Usage increases. But beneath the surface, quality begins to decline.

The danger isn’t that AI won’t work. It’s that it will work unevenly, and that unevenness compounds. Over time, it shows up as inconsistent work, increased rework, and slower decisions despite faster production.

You may not feel this immediately.

But within a few years, the gap between speed and quality becomes a structural drag on performance. Left unaddressed, it quickly turns into mounting revenue risk.

2. The Human Strain

The second issue is more human.

Research shows employee trust in workplace AI is lagging adoption. Many workers report skepticism about AI’s reliability, unclear accountability, and anxiety about how these tools affect job security. Those who report being most “productive with AI” also report higher burnout and emotional disconnection, consequences that worry me.

Corporations are already navigating rising fatigue and uncertainty. When speed becomes the dominant metric and evaluation remains implicit, employees begin to ask:

  • What matters now?
  • How will I be judged?
  • Who is accountable?

When answers to those questions are unclear, trust erodes.

And trust is not a soft metric. It shapes engagement, discretionary effort, and whether employees speak up or self-protect. It determines whether managers enforce standards consistently and whether teams align around shared expectations. When trust is weakened, engagement declines. When engagement declines, performance follows.

What to Do About It

These risks are not inevitable. They are shaped by workforce design choices that leaders must take seriously. Here are three places to start:

1. Make Human Capability Your AI Multiplier

Technical comfort is not enough. Organizations must prioritize evaluative capability—both in whom they hire and how they upskill. Fears that AI will automate work away are widespread, but that framing misses the point: AI doesn’t eliminate the need for human skill; it raises the bar for it.

Judgment, critical thinking, communication, and contextual reasoning aren’t “soft” add-ons. They determine whether AI improves performance or simply accelerates output. Two implications follow for business leaders:

Our research shows that as AI accelerates production, the skills that differentiate performance are human. If you don’t deliberately invest in the human side of human-machine interaction, you risk scaling low-quality work—and that quickly turns into revenue and reputational risk.

That’s why leadership alignment matters. As AI reshapes performance expectations, many senior HR leaders are turning to forums such as Seramount’s HR Executive Board to pressure-test hiring standards, compare upskilling strategies, and align on how AI is redefining performance.

2. Identify the Real Friction

If you measure AI adoption by usage and output alone, you will miss where execution is quietly degrading. The earliest signs of risk rarely appear in dashboards. They show up in misalignment.

Leaders need visibility into:

  • Where managers are reinforcing different standards of quality or speed
  • When review cycles are lengthening despite faster drafting
  • How teams are over-relying on AI—or quietly resisting it
  • Why accountability for AI-assisted decisions is unclear

Left unexamined, these frictions let output rise while decision quality varies. Rework increases, trust erodes, and execution slows in ways that are difficult to diagnose.

Understanding the lived experience of AI adoption—not just tool usage—is essential to prevent execution risk. That requires structured, disciplined sensing and clear signals on where alignment is assumed but not real.

3. Protect Opportunity as AI Scales

AI amplifies differences. Full stop.

Without deliberate design, influence and advancement will be concentrated among those already confident and technically fluent.

That concentration has consequences—not just for inclusion but also for long-term performance. When opportunity narrows, organizations shrink their leadership pipeline, reduce diversity of thought, lose robust customer insights, and increase blind spots in decision-making.

Leaders must ask:

  • Who is gaining influence through AI-enabled work?
  • Who is deferring or becoming less visible?
  • Whose development pathways are shrinking as tasks are automated?

Designing AI-enabled work with opportunity in mind is not separate from performance strategy. It is performance strategy.

Organizations that proactively examine these dynamics strengthen both inclusion and execution resilience.


As was the case with the Internet, it will take us time, decades even, to fully understand AI’s impact on our worlds and our workplaces. But that doesn’t mean we should wait to act. Speed is seductive because it looks impressive on dashboards. In the long run, though, judgment is what will protect enterprises.

Want to prevent faster work from becoming lower-quality performance?

Join us for our upcoming webinar, “AI Is Making Work Faster—and Worse: The Productivity Problem Leaders Can’t Ignore.”


Topics

DEI Strategy and Measurement, Employee Experience and Culture, Future of Work, Talent Management – Recruitment and Retention
