Enterprise AI

What Is AI Workslop? The $9 Million Problem Hiding in Your Inbox

41% of workers receive AI-generated 'workslop' — polished-looking but low-quality output that costs nearly two hours of rework per incident. Here's why it happens, what it costs, and how to stop it.

kju Team

AI Education Experts

5 min read

"Slop" was Merriam-Webster's 2025 Word of the Year. Its workplace cousin — workslop — is shaping up to be the defining productivity problem of 2026.

Workslop is AI-generated content that looks polished and complete but is actually unhelpful, low-quality, or off the mark. It's the 2,000-word strategy memo that says nothing. The perfectly formatted slide deck with hallucinated data. The client email so generic it could have been sent to anyone.

Research from BetterUp Labs and Stanford Social Media Lab found that 41% of workers have encountered workslop in the past month. Each incident costs nearly two hours of rework. For a 10,000-person organisation, that's over $9 million per year in lost productivity.
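That $9 million figure is easy to sanity-check from the study's own numbers. BetterUp reports roughly $186 per month in lost productivity per affected employee, and 41% of workers affected. A quick back-of-the-envelope calculation:

```python
# Sanity check of the $9M/year figure, using numbers reported by
# BetterUp Labs / Stanford: 41% of workers affected, roughly $186
# per affected employee per month in lost productivity.
headcount = 10_000
affected_share = 0.41
monthly_cost_per_affected = 186  # USD, as reported by BetterUp

affected = headcount * affected_share            # 4,100 employees
annual_cost = affected * monthly_cost_per_affected * 12

print(f"${annual_cost:,.0f} per year")  # roughly $9.15M per year
```

The rounded "over $9 million" headline figure follows directly from those two inputs.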

And it's getting worse.

The Workslop Problem Is Growing

AI adoption at work has doubled since 2023, but confidence in using AI effectively has dropped 18%. That gap — more usage, less competence — is the workslop factory.

Harvard Business Review has published three major pieces on workslop since September 2025. The core finding: employees are under pressure to use AI and produce more output, but they haven't been taught how to use it well. The result is a flood of AI-generated content that shifts cognitive work from the sender to the recipient.

| Metric | Figure | Source |
| --- | --- | --- |
| Workers who've received workslop | 41% | BetterUp / Stanford |
| Managers who've received workslop | 54% | BetterUp / Stanford |
| Average rework time per incident | ~2 hours | BetterUp / Stanford |
| Annual cost per 10,000 employees | $9M+ | BetterUp / Stanford |
| Recipients who view sender as less trustworthy | 42% | BetterUp / Stanford |
| Recipients who'd avoid working with sender again | 33% | BetterUp / Stanford |
| Employees annoyed by receiving workslop | 53% | BetterUp / Stanford |

Those last rows capture something the rework figures miss. Workslop doesn't just waste time — it damages relationships. About half of employees who received workslop said they viewed the sender as less creative, capable, and reliable. Nearly one in three said they'd be less likely to want to work with that person again.

AI Is Increasing Workloads, Not Reducing Them

The workslop problem sits inside a larger paradox: AI was supposed to save time. It's doing the opposite.

An ActivTrak analysis of 10,584 users measured work patterns 180 days before and after AI adoption. The findings were stark: time spent on every administrative and collaboration category increased. Email time doubled. Messaging time rose 145%. Business management tasks increased 94%. Meanwhile the share of the workday dedicated to deep focus work fell by 9%.

There wasn't a single activity category where using AI actually saved time. The data is unambiguous: AI, as currently used, does not reduce workloads. It intensifies them.

UC Berkeley researchers warn that AI is having "the opposite effect it was supposed to." And BCG's 2026 study found that "AI brain fry" — mental fatigue from intense oversight of AI tools — is worsening burnout. Among workers reporting AI brain fry, 34% showed active intention to leave.

Meanwhile, thousands of CEOs admitted that AI had no impact on employment or productivity, resurrecting economist Robert Solow's 1987 paradox: "you can see the computer age everywhere but in the productivity statistics."

The MIT Media Lab puts it bluntly: 95% of organisations see no measurable return on their AI investments.

Why People Create Workslop

It's tempting to blame lazy employees. The research points elsewhere.

HBR's January 2026 analysis found that management is the primary driver of the workslop epidemic. Leaders pressure employees to "use AI" and "produce more" without offering training, guidelines, or quality standards. Employees comply — by pasting prompts into ChatGPT and forwarding whatever comes back.

Three factors create the workslop cycle:

1. The Skills Gap

Most employees can prompt an AI model. Far fewer can evaluate whether the output is actually good. That's the difference between AI literacy and AI fluency — knowing about AI versus knowing how to use it effectively.

Only 26% of organisations offer formal AI upskilling programmes, down from 35% a year earlier. Of those who received training, 41% describe it as "too short and too generic." The result: employees who can generate output but can't judge its quality.

2. The Pressure Paradox

Executives see AI as a productivity multiplier and expect more output. But Gallup data shows that only 12% of U.S. employees use AI daily. The gap between leadership expectation and workforce reality creates pressure to appear AI-fluent without actually being AI-fluent.

Workers respond rationally: they use AI to produce volume and hope the quality holds. Often it doesn't.

3. The Evaluation Blind Spot

AI output has a distinctive quality: it looks confident even when it's wrong. Well-formatted, grammatically polished, structurally complete — but hollow. Without training in critical evaluation, employees lack the skill to distinguish genuinely useful AI output from sophisticated-looking nonsense.

Workslop isn't a character flaw. It's what happens when organisations deploy AI tools without teaching their people when AI output is good, when it needs editing, and when it should be discarded entirely. The fix isn't a usage policy. It's a skills investment.

The Real Cost: Trust, Not Just Time

The financial cost of workslop — $9 million per year for a 10,000-person company — is significant. But the deeper cost is interpersonal.

BetterUp's research found that receiving workslop fundamentally changes how colleagues perceive the sender:

  • 53% felt annoyed
  • 42% viewed the sender as less trustworthy
  • 38% felt confused about what was expected
  • 33% said they'd be less likely to collaborate with that person
  • 22% felt offended

In organisations that depend on collaboration, credibility, and trust — which is to say, all of them — these numbers matter more than the rework hours. Workslop erodes the social fabric that makes teams function.

And it hits managers hardest: 54% of managers report receiving workslop, compared with 38.5% of individual contributors. The people who most need to trust their team's work product are the ones most frequently receiving low-quality AI output.

What Actually Fixes It

Workslop isn't a technology problem. It's a skills gap problem. The organisations that avoid it share a common approach: they invest in AI fluency before AI adoption.

Build Critical Evaluation Skills

The single most important skill for preventing workslop: the ability to look at AI output and ask, "Is this actually good?" That sounds simple. It requires training.

Critical evaluation means understanding where AI models hallucinate, recognising when output is generic versus contextualised, knowing when to edit, when to regenerate, and when to write from scratch. These aren't intuitive skills — they're AI fluency skills that develop through practice.

Make It Daily, Not Annual

A quarterly workshop on "responsible AI use" won't prevent workslop any more than a single driving lesson prevents accidents. The forgetting curve means learners lose up to 90% of new information within a week without reinforcement.

Short daily sessions — 6 to 10 minutes — achieve 80% completion rates versus 20% for long-form courses. Spaced repetition improves retention by up to 200%. Organisations with structured daily AI upskilling report strong ROI at nearly double the rate of those without.

Make It Role-Specific

A marketing manager creating workslop in campaign briefs faces different challenges than a finance analyst creating workslop in quarterly reports. Generic "AI best practices" training doesn't address either. Role-specific training delivers 40% better retention because it maps directly to the work people actually do.

Set Quality Standards, Not Usage Mandates

The worst thing a leader can do is mandate AI usage without defining quality. "Use AI more" without "here's what good looks like" is a recipe for workslop. Effective organisations set output standards — what AI-assisted work should look like, what review process it should pass, and what quality bar it needs to clear.

Organisations that pair AI tool deployment with structured, daily AI fluency training are 42% more likely to report strong AI ROI. The tools aren't the bottleneck. The skills are.

The Window to Act

AI adoption is accelerating. 91% of businesses use AI in at least one capacity. Agentic AI is adding autonomous AI agents to the mix. The volume of AI-generated content will only grow.

Without intervention, so will workslop. Every untrained employee with access to an AI tool is a potential source of polished-looking, trust-eroding, time-wasting output. The $9 million annual cost will compound as AI adoption scales.

The fix is straightforward but requires commitment: build the AI fluency that turns AI tools from workslop factories into genuine productivity multipliers. Daily practice. Role-specific skills. Critical evaluation as a core competency.

Six minutes a day is enough to build the judgment that separates useful AI output from workslop. The organisations that invest in that judgment now will be the ones where AI actually delivers on its promise.

Frequently Asked Questions

What is AI workslop?
AI workslop is AI-generated content — emails, reports, slide decks, or code — that looks polished and complete but is actually unhelpful, low-quality, or off the mark. The term was coined by researchers at BetterUp Labs and Stanford Social Media Lab. It describes output that offloads cognitive effort from the sender to the recipient, who must then spend time deciphering, correcting, or redoing the work.
How much does AI workslop cost organisations?
According to research from BetterUp Labs and Stanford, each workslop incident costs nearly two hours of rework, averaging $186 per month per affected employee. For a 10,000-person organisation, that adds up to over $9 million per year in lost productivity. The costs extend beyond rework: 42% of recipients view workslop senders as less trustworthy, and nearly one in three say they would be less likely to work with that person again.
What percentage of employees receive AI workslop?
BetterUp's September 2025 survey of 1,004 full-time U.S. desk workers found that 41% of employees have encountered workslop. Managers are disproportionately affected: 54% report receiving workslop compared with 38.5% of individual contributors. The problem is growing as AI adoption increases without corresponding training.
Why do employees create AI workslop?
Workslop is primarily a skills problem, not a laziness problem. HBR research found that management pressure to use AI and produce more output — without clear training or guidelines — is a major driver. Employees who lack AI fluency cannot distinguish good AI output from bad, don't know when to edit versus regenerate, and often default to accepting whatever the AI produces. The fix is better training, not stricter policies.
How can organisations reduce AI workslop?
Organisations reduce workslop by building AI fluency — the ability to critically evaluate, edit, and contextualise AI output. Research shows three approaches work: daily skill-building (6-10 minute sessions achieve 80% completion rates versus 20% for workshops), role-specific training (40% better retention than generic content), and teaching critical evaluation of AI output rather than just how to prompt. The goal is judgment, not just adoption.
Is AI actually reducing workloads in 2026?
No. An ActivTrak analysis of 10,584 users found that time spent on every administrative and collaboration category increased after AI adoption — from 27% to 346% depending on the task. Email time doubled and messaging time rose 145%, while the share of the workday dedicated to deep focus work fell by 9%. Fortune and UC Berkeley researchers describe this as the AI productivity paradox: the tools were supposed to save time but are instead increasing workloads because employees lack the skills to use them effectively.
What is the AI productivity paradox?
The AI productivity paradox describes the gap between AI's promise and its reality. Despite $550 billion in AI spending and near-universal adoption, 95% of organisations see no measurable return on their AI investments according to MIT Media Lab. Thousands of CEOs admitted AI had no impact on productivity, echoing economist Robert Solow's 1987 observation that "you can see the computer age everywhere but in the productivity statistics." The missing factor is workforce capability.