AI slop: the new spam problem

It’s quietly wrecking how we read, work, and trust the digital world.

On a Tuesday morning in August, marketing consultant Lina Sørensen opened her inbox to find five reports—“polished, professional, ready to send”—drafted by the AI assistant she uses. In minutes, she checked them, made a few tweaks, and forwarded them. A few days later, her colleague pushed back. “These feel hollow. Where’s the insight? The voice?”

Lina realized what had happened. She had fallen into the trap of workslop: AI-generated work that looks good but doesn’t deliver real value. And it’s not just her. Many professionals now find their days littered with content that seems competent until they scratch the surface.

That’s the real cost of AI slop—low-quality, mass-produced AI content spreading everywhere. It’s not a fringe problem. It’s infiltrating our lives—and undermining the digital world we depend on.

What “slop” really is, and why it’s spreading

The term “AI slop” describes content—text, images, audio—generated by AI with minimal effort and little originality. It’s filler, fluff, noise. Slop is digital clutter that prioritizes speed, quantity, and engagement metrics over substance.

First whispered about in tech circles, the term gained traction in 2024, when observers noticed library e-book catalogs filling with AI-written titles that no human had curated. Slop now appears across social media feeds, image banks, streaming services, and even academic publishing.

In science, the problem is especially alarming. Journals are being flooded with low-quality studies built on public datasets and simple AI pipelines. These papers sometimes make misleading or outright false claims. A recent article in Nature warned that biomedical literature is at risk of becoming polluted by AI slop masquerading as scientific breakthroughs.

At work, the slop has a name: “workslop.” Researchers from Stanford and BetterUp Labs, writing in Harvard Business Review, define it as AI-generated content that appears polished but lacks substance. They show that peers increasingly distrust such work and that productivity suffers. Their survey puts the cost of workslop at an estimated $186 per employee per month in wasted time and lost opportunity.

In music, slop is threatening creativity. On Deezer alone, more than 30,000 fully AI-generated tracks are uploaded every day, nearly a third of all new additions. Many of these tracks are never actually streamed by real listeners; they exist to game the system. Spotify has responded by requiring AI credits on music uploads, filtering out duplicates, and clamping down on impersonation.

Why it matters: the human ripple effects

Slop isn’t just a tech buzzword. It is changing how people live, work, and trust.

The erosion of trust and truth

We used to navigate information with a baseline of trust in experts, journalists, and creators. Now we must constantly ask: was this made by a human? Slop blurs the line. Deepfake videos of events that never happened go viral as real footage. AI-manufactured news items reshape narratives. One Guardian columnist warns that political “slopaganda” is now part of the news cycle.

When fake content proliferates, genuine sources lose visibility. A citizen doing research may end up buried under mountains of generic AI content before finding the real signal. That weakens accountability and amplifies disinformation.

Creativity under attack

Independent artists, writers, and composers now compete with automated content farms that churn out cheap imitations at mass scale. The human voice is starting to be drowned out. Lina, the marketing consultant, told me she’s started charging more for custom strategy, because “anyone can push a button to write a memo—but few can give context, nuance, risk insight.”

In music, AI tracks flooding streaming services threaten royalty revenue. Platforms are wrestling with whether to suppress or label slop, and with how to reward human creators meaningfully.

Workplaces clogged with shallow work

Slop doesn’t just waste individual time; it burdens whole teams. When someone sends you an AI-generated report that is thin on reasoning, you have to take the time to rebuild that reasoning yourself. Over weeks, this drains trust. The HBR/Stanford study found that people judged slop senders as less creative, less capable, and less reliable. One project manager admitted she now double-checks every AI draft, even the “final” ones.

Beyond reputation, there’s money. Multiply $186 per employee per month across a mid-sized company and you’re dealing with six-figure annual losses in opportunity cost.
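Where do those six figures come from? Here is the back-of-the-envelope math as a minimal Python sketch; the $186 figure is the study’s, while the 200-person headcount is my illustrative assumption:

```python
# Back-of-the-envelope workslop cost.
# $186/employee/month is the HBR/Stanford estimate; the headcount is hypothetical.
COST_PER_EMPLOYEE_PER_MONTH = 186   # USD, from the study cited above
EMPLOYEES = 200                     # hypothetical mid-sized company
MONTHS = 12

annual_loss = COST_PER_EMPLOYEE_PER_MONTH * EMPLOYEES * MONTHS
print(f"Estimated annual loss: ${annual_loss:,}")  # -> Estimated annual loss: $446,400
```

Six figures at 200 people; the same arithmetic crosses into seven figures by the time a company reaches a thousand employees.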

The downward spiral in AI itself

Here’s a twist: slop feeds itself. AI models trained on low-quality AI content degrade over time, losing diversity and subtlety. Successive layers of AI paraphrasing its own output create a “telephone game” effect where meaning warps. In other words, slop begets more slop—reducing the overall intelligence of new generations of models.
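You can watch a cartoon version of this in a few lines of Python. This is a toy sketch of the feedback loop, not any lab’s actual methodology: a one-dimensional Gaussian stands in for a generative model, and each “generation” is refit only on samples drawn from the one before it. The diversity of the output, measured by the fitted standard deviation, steadily shrinks:

```python
# Toy "model collapse" loop: each generation is fit only to samples produced
# by the previous generation. A 1-D Gaussian stands in for a generative model;
# every parameter here is illustrative, not taken from any published study.
import numpy as np

rng = np.random.default_rng(42)
n_chains, n_samples, n_generations = 200, 50, 100

means = np.zeros(n_chains)  # every chain starts at the "human" data: N(0, 1)
stds = np.ones(n_chains)

for gen in range(1, n_generations + 1):
    for c in range(n_chains):
        synthetic = rng.normal(means[c], stds[c], n_samples)   # model's own output
        means[c], stds[c] = synthetic.mean(), synthetic.std()  # refit on that output
    if gen % 20 == 0:
        print(f"generation {gen:3d}: average fitted std = {stds.mean():.3f}")

# The average fitted std falls generation over generation: small estimation
# errors compound, and the distribution narrows toward a single point.
```

Real language models are far more complex, but researchers studying recursive training describe the same dynamic: rare, distinctive material drops out first, and each generation inherits a slightly narrower world than the last.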

What’s changing, and what’s still at stake

Some companies are pushing back. Spotify now requires AI credits in music uploads and enforces stricter impersonation rules. Deezer tags and filters AI tracks, blocking them from editorial playlists. But these are defensive moves. They don’t yet shift the incentives behind slop.

Platforms face a delicate balance: clamp down too hard and they risk stifling innovation. Let slop run rampant and they lose user trust. In a way, we are witnessing a crisis of information curation—like spam in the early Internet era, but far more insidious.

Meanwhile, creators push for transparency: label AI works, watermark them, incentivize quality. In education and science, stronger peer review and data checks are proposed as bulwarks against bogus AI papers.

Why you should care—but also what you can do

You see slop every time you scroll a feed, click through search results, or open an email or an article. It weakens your confidence that what you read is true. It wastes your time. And it penalizes creators who put in real effort.

But it is not too late to respond. Demand disclosure: when content is AI-generated, label it. Reward human-crafted work. Be skeptical, not cynical. Don’t treat AI as authority—treat it as a tool. Scrutinize the output, check sources, ask: what was lost?

Because if we let slop dominate, we won’t just lose quality content. We will lose trust in content, creators, and in each other.
