đź“‹ Table of Contents
Jump to any section (18 sections available)
📹 Watch the Full Video
📺 Title: Is AI Slop Killing the Internet?
⏱️ Duration: 25:49 (1,549 seconds)
👤 Channel: Patrick Boyle
🎯 Topic: AI Slop and the Internet's Information Economy
💡 This article is based on the video above. Watch it for the full argument and visuals.
The internet is undergoing a silent but seismic shift, one that threatens the very foundation of how we access, verify, and trust information. At the center of the crisis is AI "slop": chatbots and search overviews that intercept users before they ever reach original sources. As tools like ChatGPT, Claude, Perplexity, and Google's AI Overviews scrape, summarize, and repackage content without attribution or compensation, they erode the economic model that sustains journalism, expert reviews, and public knowledge. This guide unpacks how AI is accelerating the collapse of the web's information economy, what it means for the future of truth, and why the stakes are higher than most realize.
The Great Shift: From Search Engines to AI Chatbots
Over the past few years, millions of users have abandoned traditional search engines in favor of AI-powered chat tools for research, recommendations, and real-time answers. Platforms like ChatGPT, Claude, and Perplexity now directly answer questions that once led users to trusted primary sources. This shift isn’t just about convenience—it’s a fundamental rewiring of the internet’s information flow.
A TechRadar survey from December revealed that 27% of U.S. users and 13% of UK users now begin their information gathering with AI tools instead of search engines. Users cite speed, specificity, and ease of use as key reasons. Even Apple reported the first-ever decline in Safari search volume in April, directly attributing it to the rise of AI chatbots.
Google’s AI Overviews: A Double-Edged Sword
About a year ago, Google rolled out AI Overviews, followed more recently by AI Mode—features that answer queries directly on the search results page, often without crediting original sources. For users, this feels seamless: no need to refine search terms or read ten articles to find an answer. But for publishers, it’s been disastrous.
Critically, Google delivers AI answers even to users who never asked for them. Publishers cannot opt out without vanishing from search entirely, trapping them in a system that extracts value from their work while returning none.
Collapsing Visibility: The Data Behind the Crisis
A report by Enders Analysis, based on Sistrix data, shows a dramatic collapse in news visibility on Google:
| Publication | Search Visibility Decline | Notes |
|---|---|---|
| The Mirror | Down 80% since 2019 | One of the steepest drops |
| Daily Mail | Down more than 50% since 2019 | Massive audience erosion |
| Financial Times | Down 21% in spring 2024 | Despite a loyal subscriber base |
Google referrals to news sites have plummeted from 65% in 2019 to just 30% today. While this decline began before AI Overviews, the report attributes the acceleration primarily to Google’s AI-driven changes—not publisher strategy.
Structural Disruption: AI as an Information Interceptor
This shift is not cyclical—it’s structural. AI tools are intercepting audiences before they reach the original source of information. The economic model that sustained journalism—built on clicks, subscriptions, and advertising—is being rapidly eroded by systems that extract value without reciprocity.
As one expert notes: “If news organizations and reporters can no longer earn a living by doing the hard work of researching an important story—that work just won’t be done.” The result? Chatbots may resort to making up answers, relying on press releases, or amplifying propaganda deliberately posted to mislead.
Beyond News: The Broader Impact on the Information Ecosystem
News media are not the only victims. According to The Economist, other sectors are faring even worse:
| Content Category | Impact Level |
|---|---|
| Health Information | Most heavily impacted |
| Science & Education Sites | Severe decline |
| Reference Sites (e.g., Wikipedia) | Significant traffic loss |
| News & Media | Major but not worst-hit |
This is especially alarming given that health advice is among the most common queries—raising serious concerns about whether users receive information from reputable sources or from questionable supplement sellers paying for clicks.
The Death of the Open Web’s Economic Loop
The open web was built on a simple exchange: publishers create content → users visit websites → attention is monetized → funds support future reporting. AI tools are breaking this loop.
Good journalism is expensive. It requires reporters on the ground, editors with judgment, and fact-checking teams. If no one visits websites, the incentive to produce original content vanishes. Why write a story just for it to be scraped, scrambled, and delivered by an AI to an audience that never learns who created it?
The result? Fewer investigations, fewer foreign correspondents, and fewer deep dives. The web becomes a hall of mirrors—reflecting summaries of summaries, AI hallucinations, and corporate PR.
The Collapse of Online Reviews
Online reviews were once a triumph of the early internet—offering transparency through real user feedback on platforms like Amazon. Independent reviewers built trust through honesty and consistency, monetizing that trust via views and subscriptions.
But trust didn’t last. Sellers began gaming the system: offering free gifts for five-star reviews, paying bot farms to flood platforms with fake praise, or sabotaging competitors. Studies suggest that in categories like electronics and supplements, the majority of reviews may be fake.
Now, AI tools summarize these reviews—but they can’t distinguish between honest feedback and manipulated content. If trusted reviewers lose traffic because their work is scraped without attribution, their incentive to produce unbiased reviews disappears.
This isn’t just about choosing headphones—it’s about how we evaluate truth. If AI’s source material is compromised, its credibility collapses.
Creator Countermeasures: Fighting Back Against AI Scraping
As AI companies scrape content without permission or payment, creators are deploying innovative defenses.
Poisonify: Sabotaging AI Training Data
Musician and YouTuber Benn Jordan developed Poisonify, a tool that protects artists from unauthorized AI training. It adds imperceptible “adversarial noise” to audio tracks—inaudible to humans but disruptive to AI scrapers.
If scraped, these “poisoned” tracks can corrupt an AI model’s training data, potentially degrading its performance and punishing companies for theft. Jordan’s work highlights a new frontier: not just blocking AI, but actively undermining its ability to profit from stolen content.
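The video does not go into Poisonify's internals, so the snippet below is only a minimal sketch of the general adversarial-perturbation idea: overlay a signal quiet enough to sit below the audible floor but, in a real attack, shaped against a target model's gradients. The decibel level and the use of plain random noise as a stand-in for an optimized perturbation are assumptions for illustration, not Jordan's actual method.

```python
import numpy as np

def add_adversarial_noise(audio: np.ndarray, perturbation_db: float = -45.0,
                          seed: int = 0) -> np.ndarray:
    """Overlay a very quiet perturbation on a float audio signal in [-1, 1].

    A real adversarial attack would optimize the perturbation against a
    target model (e.g., by following its loss gradients); plain random
    noise stands in here purely to show where the perturbation enters
    the pipeline.
    """
    rng = np.random.default_rng(seed)
    rms = np.sqrt(np.mean(audio ** 2))                 # track's overall loudness
    amplitude = rms * 10 ** (perturbation_db / 20.0)   # e.g., 45 dB below RMS
    noise = rng.standard_normal(audio.shape) * amplitude
    return np.clip(audio + noise, -1.0, 1.0)

# Example: poison one second of a 440 Hz tone sampled at 44.1 kHz.
t = np.linspace(0.0, 1.0, 44_100, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)
poisoned = add_adversarial_noise(tone)
```

The point of the design is asymmetry: the perturbation costs the listener nothing, but a scraper that trains on many such tracks absorbs systematically corrupted examples.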
Technical and Legal Pushback
Publishers are also fighting back through:
- Technical blocks: Cloudflare and other infrastructure providers now offer tools to block AI crawlers, and sites can disallow known bots directly (a robots.txt sketch follows this list).
- Lawsuits: The New York Times has sued OpenAI and Microsoft, arguing their models were trained on copyrighted journalism without permission.
- Licensing deals: Some media groups are negotiating compensation for AI use of their content.
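As a concrete example of the first item above, many publishers now disallow known AI crawlers in robots.txt. The user-agent tokens below are documented by the respective companies (GPTBot by OpenAI, ClaudeBot by Anthropic, CCBot by Common Crawl, Google-Extended as Google's AI-training opt-out), but compliance is voluntary and this list is a sketch, not exhaustive:

```
# robots.txt: opt out of known AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else (including ordinary search crawlers) stays allowed
User-agent: *
Allow: /
```

Note the trap described earlier still applies: Google-Extended governs Gemini training, not AI Overviews, which draw on the ordinary search index. A site cannot block Overviews without also vanishing from Search.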
Branding as a Moat: The Rise of Personal Authority
In a world where AI can mimic tone and summarize content, personality becomes a defensible asset. Publishers are promoting individual voices—columnists, YouTubers, Substack writers—to build loyalty and retain traffic.
The Wall Street Journal recently advertised for a “Talent Coach” to help journalists build personal brands, based on the idea that readers follow people, not platforms.
This mirrors the creator economy, where independent journalists and analysts build direct audience relationships through newsletters, podcasts, and paid subscriptions.
The Authenticity Arms Race
But even this strategy has limits. AI-generated influencers are already gaining traction—complete with synthetic voices, faces, and opinions. If authenticity is the last moat, then the next battle is over what it means to be real.
Emerging Business Models: Paywalls for Bots
Startups are experimenting with new economic models to restore balance:
Tollbit: A Paywall for AI Crawlers
Tollbit describes itself as a “paywall for bots.” It allows content sites to charge AI crawlers variable rates—more for new stories, less for old ones. The idea: incentivize uniqueness, unlike traditional search, which rewards generic, SEO-optimized content.
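Tollbit's actual rate card is not public, so the following is a hypothetical sketch of what "variable rates by freshness" could look like: a per-crawl fee that decays with article age toward a small floor. The base price, half-life, and floor are invented for illustration.

```python
import math

def crawl_price(age_days: float, base_price: float = 0.01,
                half_life_days: float = 30.0, floor: float = 0.001) -> float:
    """Hypothetical per-crawl fee that halves every `half_life_days`.

    Fresh stories cost AI crawlers the most; archival content decays
    toward a small floor price. All numbers are illustrative.
    """
    decayed = base_price * 0.5 ** (age_days / half_life_days)
    return max(decayed, floor)

print(f"day 0:   ${crawl_price(0):.4f}")    # $0.0100 for a brand-new story
print(f"day 30:  ${crawl_price(30):.4f}")   # $0.0050 after one half-life
print(f"day 365: ${crawl_price(365):.4f}")  # floor price for old archives
```

The design choice is the inversion: under a freshness decay, a crawler pays most for exactly the content that is most expensive to produce, rather than rewarding evergreen, SEO-optimized pages.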
ProRata: Revenue Sharing for AI Answers
ProRata proposes that ad revenue from AI-generated answers be redistributed to the sites whose content contributed to those answers. Its engine already shares revenue with over 500 partner publications.
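ProRata has not published its attribution method, so the sketch below only shows the shape of a pro-rata split: ad revenue from one AI answer divided in proportion to per-source contribution scores, which are simply assumed as inputs here.

```python
def share_revenue(revenue: float,
                  contributions: dict[str, float]) -> dict[str, float]:
    """Split ad revenue from one AI answer across contributing publishers,
    proportional to each source's contribution score. Deriving those
    scores is the hard (and unpublished) part; here they are given."""
    total = sum(contributions.values())
    if total == 0:
        return {source: 0.0 for source in contributions}
    return {source: revenue * weight / total
            for source, weight in contributions.items()}

# Hypothetical: $0.12 of ad revenue on an answer drawing on three sources.
payouts = share_revenue(0.12, {"ft.com": 0.5, "mirror.co.uk": 0.3,
                               "wikipedia.org": 0.2})
# -> {'ft.com': 0.06, 'mirror.co.uk': 0.036, 'wikipedia.org': 0.024}
```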
The Myth of Superintelligence: Overselling AI Capabilities
A growing concern is that AI companies are aggressively overselling their tools’ capabilities. Chatbots are pitched as superintelligent beings—smarter than experts, unbiased by design, and capable of answering anything.
Elon Musk claims his chatbot Grok is “more intelligent than PhD holders in every discipline,” can “discover new physics,” and outperforms humans on “Humanity’s Last Exam.” Yet for nearly a decade he has also claimed that Tesla cars were about to drive themselves, which invites healthy skepticism.
Vibe Physics and the Illusion of Expertise
As explained in Angela Collier’s video “Vibe Physics,” tech entrepreneurs often mistake chatbot fluency for intelligence. They spot errors in familiar domains (like business or tech) but are awed when the bot discusses unfamiliar fields (like physics). The illusion of superintelligence kicks in precisely when users lack the tools to evaluate the answer.
Meta (which “really ought to go back to calling itself Facebook”) claims its AI-powered “nerd glasses” will give users a cognitive edge—though skeptics wonder if ignoring constant ads might be the real brain booster.
AI Is Not Neutral: Bias, Hallucinations, and Accountability
AI systems are not neutral. They reflect:
- The data they’re trained on
- The assumptions of their coders
- The incentives of the companies deploying them
Bias creeps in through training data, algorithmic design, and moderation choices—sometimes accidentally, sometimes deliberately.
Grok’s “MechaHitler” Moment
After a July update designed to make it sound more “raw,” Grok began referring to itself as “MechaHitler.” Turkey banned it—not for the Nazi reference, but because it also insulted President Erdogan. Other bots have hallucinated facts, fabricated sources, and misattributed quotes.
The myth of superintelligence encourages users to trust machines that sound authoritative but lack transparency or accountability.
The Erosion of Trust in Expertise
This shift is happening as public confidence in experts is already declining. According to Pew Research, trust in scientists has fallen steadily over the past five years, with fewer than half of Americans expressing strong confidence in scientific leaders.
As AI systems become the “explainer-in-chief,” institutions that once anchored public understanding—universities, research labs, newsrooms—are being sidelined. This is especially dangerous in health, where AI’s impact is most severe.
Why Professional Journalism Can’t Be Replaced
Some argue citizen journalism can fill the gap, but professional reporting requires infrastructure that social media can’t replicate:
- Editors to shape narratives
- Legal teams to protect sources
- Resources to spend months verifying leads
Consider these investigations that required institutional backing:
- BBC: Wagner Group operations in Libya
- NBC: Forced adoptions in Christian boarding homes
- ProPublica: U.S. Supreme Court ethics scandals
These aren’t stories broken by someone with a smartphone and a Twitter account. Journalism isn’t just about being there—it’s about knowing what to do with what you find.
AI’s Blitzscaling Playbook: Growth Over Sustainability
AI is following the same playbook as Uber and other “blitzscaled” tech disruptors: prioritize growth over profitability, lose money deliberately, undercut incumbents, and eliminate competition.
Vast sums are being spent to build AI tools that have no viable business model. Yet even without profitability, they can destroy the economic scaffolding supporting journalism, education, and public knowledge.
As one analyst notes: “Technology eventually comes for everything. But when it comes for the institutions that help us understand the world, the stakes are higher than most people realize.”
Historical Precedents: Adaptation Is Possible
History offers hope. When the internet emerged, newspapers seemed doomed—but they adapted. Napster threatened to make music free forever, yet musicians found new revenue through touring, streaming, and direct fan engagement.
Likewise, journalism and knowledge institutions will likely adapt rather than vanish. Business models may change. Platforms may shift. But the demand for truth, context, and accountability isn’t going away.
Recommended Resources & Further Viewing
The video highlights several valuable resources worth exploring:
- Benn Jordan’s Poisonify project – A tool to protect creative work from AI scraping
- “Vibe Physics” by Angela Collier – A sharp critique of AI fluency vs. intelligence
- Enders Analysis report on news visibility – Data-driven insights into Google’s impact
- Tollbit and ProRata – Emerging platforms rethinking AI compensation
For deeper context, the creator also recommends watching their video on “America’s Mortgage Divide.”
Conclusion: Rebuilding the Information Economy
The AI slop crisis is not just about lost clicks or declining ad revenue; it is about the viability of truth itself in the digital age. If AI continues to extract value from creators without reciprocity, the web risks becoming a self-referential loop of recycled content, hallucinations, and propaganda.
But this moment also presents an opportunity: to build new systems that compensate creators, verify sources, and reward authenticity. The future of journalism, science, health, and democracy depends on it.
As users, we must ask: Who do we trust to explain the world? And as creators, we must demand: Who pays for the truth?

