TLDR: AI-generated slop and bot activity are flooding social media, making the "dead internet theory" feel a lot less far-fetched.
Maybe AI Slop Is Killing the Internet, After All

The assertion that bots are choking off human life online has never seemed more true.
Fil Menczer caught his first whiff of what he calls “social bots” in the early 2010s. He was mapping how information travels on Twitter when he stumbled onto a few clusters of accounts that looked a little suspicious. Some of them shared the same post thousands of times. Others reshared thousands of posts from each account. “These are not human,” he remembers thinking.
So began an extensive career in bot watching. As a distinguished professor of informatics at Indiana University at Bloomington, Menczer has studied the way bots proliferate, manipulate human beings and turn them against one another. In 2014 he was part of a team that developed the tool BotOrNot to help people spot fake accounts in the wild. He’s now regarded as one of the internet’s preeminent bot hunters.
If anyone is predisposed to notice the automatons among us, it's Menczer. A few years ago, when a hypothesis known as the dead internet theory started kicking around, positing that nearly all conversations online had been replaced by artificial-intelligence-generated chatter, he wrote it off as bunk. Now, though, the generative AI boom, with its chatbot boyfriends and AI influencers, is inspiring Menczer to see the theory in a new light. He still doesn't take the idea literally, but he is, as they say, beginning to take its underlying message seriously. "Am I worried?" he asks. "Yes, I'm very worried."
The dead internet theory became popular in 2021, following a post from a user named IlluminatiPirate on an obscure online forum. IlluminatiPirate argued that the internet had become a vast, inhuman wasteland, filled with algorithmically optimized copycat posts. The theory blamed the entire thing on a covert government conspiracy, which made it easy to dismiss. But the arrival of tools such as ChatGPT and Midjourney has made it look downright prophetic.
Social media feels weirder. Search feels worse. Entire AI-generated news networks have sprung up overnight. Meta Platforms Inc. envisions a future where AI is involved in the creation of a substantial share of the posts on Facebook and Instagram. Sites such as Wikipedia are straining under the weight of AI crawlers that root around their pages, searching for fresh information to feed their models.
All of this feeds a feedback loop, in which AI-generated content is produced to please AI-powered recommendation systems, threatening to turn humans into bystanders.
Last year, Renée DiResta, a leading misinformation researcher, and Josh Goldstein, a research fellow at Georgetown University, set out to study the use of AI-generated content in spam and scams. They zeroed in on more than 100 Facebook pages loaded with dozens of AI images each, which together had millions of followers. Some included fake photos of miniature cows that directed followers to scammy sites where they could supposedly buy them. Others included idyllic images of tiny homes and log cabins, which drove people to ad-filled websites.
These efforts follow in a long tradition of creating so-called content farms to make money from digital ads. With generative AI, the process of stocking those farms has become a lot more efficient. Not only that, but ad industry research shows that generative AI is making it easier for bots to simulate authentic user activity, making it look like real people are clicking on those ads.
In their paper, DiResta and Goldstein identified many of the Facebook pages by the copied-and-pasted captions they shared. “This is my first cake! I will be glad for your marks,” read one caption on at least 18 different images of 18 different AI-generated people posing with 18 different cakes. The pages attracted human followers, who often weren’t in on the act. Even more baffling were the hundreds of thousands of likes and heart and hug reactions on an AI image of Jesus depicted as a crab, part of a peculiar but sizable niche of Christ-as-crustacean-themed AI imagery. Low-quality AI art has become prevalent enough online that observers have given this kind of content its own name: slop.
In some cases, the motivation behind slop isn’t simply commercial. Russian disinformation network Pravda, for example, has published millions of articles on hundreds of newly created websites since Russia invaded Ukraine, perhaps in an attempt to manipulate the AI models themselves by churning out staggering amounts of propaganda designed for AI crawlers to ingest. Recently the media watchdog NewsGuard found references to those sites turning up in answers generated by leading chatbots.
The generative AI tools for creating industrial-scale slop came along at an opportune time, just as social platforms were shifting away from recommending posts by people’s family and friends to promoting content from users they didn’t follow. This helped random accounts spread slop farther than it might have gone back when social media was more, well, social. Sure enough, the more DiResta interacted with these pages, the more slop she saw. “The content wasn’t just being created, it was being recommended,” says DiResta, who’s now an associate research professor at Georgetown. “The machines are helping it find us.”
Slop does sometimes appeal to humans. It can be bizarre or gruesome enough to persuade people to linger; at times content that doesn’t have to operate within the bounds of reality is genuinely more cute or captivating than scenes of our corporeal world. “If you limit yourself to just things that actually happened or jokes humans came up with, that is a predefined pool of content,” says Jeff Allen, co-founder and chief research officer at the trust and safety think tank Integrity Institute. “AI enlarges that pool.” But content generated and promoted by AI also operates like an invasive species, spreading so rapidly that it negatively affects the internet’s other inhabitants. “This is like the algae bloom that can blow up and suffocate the life you would want to have in a healthy ecosystem,” Allen says.
In February, OpenAI reported on some “malicious uses” of its models. In one, a fake Ghanaian youth organization used AI-generated articles and comments to try to swing the country’s 2024 election. In another, dozens of accounts potentially tied to North Korean cybercriminals landed real jobs at Western companies using AI-generated résumés, AI-generated cover letters and even AI-generated personas posing as their references. They used OpenAI’s tools to talk their way through job interviews and, after securing jobs, to explain to co-workers why they never took video calls. (OpenAI says, unsurprisingly, that its policies strictly prohibit this kind of fraud.)
Another problem is the sheer scale of the web-scraping effort AI companies use to gather data for their models. According to Tollbit, a company that helps publishers get compensated when their sites are scraped, the volume of scrapes per site doubled from the third to the fourth quarter of last year. When former President Jimmy Carter died, Wikimedia’s services were temporarily slowed after being hit with a surge of traffic from scrapers accessing a video from a 1980 debate. “Our infrastructure is built to sustain sudden traffic spikes from humans during high-interest events, but the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs,” the foundation wrote in a blog post.
Some publishers are responding by striking deals with AI companies that will pay for access to content or by throwing up paywalls to deflect crawlers. This trend could undermine the very idea of a free and open web, warns Shayne Longpre, a Ph.D. candidate at the Massachusetts Institute of Technology and lead of the Data Provenance Initiative. “The average consumer is going to find it harder to access certain information without paying, or they’re going to have to subscribe to certain AI bots to get the access to that information,” he says. Meanwhile, “smaller web publishers might be left out of the conversation.”
The transition to a chatbot-powered internet could also threaten the internet's biggest players. The most obvious case may be Alphabet Inc.'s Google, whose search engine is built on pointing users toward other sources of information. The company has begun featuring chirpy summaries called AI Overviews. Beyond offering up occasionally dubious advice, Overviews may make it harder for humans to run websites other humans will click on. As Bloomberg Businessweek has reported, some online publishers have seen traffic plummet, and they mainly blame AI. (Google rejects this explanation, saying there are many reasons that sites gain or lose traffic and that AI Overviews are "creating new opportunities to connect people to web content.")

The rise of the AI-powered internet is highlighting how the tech giants' agendas are often at odds with their users' interests. In a race to dominate the AI-driven future, they're accelerating the shift, whether the world is ready or not. Do Facebook users really want to see AI accounts mingling with posts from old classmates and distant relatives, as company execs envision? Who knows? But if it keeps real people glued to its apps at a time when humans are posting less frequently, Meta's more than willing to find out.
It’s not hard to envision the dystopian endpoint where all of these trends converge, Allen argues. In a world where real people can no longer make enough from digital advertising to sustain their websites, and their posts can’t cut through the AI-generated din on social media, the dead internet wins. Even worse, research suggests that when AI models train on AI-generated content, they can collapse. Without any new human-made stuff online, Allen says, “the internet kind of dies.”
Menczer isn’t buying the doomsday scenario quite yet. If tech companies allow their products to devolve into low-rent breeding grounds for bots, he argues, humans will eventually turn their attention elsewhere. “If the signal-to-noise ratio is so low that it’s basically crap, then people will stop using it,” he says. Tech companies aren’t about to let that happen.
That may be true. Then again, one of the most widely viewed posts on Facebook last year showed a neat room with a giant fan built into the headboard of a midcentury modern bed frame, looming over a bare mattress; it drew about 179,000 reactions. “Finally found the perfect bed,” the author of the post wrote, “bet I wouldn’t sweat at night with this thing!” The post had some of the characteristics of AI slop—the fan’s grill looked slightly warped, and identical posts were shared by other users—but there was no way to tell for sure. And that’s precisely the point.