ZeroSlop: Why I Built the 'SponsorBlock' for AI Slop on X

Written by woodrock | Published 2026/03/17
Tech Story Tags: generative-ai | ai-detection | twitter-bots | open-source | browser-extension | social-media | dead-internet-theory | zeroslop

TL;DR: Tired of AI-generated "slop" ruining your Twitter (X) timeline? ZeroSlop is an open-source, community-powered browser extension that acts like SponsorBlock for AI spam. By using context-aware detection (scanning entire threads or profiles) and a shared Firebase registry, the community can detect, auto-hide, and even "gamify" the hunting of AI bots with automated Wanted posters.

The Timeline is Overwhelmed

Is your Twitter (X) timeline overflowing with AI-generated bot content? Videos? Memes? Text? Scams? Self-help gurus? Are you drowning in a sea of unfulfilling content that makes your soul want to scream? If yes, know that you are not alone. I have found myself in a very similar situation. The difference is, I decided to do something about it.

Ever since the advent of large language models, generative AI, and applications like ChatGPT, the AI bots on Twitter have slowly transitioned from obvious spam to sophisticated "slop" that mimics human engagement to farm impressions. In fact, US publisher Merriam-Webster named "slop" its word of the year for 2025. Social media is drowning in AI-generated slop, disguised as videos, images, and text in tweets on our very own Twitter (X) timelines. This is the Dead Internet Theory in practice: a conspiracy theory asserting that, since around 2016, the internet has consisted primarily of bot activity. While I rarely condone conspiracy theories, this one is not entirely wrong; maybe just one year off.

Ever since the release of the Google paper "Attention Is All You Need" (2017), which introduced the Transformer architecture, and the subsequent Generative Pre-trained Transformer (GPT) series from OpenAI, free public access to slop-generating AI has lowered the barrier to entry for engagement farming on Twitter. The problem is that Twitter (X) currently rewards volume, which perfectly incentivizes high-frequency, AI-generated posting.

The Solution: ZeroSlop - A Community-Powered Shield

The solution to this AI-generated slop hellscape that we call Twitter (X) nowadays is ZeroSlop, a community-powered shield that protects the user from AI-generated slop. We use decentralized intelligence to detect slop, moving from individuals fighting an uphill battle on their own to a shared community registry (think "SponsorBlock for AI"). The project is free, and will remain free, to the end user: open-source, non-profit, and dedicated to keeping our social media timelines human. To get started, follow the installation instructions on the ZeroSlop website, or install the Chrome extension directly from the Chrome Web Store.

Note that while the Chrome extension itself is free, we detect AI-generated slop using an API that does cost money. To become a Slop Bounty Hunter and run detections yourself, you will have to take out a monthly subscription or buy credits on the detection API's website. Hold your horses, though; when I said free to the end user, I meant it. The average user gains protection from AI slop on their timeline for free, with free access to the cloud database hosting the Slop Registry. In plain English: you don't have to pay to see other people's slop detections on Twitter.

Technical Deep-Dive: How it Works

Now for a technical deep-dive into how ZeroSlop detects AI-generated slop. "DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature" (2023) is an excellent paper that explains in great technical detail how to detect text generated by a large language model. For those of you who would rather skip the mathematics, here is an intuitive analogy:

Think of an AI's favorite sentences as resting on the sharpest peaks of high mountains. Slightly change the words in an AI-generated sentence and you fall steeply down the mountainside (high curvature). Human writing, by contrast, lies on flatter, rolling ground, where changing words barely changes the elevation. By slightly altering suspect text, a detector can check for a sharp drop in the probability "score" and infer whether an AI was sitting on that peak.
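That curvature test can be sketched in a few lines. This is not ZeroSlop's production pipeline (which calls a commercial detection API); it is a toy sketch with a pluggable `log_prob` scoring function, and a crude word-swap `perturb` where the DetectGPT paper uses a T5 mask-and-fill model:

```python
import random

def perturb(text, rng):
    """Swap two random words: a crude stand-in for the T5
    mask-and-fill perturbations used in the DetectGPT paper."""
    words = text.split()
    if len(words) < 2:
        return text
    i, j = rng.sample(range(len(words)), 2)
    words[i], words[j] = words[j], words[i]
    return " ".join(words)

def curvature_score(text, log_prob, n_perturbations=20, seed=0):
    """DetectGPT-style statistic: log p(original) minus the mean
    log p of perturbed copies. A large positive value means the text
    sits on a sharp probability peak (likely AI-generated); a value
    near zero suggests the flatter landscape of human writing."""
    rng = random.Random(seed)
    original = log_prob(text)
    perturbed = [log_prob(perturb(text, rng)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)
```

With a real language model supplying `log_prob`, the score is thresholded to flag a tweet as probable slop; with a flat (human-like) landscape, perturbations barely move the score and the statistic stays near zero.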

One issue we run into when detecting AI-generated text is the sample size of the corpus—i.e., the number of words in the text we are trying to analyze. An AI-generated text detection algorithm needs a decent chunk of text, usually a couple of paragraphs, in order to make an accurate prediction. Why? The short answer is statistical reliability. In the world of data, a short string of text simply doesn't give the detector enough evidence to separate a real pattern from random chance. Going back to our mountain analogy: imagine you are blindfolded and take exactly one step. If your foot drops down a few inches, it's hard to tell what kind of terrain you are on. You might have just stepped into a small pothole on an otherwise flat human plain, or you might be stepping off a massive AI mountain peak. To confidently know the true shape of the landscape, you need to walk around in multiple directions. Similarly, a single human sentence might coincidentally trigger a sharp probability drop just by chance. A longer passage gives the detector enough words to tweak, test, and average out, revealing the true, overall "shape" of the text.
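A quick simulation makes the sample-size point concrete. Under a toy assumption that human text has true curvature zero with independent per-word measurement noise, the averaged score's spread shrinks roughly as one over the square root of the word count:

```python
import random
import statistics

def noisy_curvature(n_words, rng):
    """Toy model: human text has true curvature 0, but each word
    contributes independent Gaussian measurement noise. The detector
    averages over however many words it is given."""
    return sum(rng.gauss(0.0, 1.0) for _ in range(n_words)) / n_words

rng = random.Random(42)
short_scores = [noisy_curvature(10, rng) for _ in range(1000)]   # one short tweet
long_scores = [noisy_curvature(200, rng) for _ in range(1000)]   # a whole thread

# A 200-word scan is roughly 4.5x less noisy than a 10-word one,
# so it is far less likely to cross a detection threshold by accident.
print(statistics.stdev(short_scores), statistics.stdev(long_scores))
```

This is the "one blindfolded step versus walking around" intuition in numbers: more words, less chance of mistaking a pothole for a mountain.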

Because of this, we implement context-aware aggregation in two ways: you can scan a user's entire Twitter (X) profile, or an entire thread. This increases the sample size of text the slop detector uses to make its prediction, making the prediction more accurate and its result more reliable. If you see a suspicious thread on Twitter, you can scan it. If you see a short tweet that comes across as slop but would not provide enough text to scan on its own, you can scan the author's profile instead. Smart, right?
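A minimal sketch of that aggregation step; the helper name and the 50-word minimum are illustrative assumptions, not the extension's actual cutoff:

```python
def aggregate_texts(tweets, min_words=50):
    """Concatenate the tweets from one thread or one profile so the
    detector has enough text for a reliable verdict. Returns None when
    even the combined text is too short to scan."""
    combined = " ".join(t.strip() for t in tweets if t.strip())
    if len(combined.split()) < min_words:
        return None
    return combined
```

The combined string is then handed to the detection API as a single sample, rather than scoring each tweet in isolation.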

That brings us to another hurdle: can one person fight the tsunami of oncoming AI-generated slop? The short answer is no, which left me incredibly unsatisfied. The long answer, which I had to build into existence, is that we need a community of Slop Bounty Hunters to stop the slop. To do this, we use a Firebase Firestore database as a shared registry of AI slop tweets. Users of the extension get badges that clearly indicate when a tweet has been identified as AI-generated slop, a community-driven voting system to moderate false positives and confirm genuine detections, and an auto-hide feature that removes detected tweets from your timeline automatically.
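The registry logic can be modeled roughly as follows. The field names, thresholds, and the in-memory dict standing in for the Firestore collection are all illustrative assumptions, not ZeroSlop's real schema:

```python
from dataclasses import dataclass

@dataclass
class SlopEntry:
    tweet_id: str
    score: float      # detector's AI probability, 0.0-1.0
    upvotes: int = 0  # "yes, this is slop" confirmations
    downvotes: int = 0  # false-positive reports

    def verdict(self, threshold=0.8, min_votes=3):
        """Auto-hide a tweet only when the detector is confident,
        unless enough community votes override the detector."""
        if self.upvotes + self.downvotes >= min_votes:
            return self.upvotes > self.downvotes
        return self.score >= threshold

registry = {}  # tweet_id -> SlopEntry; stands in for the Firestore collection

def report_slop(tweet_id, score):
    registry[tweet_id] = SlopEntry(tweet_id, score)

def vote(tweet_id, is_slop):
    entry = registry[tweet_id]
    if is_slop:
        entry.upvotes += 1
    else:
        entry.downvotes += 1
```

The key design choice is that community votes, once there are enough of them, outrank the raw detector score, which is how false positives get moderated away.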

Gamifying the Hunt: The Wanted Poster

But if it costs money to be a Slop Bounty Hunter, why would I do it? An excellent question. I have (attempted) to gamify the experience. When a user detects an AI-generated slop post, a button generates a wanted poster and copies it to their clipboard. Then, only if they want to, and are brave enough, they can reply to the original tweet with the generated wanted poster and call the bot out! The wanted poster includes the slop detection score (a percentage), the account's name and Twitter handle, and a link to the repository with instructions for setting up the extension.
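A minimal sketch of the poster content itself; the real extension renders a graphical poster and handles the clipboard, and this layout and function name are hypothetical:

```python
def wanted_poster(handle, name, score, repo="github.com/woodrock/zero-slop"):
    """Format the reply text for a detected slop account: the
    detection score as a percentage, the account's name and handle,
    and a link back to the extension's repository."""
    return (
        "WANTED: AI SLOP\n"
        f"{name} (@{handle})\n"
        f"Slop score: {score:.0%}\n"
        f"Protect your timeline: {repo}"
    )
```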

I don’t encourage people to argue with others online, or intentionally be provocative. But the Slop Wars must be fought somehow, and if Twitter (X) has given up on fighting its own bots, and I had to build the infrastructure myself to do so, then maybe the occasional shitpost about an AI slop poster is warranted.

The Future of Digital Hygiene

Isn’t it ironic that we use AI to fight AI-generated content in what is technically an AI arms race? As the tools for AI-generated content get better, the tools for their detection must evolve. We propose ZeroSlop’s database as an open-source registry that could serve as a high-quality dataset for future anti-bot research. However, the game theory of the Slop Wars may not be so picturesque. The Nash Equilibrium in this "slop war" is a permanent, high-friction stalemate where neither the engagement farmers nor the community of detectors can completely defeat the other without destroying their own underlying incentives. Spammers will continuously optimize their AI to generate perfectly camouflaged content that barely evades detection, while users will only run costly API checks until the financial or computational burden outweighs the benefit of a clean feed. Ultimately, the platform stabilizes as a degraded "market for lemons," saturated with just enough sophisticated bot filler to remain profitable for the farmers, while retaining barely enough authentic human interaction to keep users from abandoning the network entirely.

Closing: Join the Hunt

The internet is drowning in AI-generated spam, and we need your help to build the shield. ZeroSlop is an open-source rebellion against the Dead Internet, shifting the fight from isolated users to a decentralized network of Slop Bounty Hunters. We need engineers, hackers, and researchers to optimize our probability curvature algorithms, scale our Firebase registry, and anticipate the next wave of adversarial evasion tactics. Whether your expertise lies deep in machine learning, building robust and memory-safe systems in Rust, or crafting clean functional logic in Haskell, your skills are needed on the front lines. Join the hunt, fork the repo at github.com/woodrock/zero-slop, and help us stop the slop.


Written by woodrock | Leave the world a better place than you found it. Software Engineer, PhD student in AI
Published by HackerNoon on 2026/03/17