In January this year, Meta announced their intention to encourage “More speech, fewer mistakes” on their platforms. That meant axing their fact-checkers, introducing Community Notes, and encouraging more political content.
Now, just months later, Meta is starting to test Community Notes across its platforms, while Instagram is plagued by AI-generated adult ‘content creators’ fetishizing Down Syndrome… and users are being encouraged to use AI to help them write comments.
Meta has begun rolling out AI-assisted comments, the latest in its long list of AI feature pushes. While the feature is pitched as enhancing the user experience, there’s a clear relationship between on-platform engagement and Meta’s ad revenue: if more users are engaging with content, session durations are likely to increase, meaning Meta can sell more ad space. Making comments more frequent and predictable also gives Meta the option to sell advertisers on the promise of engagement – a pitch it has been making for years.
On the user side of AI on Instagram, 404 Media recently reported on a network of AI-generated influencers depicted as having Down Syndrome. These accounts are being monetized through the sale of AI-generated adult content on the platform Fanvue.
Ultimately, this fetishizes Down Syndrome and relies on AI models having been trained to reproduce the typical appearance of a person with the condition – training data that almost certainly didn’t involve any informed consent. At the time of writing, Instagram and Fanvue have taken no action against these accounts.
With its head in the sand, Meta continues to barrel forward with efforts to replace third-party fact-checking with Community Notes. To their credit, they are rolling this out gradually, and will only publish notes when contributors with diverse viewpoints agree on them. They believe this will create a ‘less biased’ and ‘more scalable’ moderation solution.
To translate this: Meta will make more money by using Community Notes. They don’t need to pay fact-checkers, and users who contribute to Community Notes will probably spend more time on-platform, generating more ad revenue for Meta.
Meta will also build up a uniform database of content-moderation data from Community Notes submissions. This lays the foundation for replacing the community-driven approach with an AI solution down the line.
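For the technically curious, the “diverse viewpoints agree” requirement is generally implemented as bridging-based ranking – the approach behind X’s open-source Community Notes algorithm, which Meta has reportedly said it will use as a starting point. The sketch below is a toy illustration, not Meta’s code: it fits a small matrix-factorization model to simulated ratings and only “publishes” notes whose helpfulness can’t be explained by viewpoint alignment. All the specifics (group sizes, learning rate, the 0.4 cut-off) are illustrative assumptions.

```python
import numpy as np

# Toy illustration of bridging-based note ranking: a note is only "published"
# when raters from different viewpoint clusters BOTH find it helpful.
# Everything here (sizes, thresholds, learning rates) is an illustrative
# assumption, not Meta's or X's actual configuration.

rng = np.random.default_rng(0)
n_users, n_notes, n_factors = 200, 40, 1

group = rng.integers(0, 2, size=n_users)          # which "viewpoint" a rater leans toward
note_type = rng.choice([-1, 0, 1], size=n_notes)  # -1 / +1: partisan notes, 0: genuinely bridging

# Each rater rates 10 random notes: +1 (helpful) or -1 (not helpful).
ratings = np.full((n_users, n_notes), np.nan)
for u in range(n_users):
    for n in rng.choice(n_notes, size=10, replace=False):
        if note_type[n] == 0:
            p_helpful = 0.9  # bridging notes please both groups
        else:
            p_helpful = 0.9 if (note_type[n] == 1) == (group[u] == 1) else 0.1
        ratings[u, n] = 1.0 if rng.random() < p_helpful else -1.0

# Matrix factorization: rating ≈ mu + b_user + b_note + f_user · f_note.
# The factor term soaks up viewpoint alignment, so b_note (the note's
# "intercept") only stays high when helpfulness crosses viewpoint lines.
mask = ~np.isnan(ratings)
r = np.nan_to_num(ratings)
cnt_u = np.maximum(mask.sum(axis=1), 1)
cnt_n = np.maximum(mask.sum(axis=0), 1)

mu, b_u, b_n = 0.0, np.zeros(n_users), np.zeros(n_notes)
f_u = rng.normal(0, 0.1, (n_users, n_factors))
f_n = rng.normal(0, 0.1, (n_notes, n_factors))
lr, reg = 0.1, 0.02

for _ in range(500):
    pred = mu + b_u[:, None] + b_n[None, :] + f_u @ f_n.T
    err = np.where(mask, r - pred, 0.0)
    mu += lr * err.sum() / mask.sum()
    b_u += lr * (err.sum(axis=1) / cnt_u - reg * b_u)
    b_n += lr * (err.sum(axis=0) / cnt_n - reg * b_n)
    f_u += lr * (err @ f_n / cnt_u[:, None] - reg * f_u)
    f_n += lr * (err.T @ f_u / cnt_n[:, None] - reg * f_n)

published = b_n > 0.4  # 0.4 is an arbitrary cut-off for this demo
print("bridging notes published:", int((published & (note_type == 0)).sum()), "/", int((note_type == 0).sum()))
print("partisan notes published:", int((published & (note_type != 0)).sum()), "/", int((note_type != 0).sum()))
```

The key design choice is that the note’s intercept, not its raw helpfulness score, drives publication: a note that one ‘side’ loves still won’t surface if the other side consistently rates it unhelpful. It also means every rating users submit becomes neatly structured moderation data – exactly the kind of dataset mentioned above.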
Zuckerberg continues to shift his platforms closer to Elon-esque philosophies, and users are rightly concerned about their data privacy and Meta’s content moderation. The Intercept published leaked training documents from Meta in January, defining “permissible speech” on their platforms.
Some delightful examples of comments that will be allowed on Meta platforms include:
Meta appears to be rapidly shaping their platforms into yet another breeding ground for hate speech and bigotry, while guzzling up every byte of data they can to keep users in their network for as long as possible, generating ad revenue and training their AI models.
To combat your data being used to train AI models, the first thing you should do is opt out of Generative AI data use in your Meta settings.
We also recommend an immediate purge of your content from Meta’s platforms to safeguard it further.
Our app, Redact.dev, gives you tools to wipe your entire Facebook history – which helps reduce the likelihood of your content being used to train AI.
We’re also working on building out support for Instagram and Threads – stay tuned via our Twitter page or Discord server.
Finally, find a new platform to call home, and try to bring your community with you.