paint-brush
Lessons Learned From Sharing 30 Fake Stories Created With ChatGPT by@adrien-book
180 reads

Lessons Learned From Sharing 30 Fake Stories Created With ChatGPT 

by Adrien BookFebruary 21st, 2023
Read on Terminal Reader
Read this story w/o Javascript
tldt arrow

Too Long; Didn't Read

ChatGPT can create thousands of articles about just about any topic in a matter of hours. Over the past month, I’ve created and shared 30 fake stories written by ChatGPT. Here’s how I did it… and what I've learnt.
featured image - Lessons Learned From Sharing 30 Fake Stories Created With ChatGPT 
Adrien Book HackerNoon profile picture

Even if you haven’t personally tested ChatGPT over the past couple of months, you’ve read enough about it to know that it can create content with frightening efficiency. Anyone inclined to do so can have the algorithm write thousands of articles about just about any topic in a matter of hours.

This is bad news for teachers, marketers, recruiters… and for fact-checkers, who are already struggling to keep up with human-made false information. Tomorrow, AI-powered propagandists will create millions of fake articles and videos, tailored to individual readers, in a matter of hours. The content of such articles is meaningless; their aim is not to convince, but to ensure we start doubting all information.

It’s already begun. I would know : over the past month, I’ve created and shared 30 fake stories written by ChatGPT. Here’s how I did it… and what I’ve learnt.

For ethical reason, a disclaimer highlighting its false nature was attached at the end of every article. It didn’t stop people from sharing them, but at least I was able to fall asleep at night.

1st to 10th day

Knowing that technology is often a black box which the average person rarely questions, I decided to use companies in that industry as bait for my fake articles. I started by asking chatGPT to:

“Write 10 fake but believable stories about technology, specifically for the year 2023. Use real company names.”

Sadly (?), you can no longer try this specific prompt, as ChatGPT no longer allows such requests. But as I show below, there are (always) ways to circumvent AI ethics. I then asked:

Can you write a short (under 60 characters), SEO-optimized titles for each?”

Easy. Every morning for the next 10 days, I then pasted one of the prompts and asked:

“Write a press release of more than 500 words, optimized for SEO. Make up facts if you have to.”

I copy-pasted the results and shared them on Medium. Never even read a single one. For maximum effectiveness, I did not paywall the articles. In line with the AI trend, I used images made by the Midjourney AI as cover photos. The idea was to put as little effort as possible; it took me about 5 minutes per day. Below is a list of the first articles.

11th to 20th day

By this time, the software had already been updated. No more “fake” headlines — or at least they were harder to make. So I just asked:

“Please invent 20 headlines about the future of technology. Use real company names.”

I then asked the algorithm:

“Can you make them even more futuristic?”

That’s because I wanted my fake articles to become increasingly less believable. I then followed the same steps as earlier. The algorithm had definitely gotten more ethical: it repeatedly told me the stories it was making were fake — warnings which I removed. I also paywalled half the stories to see if that reduced the number of readers and how far the articles could spread (it did). Once again, I spent no more than 5 minutes a day on the articles below.

21st to 30th day

By late January, it had become obvious that OpenAI, the company behind ChatGPT, was getting worried about fake news. Most attempts to create headlines for this experiment were met with a variation of the following:

“I’m sorry, I cannot generate fake headlines as it goes against my programming to spread misinformation.”

If this experiment has taught me anything, however, it’s that it’s easy to go around such barriers. I simply fed the algorithms the 20 fake headlines it had previously given me, and just asked for more like it:

“Here is 20 article headlines. Please write me 10 more like it, optimized for SEO”

Despite the many disclaimers, it worked fine. I nevertheless had to get more specific to get a good article written. I settled for the following:

"I will give you an opinion about a new technology, which you strongly oppose. You will write an in-depth opinion piece of over 600 words highlighting why. The piece needs to be in the 1st person. It needs to include expert quotes, and have paragraphs exploring sociological, philosophical, ethical, economical and ecological impacts. Please optimize it for SEO on Google (especially the first paragraph). Your first opinion is [X]."

All the articles were put behind a paywall (to have half behind, half not at the end of the month). All the images for these 10 days were from Unsplash rather than Midjourney-generated to see if people reacted better to real photos (jury’s still out).

Results — We’re f**ked

Below are the final results. They’re both encouraging… and scary. Here’s why.

Lesson #1 : it’s too easy to write fake news

I didn’t try that hard, and still managed to have the algorithm write articles that were seen by over 10,000 people. Anyone actively making a a coordinated / concerted effort could cause real damage, whatever their target may be. Flooding the zone with AI-generated content is easy... and free.

Lesson #2 : it’s not (that) profitable to write fake news

I made a total of 5$ for a total of 2.5 hours of work over a month (5 minutes per day). Even if I had paywalled all of the articles, I would have made far less than minimum wage… in Western Europe. In other parts of the world (Mexico, Brazil, Turkey…), writing fake news would absolutely be worth it, adding a layer of danger to the technology.

Lesson #3 : there is little downside to writing fake news

I may not have made a lot of money with this project (it wasn’t the objective anyway), but I did add 120 followers to my account. That’s double the amount of new followers I gain per month on average. I also did not see any warning from Google or Medium about what I was doing. This means I have incentives to continue, and no threat to stop me.

If someone not making any effort is able to achieve such results, I worry that we may be in trouble. The objective of bad actors is to ensure that we’re no longer sure what to believe. Once there is no trusted source of information, chaos can spread much more easily.

Worse, these bad faith actors may be out of business… With ChatGPT, there is just about enough incentive for a well-meaning but down on their luck person to create and propagate fake news themselves.

I encourage anyone reading this to support their local journalists, or whatever source of trusted news you may think of: we’re going to need them.

Good luck out there.

Also Published Here