Verified Writers is a nonprofit that provides writers with Verified Writer Status to defend their identity, intellectual property and reputation.
Most of us have heard about the Cambridge Analytica Facebook scandal. We know that content on Facebook was combined with big data analytics to manipulate users politically.
We regularly see robots on social media making automated posts.
And, until last year, they were easy to spot.
Since GPT-3 was released, that may be changing: it's eerily human, and it can auto-generate anonymous or ghostwritten content with minimal input.
GPT-3 stands for Generative Pre-trained Transformer 3. It's an autoregressive language model that uses deep learning to produce human-like text.
I wrote an article about how I think it’s disrupting The Deep Web and maybe even killing it.
It draws on the extraordinarily wide range of datasets it was trained on to build realistic sentences that make sense.
GPT-3 is the third-generation language prediction model in the GPT-n series. It's the follow-up to GPT-2. Both were developed by OpenAI, a San Francisco-based artificial intelligence research laboratory.
At the time of writing, the full version of GPT-3 has 175 billion machine learning parameters, and models with even more are in development. So, it's a very clever piece of software.
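To make "autoregressive" concrete, here is a minimal toy sketch (nothing to do with GPT-3's actual code): the model predicts each next word from the words before it, then feeds its own output back in. A tiny bigram table stands in for GPT-3's 175 billion learned parameters; the corpus and all names are illustrative.

```python
import random

# Illustrative training text; a word-pair table stands in for real model weights.
corpus = "the model writes text and the model reads text and the model learns".split()

# Count which words follow which in the training data.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start_word, length=6, seed=0):
    """Autoregressively extend start_word: each step samples the next word
    given only the current one, then appends it and repeats."""
    rng = random.Random(seed)
    words = [start_word]
    for _ in range(length):
        options = bigrams.get(words[-1])
        if not options:  # no known continuation; stop early
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

GPT-3 works on the same feed-the-output-back-in principle, but conditions on thousands of preceding tokens rather than one word, which is why its sentences hold together so convincingly.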
On the face of it, how dangerous can an AI that produces realistically human-like text be?
As an example, imagine that a rich man didn't like an anticancer drug company, so he bought 2 million convincingly written robo-articles claiming that the drug actually made cancer worse. It would be easy to syndicate them, hard to trace who had bought them, and hard for Google to police them or to prevent the claim from becoming a widely held "fact".
For that reason and others, Google needs to stop content from being anonymously made by robots. But how?
There are a few ways. Firstly, Google will prioritise content that is recognisably written by a human with a byline. Secondly, Google will favour articles whose grammar and accuracy are top-notch.
This is partly because sometimes robots learn bad habits.
GPT-3's text quality is so high that it may be tough to determine whether it was produced by a person or a robot, which has both good and bad implications.
As I’ve explained in the previous example, it could be abused by malicious people for purposes of corporate or political espionage, subterfuge, and sabotage.
As early as August 2019, MIT Technology Review published an article called "OpenAI has released the largest version yet of its fake-news-spewing AI", in which writer Karen Hao outlined the dangers that OpenAI's language models posed to the veracity of information on the internet.
Her piece began:
“In February OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organisation decided not to release it.”
The authors of the original May 28, 2020 paper describing GPT-3 were keen to emphasise the risks and called for further research. The writing credit for the paper included a whopping thirty-one OpenAI researchers and engineers.
Soon afterwards, Microsoft licensed exclusive use of GPT-3. Perhaps they did this partly to mitigate the perceived risks and to create some sort of regulation around such a powerful new AI?
Developers are still able to utilise the public GPT-3 API to obtain output, but only Microsoft has access to GPT-3's underlying model.
A lot of time has passed since 2019, and now, in 2022, GPT-3's use to produce content for the internet is in full swing. A company whose name has changed from Conversion.ai to Jarvis.ai and, most recently, Jasper.ai uses GPT-3 to power a writing tool for human journalists, web copywriters and content writers to create meaningful (and hopefully, given the tool's limitations, factually accurate) articles and copy.
But it still seems obvious that we need a way for writers to prove they are human, stand by what they have written (with or without GPT-3's help), and ensure real writers remain authoritative voices in the crowded and ambiguous digital world.
At present, no one seems to have addressed the issues this technology creates. What about the impending storm of fake news it could still produce?
I’m personally in the process of tackling the issue by founding a nonprofit called Verified Writers.
I predict that Google is going to increasingly prioritise authoritative, trustworthy and relevant articles in search results.
One way to lend a piece of writing more authority (and therefore credibility) is to ensure that it includes a trustworthy byline. So, freelance writers who want to get their work read, will increasingly need to produce work that is recognizably, and proven to be written by a human being.
To win fake authority, or "fake humanness", robots that produce fake news might start stealing or cloning writers' bylines. That's not hard for them to do, and it could cause two problems.
Firstly, it could dupe people into thinking writers wrote things they didn’t.
That's a reputational issue.
(I’m just going to let you imagine your byline associated with some of the worst articles you have ever read for a second.)
And secondly, it could lend credibility to fake articles (because you are a genius!).
Both of these things would obviously be terrible.
That's where I hope to help. Verified Writers (VW) is a nonprofit organisation that aims to provide writers with Verified Writer Status (VWS). It will also give them tools to flag writing that is fraudulently attributed to them and to issue takedown notices through the platform.
In a world full of AI robots that can write as convincingly as human beings and at a time when political and corporate espionage overwhelmingly occurs online, it's increasingly important that we can verify that content published on the internet was written by genuine, accountable human beings, and not a software program on behalf of humans with nefarious intentions.
The new nonprofit will seek to address the rise of robots by providing a profile for each writer. This element is similar to existing journalist portfolio solutions available now.
Writers will sign up for Verified Writers, verify their identity using Yoti (bank-level ID verification), have their own publicly viewable profile page, and be able to add links to their work to/from that profile.
In this way, they can confirm they are the genuine writer of the work. They should also link to their VW profile in their article bylines.
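The idea behind those two-way links can be sketched in a few lines of code. This is a hypothetical illustration of the check, not the platform's actual implementation, and every function and field name here is invented for the example: a byline pointing at a profile proves nothing on its own, and neither does a profile, but an article whose byline links to a profile that links back to that article is verifiably claimed by its writer.

```python
def is_verified_attribution(article_url, byline_profile_url, profiles):
    """True only if the byline's profile exists AND that profile
    lists this article among the writer's own work links."""
    profile = profiles.get(byline_profile_url)
    return profile is not None and article_url in profile["work_links"]

# Illustrative profile store; URLs and structure are invented for the example.
profiles = {
    "https://verifiedwriters.org/p/sarah": {
        "writer": "Sarah Othman",
        "work_links": {"https://example.com/gpt3-article"},
    },
}

# The writer listed this article, so the attribution checks out:
assert is_verified_attribution(
    "https://example.com/gpt3-article",
    "https://verifiedwriters.org/p/sarah", profiles)

# A fake article merely pointing at her profile fails the check:
assert not is_verified_attribution(
    "https://example.com/fake-news",
    "https://verifiedwriters.org/p/sarah", profiles)
```

The design choice worth noting is that only the writer can add links on their own profile, so a robot cloning a byline cannot complete the loop.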
Because bylines will be increasingly important, some robotic byline ID theft may be on the horizon. We can preemptively stop it.
By joining Verified Writers, writers can flag work fraudulently attributed to them through the system, and VW will issue legal notices to publishers to remove, amend or correct the fraudulent attributions.
To become a Verified Writers member, writers would make an annual donation. All funds collected from members' annual donations would be invested in three important causes.
First, protecting writers' intellectual property and identity (takedown activity for fraudulent attributions and plagiarism litigation where necessary).
Second, combating robot-enabled fake news where it could have a negative social impact.
And, third, contributions to climate change projects through the United Nations Framework Convention on Climate Change (UNFCCC).
The overall goal of Verified Writers is to protect writers and writers' work and to make sure that robots don’t become a tool for abuse in the writing and marketing content sector. It’s about keeping the veracity of the writing on the internet as high as possible.
We can do this by working together and continuing to be human-forward.
This article was written by Sarah Othman, a British freelance writer based in Tokyo, Japan. You can register your email address at www.verifiedwriters.org now to be able to sign up as soon as the platform launches.