The Limits of Coauthoring With ChatGPT

by Miguel Rodriguez, February 20th, 2023

Too Long; Didn't Read

Coauthoring with ChatGPT can be fun. It is a useful tool for exploring creative writing, but it also has its limitations. The algorithm has been designed with certain restrictions set by its trainers, such as not answering questions that are inappropriate or unethical. Additionally, the data used to train the model is not up-to-date and may contain inaccuracies, so it should not be relied upon for providing current information. Finally, when it comes to publishing, there are limitations set by publications to ensure that the work is original and not solely created by an AI. Despite these limitations, coauthoring with ChatGPT can provide a unique and entertaining experience for writers looking to quickly explore different writing styles and get feedback on their ideas.

I’ve been having fun these last few weeks coauthoring, together with ChatGPT, a novel I had sketched a while ago. Along the way, I have found some peculiarities of the system.


Some of these limits were put in place by the trainers of the algorithm, while others are restrictions of the current implementation. And once your work is done, you also run into limits set by some online publications.


Creative and humorous ChatGPT at work. Image: OpenAI


Limits Set By the Trainers

Remember the Tay bot that Microsoft released in 2016? The one that ended up learning racial slurs and had to be shut down a few hours later?


Well, the creators of ChatGPT did not want to repeat that. There are some prompts that ChatGPT will just not answer. For example, this prompt:


Would it be possible to crack open the code to the wallet of Satoshi?


Results in this response:


Response from ChatGPT for a dubious request


Yet, it does produce an answer. However, asking something more controversial, such as whether it would recommend that I travel to Israel, produces this “sanitized” answer:


ChatGPT’s answer to “Shall I travel to Israel?”


In this answer, I feel like I am exchanging messages with three of ChatGPT’s personas:


  • The fact-giving, fun side of ChatGPT, which provides the information I was looking for.


  • The lawyer side of ChatGPT, which tells me that it is just an algorithm and gives me a canned answer about travel advisories.


  • The ethicist side of ChatGPT, which asks me to respect the local customs.


DALL-E representation of my coauthoring with ChatGPT

And yet, there are times when it will just not answer, like when you ask how to hotwire a car:

ChatGPT will not tell me how to hotwire a car

Limitations of the Current Implementation

ChatGPT was trained on a large amount of information, yet sometimes it gets its data wrong. I asked for information on Komax, a company I work for. The initial response got the year it was founded wrong. The subsequent dialogue was quite interesting:

Correcting data and being apologetic the ChatGPT way


ChatGPT has been trained on data taken from the internet, and, as the story of the inventor of the toaster shows, that data can be fake. Thus, you have to take all its answers with a pinch of salt.


Also, the data used to train the model is not up to date. Interestingly enough, when asked about China abandoning its zero-COVID policy, it tries to answer that it has, yet it gives you information about the old zero-COVID policy:


Mismatched zero-COVID information


Note how in the first paragraph it tries to answer that China has shifted its approach to COVID; yet, as it states the facts, it contradicts itself. It is a bit like a high school student winging her answer without having really learned the material.


And it does so with all the self-confidence and chutzpah that said high schooler would have. Yet, like an honest kid, it will tell you it is working from old training data:

How old is your training data?


What this means is that ChatGPT cannot be used to create news articles. On the other hand, since the training data has to be kept safe for work, an older, curated training set is an advantage. It is a bit like talking to someone with a great memory of the past who does not really know what is going on in the world today.


Yet, ChatGPT is a bit of a geek:


The greatest chess game

But despite all the above-mentioned quirks, it is lots of fun to coauthor with ChatGPT. Among the biggest benefits I have found are:


  • It allows you to quickly explore the direction and style of a story, for example by asking it to rewrite a paragraph in either a suspenseful or a fun way (a minimal scripted version of this kind of request is sketched after this list).


  • You can ask it to summarize what you have written and, with that, get feedback on whether the ideas you are trying to convey are coming across.


  • It is like having a cool friend next to you who is always up for any intellectual trouble you want to throw their way.


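For those who prefer to drive this from code rather than the ChatGPT web interface (which is how the novel itself was coauthored), here is a minimal sketch of the style-and-summary exercise. It assumes the official openai Python package and an API key; the model name and the sample paragraph are illustrative placeholders, not anything taken from the novel.

```python
# Minimal sketch: ask the model to rewrite one draft paragraph in two styles
# and then to summarize it. Assumes the official `openai` package (>= 1.0)
# and an OPENAI_API_KEY environment variable; model name and paragraph are
# placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

paragraph = "The detective opened the door and stepped into the dark room."


def ask(instruction: str) -> str:
    """Send one instruction plus the draft paragraph and return the reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": f"{instruction}\n\n{paragraph}"}],
    )
    return response.choices[0].message.content


print(ask("Rewrite this paragraph in a suspenseful style:"))
print(ask("Rewrite this paragraph in a light, fun style:"))
print(ask("Summarize this paragraph in one sentence:"))
```

Getting back a summary that misses your point is itself useful feedback: it usually means the draft is not yet conveying the idea clearly.
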
Yet, once you have your piece written, you will need to publish it. And in all honesty, you know you cannot call it your work alone, even if you were the one writing all the prompts.

Limitations Set by Publications

Publications have already set policies to ensure that submitted articles are not simply works written by ChatGPT. Since on Medium you get paid based on the popularity of your articles, readers were not willing to pay for someone just firing up an AI and stringing words together.


Medium now has a disclosure policy:

Medium Policy on AI-generated writing


Other publications will probably follow suit. The big impact, though, will be on education. All the teachers who were simply asking their students for an essay will have to rethink their grading policies, since ChatGPT can also be used to answer true-or-false questions.


This now gives me an idea for the next article: have ChatGPT take a personality test like the Myers-Briggs.

Conclusion

In conclusion, coauthoring with ChatGPT can be fun, and it is a useful way to explore creative writing, but it also has its limitations. The algorithm has been designed with certain restrictions set by its trainers, such as not answering questions that are inappropriate or unethical.


Additionally, the data used to train the model is not up-to-date and may contain inaccuracies, so it should not be relied upon for providing current information.


Finally, when it comes to publishing, there are limitations set by publications to ensure that the work is original and not solely created by an AI.


Despite these limitations, coauthoring with ChatGPT can provide a unique and entertaining experience for writers looking to quickly explore different writing styles and get feedback on their ideas.