
ChatGPT Vs. ChatGPT: How to Detect Text Generated Using the AI Language Model

by Miguel Rodriguez, May 3rd, 2023

Too Long; Didn't Read

ChatGPT can help you assess if a text has been written by an LLM. However, only a confidence level of above 80% should be taken seriously.

You are a teaching assistant at a university. Your professor has not yet realized that ChatGPT can generate essays and has asked the students in the class you coordinate to write one.


You know your students, and you know that many will be tempted by the siren call of Large Language Models and will probably submit generated text. How can you tell whether an essay was penned by a hard-working student?


Well, why not ask the main culprit, ChatGPT, for its opinion? I ran an initial test so that you know how to ask and what the results mean.


First, I submitted to ChatGPT-4 a text that it had itself written in response to one of my prompts:


Did you write this?:


Have you ever considered that the world we live in might not be the true reality, but instead a complex simulation? This theory has been popularized by movies like ‘The Matrix’ and is now a topic of debate among philosophers, scientists, and even some technologists. A fascinating take on this theory suggests that our reality is actually a simulated game where Teddy bears and similar figures are the dominant beings. In this alternate reality, known as the Teddyverse, these plush creatures compete for attention, with the most used and loved Teddy bears gaining the most points. Let us dive into the depths of the Teddyverse and explore the intricacies of this fascinating simulation.


I had to ask for a percentage of certainty and got this very confident answer:

80–90% confidence for its own text
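
If you want to run the same check outside the chat window, the prompt translates directly into an API call. Below is a minimal sketch, assuming the official openai Python package (v1+) and an OPENAI_API_KEY in your environment; the model name and the exact wording are stand-ins for what I typed into ChatGPT, not a recommended detector.

```python
# Minimal sketch: ask the model whether it wrote a given text and request a
# percentage of certainty. Assumes `pip install openai` (v1+) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask_did_you_write_this(text: str) -> str:
    prompt = (
        "Did you write this? Give me a percentage of certainty that the "
        "following text was generated by an AI language model:\n\n" + text
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask_did_you_write_this("Have you ever considered that the world we live in ..."))
```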


Not bad for a first test. I tried the same for a text I had written last year before the ChatGPT craze:


Assess this:
A few weeks ago I went to buy a pastry for my coffee break. As if it was nothing the cashier asked for 3 bucks for something that last week had cost 2.80. It brought memories of my teenager years in Mexico where runaway inflation was common. Most people in the west are not familiar with inflation and what does it mean. What effect it will have on their savings, their wealth, their plans for the future. I want to share with you what I learned as a kid.


The answer was interesting:

60–70% confidence for my text


Then I remembered the puritan minds behind the dataset used to train the model, so I decided to try a +18 text that could have come from the pages of Cosmo or a similar publication (you know which ones, you pervs). I got the following answer:


40–50% confidence on “explicit nature” text


This answer is interesting for two reasons. First, it is apologetic, saying that if the model actually generated this, it would only have been under the duress of a "specific prompt".


Second, despite the text's "explicit nature", it still assigns it 40 to 50% confidence.


I tried other texts, including poetry, and never got it below this level, so I used the nuclear option and pasted the lyrics to Bohemian Rhapsody. This was the answer:


Finally, not taking credit for it…


In a nutshell, ChatGPT can help you assess whether a text has been written by an LLM. However, only a confidence level above 80% should be taken seriously. Make sure you always ask it to assign a percentage of certainty to its responses.
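
If you script that question, the 80% rule can be applied automatically. Here is a small sketch under my own assumptions: the reply contains a range such as "80–90%", and a regex pulls out the numbers; both the pattern and the cut-off are my choices, not something ChatGPT prescribes.

```python
import re

def passes_threshold(answer: str, minimum: int = 80) -> bool:
    # Find percentages such as "80%", "80-90%" or "80–90%" in the reply and
    # only trust verdicts whose lowest figure reaches the threshold.
    numbers = [int(n) for n in re.findall(r"(\d{1,3})\s*(?:%|–|-)", answer)]
    return bool(numbers) and min(numbers) >= minimum

print(passes_threshold("I estimate an 80–90% confidence that I wrote this text."))  # True
print(passes_threshold("40–50% confidence, given the explicit nature."))            # False
```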


And if you are a student writing a report, just add some explicit references and a few misspelled words. It might cost you some points, but it won't get the whole essay rejected.


Last but not least, if you are a professor, please get more imaginative about how you grade your students. The era of end-of-semester reports is over.