
OpenAI Made an AI Detection Tool, So Why Isn’t It Releasing It?

by Eleanor Hecks, August 23rd, 2024

Too Long; Didn't Read

OpenAI created an AI detection tool but has not yet released it. Critics have accused generative AI of enabling plagiarism, while others feel its output is not as detailed as human-created work. Opponents of the detection software fear it may drive people to be less creative for fear of triggering a detector.


OpenAI became an overnight household name, at least in tech circles, by releasing ChatGPT in November 2022. Businesses and individuals quickly adopted its tools and began using them to automate daily tasks.


One issue with AI-generated text and images is that they can mirror existing works. Some have accused the technology of enabling plagiarism, while others feel its output is not as detailed as human-created work. Employers everywhere want better ways to detect AI usage so they can meet target audiences’ needs and know when the technology is the right tool for the job.


OpenAI responded by creating an AI detection tool but has not yet released it. Many want to know the reason for the holdup and whether it will ever be made available to the public.

Why Is OpenAI Hesitant to Release Its Tool?

TechCrunch published an article explaining OpenAI's detection tool, which is meant to catch students who cheat by having ChatGPT write their papers. The company has sat on it rather than releasing it, leading some to question why. A representative for OpenAI said the company is considering the broader impacts of the tool and taking its time.


Many other detection tools have been ineffective, producing false positives at times and failing to catch AI-generated text at others. OpenAI's product would focus mainly on detecting text watermarked by ChatGPT. One reason for not releasing it is the fear that it could negatively impact nonnative English speakers. Watermarking also requires ChatGPT to change how it generates text, since the watermark is embedded in the words the model chooses.
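OpenAI has not published how its watermark works, so any code can only illustrate the general idea rather than the company's actual method. The sketch below follows the "green list" approach described in the research literature on text watermarking: generation is quietly nudged toward tokens on a pseudorandomly chosen list, and detection counts how many tokens in a passage land on that list, scoring the excess as a z-score. Every name and parameter here (is_green, watermark_z_score, the 50% green fraction, the hash-based list assignment) is a hypothetical stand-in for illustration, not OpenAI's implementation.

```python
import hashlib

def is_green(prev_token: str, token: str, green_fraction: float = 0.5) -> bool:
    # Hypothetical rule: hash the (previous token, candidate token) pair and
    # map the digest into [0, 1); pairs below green_fraction count as "green",
    # i.e., the tokens a watermarking generator would quietly favor.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).hexdigest()
    return int(digest, 16) / 16 ** len(digest) < green_fraction

def watermark_z_score(tokens: list[str], green_fraction: float = 0.5) -> float:
    # Count how many tokens land on the green list, then compare that count to
    # what unwatermarked text would produce by chance (a binomial null model).
    n = len(tokens) - 1  # number of (previous, current) pairs scored
    greens = sum(
        is_green(prev, tok, green_fraction)
        for prev, tok in zip(tokens, tokens[1:])
    )
    expected = green_fraction * n
    variance = n * green_fraction * (1 - green_fraction)
    return (greens - expected) / variance ** 0.5

# Ordinary human text should score near zero on average; text from a model
# that consistently favored green tokens would score far higher (e.g., z > 4).
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"z-score: {watermark_z_score(sample):.2f}")
```

A scheme like this also hints at why OpenAI is cautious: paraphrasing or translating the text reshuffles the tokens and washes out the statistical signal, and only text produced by a model that actually applies the watermark can be detected at all.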

Broader Uses of the Tool

Although the detection software is meant to catch students using AI to write papers, it could be applied to many other situations, particularly in combating fraud at a larger scale.


For example, consider how useful an AI detector could be in catching computer-generated text used by cybercriminals. In 2023 alone, ransomware payments doubled from the year before to more than $1 billion, and that figure is expected to keep rising dramatically as AI becomes more prevalent. As consumers and companies work to combat data breaches, a reliable detector could make it easier to spot computer-generated text.


Many scams rely on flowery language meant to tug at someone’s heartstrings or provoke fear. When unsure whether a message is legitimate, people could run the text through the detection tool to see whether AI was used to generate it. That could be quite valuable, since many scams already involve machine-written text.

Arguments Against Using Detection Tools

So far, AI detection tools haven’t been very accurate. They’ve falsely flagged original content as AI-generated, forcing students to fight for their grades and sometimes their academic careers. Universities and employers take plagiarism seriously, so a false accusation could destroy someone’s future or livelihood.

The tools may also be ineffective when users can plug into easy-to-use and versatile software that helps them bypass AI detection, such as HumanizeAI and RealWriter. People should learn to utilize AI in ways that allow them to brainstorm and generate fresh ideas without plagiarizing information. However, they may shy away from using any type of software for fear of being flagged by a detector.


Opponents of the software fear it may drive people to be less creative for fear of triggering a detector. They also say people will turn away from commonly used tools like ChatGPT and seek ones without the same watermarks, effectively rendering the detector useless. Meanwhile, programs like StealthAI will simply rewrite AI-generated text to sound more human.


People also express concerns about bad actors reverse-engineering the tool and using it to build AI that creates deepfake videos or powers DDoS attacks. Cybercriminals are already crafty. No one wants to give them more ways to fool people into sharing their personal data.

Benefits of an AI Detection Tool

Arguments against AI detection tools may soon be moot. While people have expressed concern about nonnative English speakers being left behind, software developers will likely resolve those language-barrier problems over time.


Many believe OpenAI will release its tool soon. Reports indicate that internal debate over the software’s usefulness versus concerns about inclusivity has been going on for nearly two years. Some already see the benefits of universities and employers using AI detection tools.


As AI detector development grows, expect to see it used for things such as:


  • Identifying fake videos used in political campaigns

  • Finding misinformation and marking it more accurately than other bots do

  • Generating ethical content


Releasing the tool could inspire other brands to create their own programs and improve the success rate of AI detection overall.

The Future of AI Use

Love it or hate it, AI is here to stay, although companies have been a bit slower to embrace it than first predicted. MIT released a report showing that the technology is being adopted at different rates across sectors. For example, most businesses adopting AI so far have been in manufacturing and health care.


As marketing departments learn more about how AI works and ways to implement it, expect more brands to use it and seek detector tools to prevent privacy issues and copying.


Those who use AI for a while start to notice patterns that can be spotted without the help of software. For example, the intro to nearly every AI-generated article uses similar language, and the layout, pacing, and level of detail follow the same formula. With a bit of practice, most people can identify an AI-generated piece fairly accurately.

Why People Need the Tool Now

Without a way to detect AI-generated content accurately, people may distrust what they read online. In other situations, they might suspect someone used AI without disclosing it but find it impossible to prove. Workers may be falsely accused or lose their jobs and degrees when they aren’t guilty.


The tool could solve the problem of students doing very little work and coasting through their studies while still obtaining an advanced degree based on someone else’s hard work, sweat, and tears.


OpenAI’s product will likely have the most impact in the education sector. However, companies can also utilize it to ensure they produce unique, highly targeted content customers want to read.

A Complex Problem

OpenAI’s decision to sit on its detection tool is rooted in wanting to do what is best as AI technology grows and ethical concerns arise. It will likely release the software at some point, to the relief of educators everywhere. How other industries might use it remains to be seen.


All companies developing AI programs should balance the pros and cons of each feature they introduce. Only then will AI become an indispensable tool that spurs progress.