
75% Of Companies Are Banning the Use of ChatGPT: What Happened?

by George Drennan, September 25th, 2023

Too Long; Didn't Read

ChatGPT has taken the world by storm since its release last year. But a recent survey found that 75% of businesses have implemented or are considering bans on ChatGPT and other Generative AI applications in their workplace. Companies are worried about potential cybersecurity risks. The first major ChatGPT data leak occurred earlier this year and involved the tech giant Samsung.

Is the ChatGPT honeymoon period over?


ChatGPT has taken the world by storm since its release last year. However, a recent survey found that 75% of businesses have implemented or are considering bans on ChatGPT and other Generative AI applications in their workplace.


Why are companies getting cold feet about ChatGPT?


It’s not that they doubt its capabilities. Instead, they’re worried about potential cybersecurity risks.


A Growing Concern: Data Leaks and ChatGPT

Generative AI tools are designed to learn from every interaction. The more data you feed them, the smarter they become. Sounds great, right?


But there’s a gray area concerning where the data goes, who sees it, and how it’s used.


These privacy concerns led Italy’s data-protection authority to temporarily ban ChatGPT in April.


For businesses, the worry is that ChatGPT might take user-submitted information, learn from it, and potentially let it slip in future interactions with other users.


OpenAI’s guidelines for ChatGPT indicate that user data could be reviewed and used to refine the system. But what does this mean for data privacy?


The answer isn’t clear-cut, and that’s what’s causing the anxiety.


Data Leak Worries: What’s the Real Risk?

A Cyberhaven study found that, as of June 1, 10.8% of workers had used ChatGPT at work, with 8.6% pasting company information into it. The alarming statistic: 4.7% of workers had entered confidential information into ChatGPT.


Cyberhaven Study Results


And because of the way ChatGPT works, traditional security measures fall short. Most security products are designed to stop files from being shared or uploaded. But ChatGPT users copy and paste content directly into their browsers.
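

One stopgap some security teams reach for is scrubbing prompts before they leave the company boundary. Here’s a minimal, illustrative Python sketch of that idea; the patterns below are placeholders, not a production DLP rule set, and real tooling would cover far more data formats:

```python
import re

# Illustrative patterns only -- a real DLP rule set is far broader.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Mask anything matching a known sensitive pattern before it
    leaves the company (e.g. before being pasted into ChatGPT)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Ask jane.doe@example.com for key sk_live_abcdef1234567890."))
# -> Ask [REDACTED EMAIL] for key [REDACTED API KEY].
```

The catch is enforcement: a filter like this only helps if it sits between the employee and the browser, which is exactly the gap file-centric security products leave open.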


Now, OpenAI has added an opt-out option. Users can request that their data not be used for further training.


But the opt-out is not the default setting. So, unless users are aware and take proactive measures, their interactions might be used to train the AI.
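

It’s also worth noting that the consumer web app and the API are treated differently: OpenAI has stated that, as of March 2023, data sent through its API is not used for training by default. That’s one reason some teams route ChatGPT-style usage through the API instead of the browser. A minimal sketch, assuming the 2023-era openai Python library (pre-1.0 interface) and an API key in an environment variable:

```python
import os
import openai

# Assumes the pre-1.0 openai-python library and an API key stored
# in the OPENAI_API_KEY environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Per OpenAI's stated policy (as of March 2023), API traffic is not
# used for model training by default, unlike the consumer web UI.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Explain list comprehensions briefly."},
    ],
)

print(response["choices"][0]["message"]["content"])
```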


The concerns don’t stop there.


Even if you opt out, your data still passes through the system. And while OpenAI assures users that data is managed responsibly, ChatGPT acts as a black box. It isn’t clear how data flows within the system once it’s ingested.


There’s also the risk that something goes wrong.


On March 21, 2023, OpenAI took ChatGPT offline because of a bug that showed some users chat-history titles belonging to other users. If those titles held private or sensitive details, other ChatGPT users might have seen them. The bug also exposed the personal data of some ChatGPT Plus subscribers.

The Samsung Incident

The first major ChatGPT data leak occurred earlier this year and involved the tech giant Samsung. According to Bloomberg, sensitive internal source code was accidentally leaked after an engineer uploaded it to ChatGPT.


A leak like this can have severe implications.


And it wasn’t just Samsung. Amazon, another titan in the tech industry, had its own concerns. The company identified instances where ChatGPT’s responses eerily resembled Amazon’s internal data.


If ChatGPT has Amazon’s proprietary data, what’s stopping it from inadvertently spilling it to competitors?


Lack of Clear Regulations

The rapid evolution and adoption of Generative AI tools have left regulatory bodies playing catch-up. There are limited guidelines around responsible use.


So, if there’s a data breach because of the AI, who’s responsible – the company using the tool, the employees, or the AI provider?


Under OpenAI’s Terms of Use, the responsibility lies with the user.


That puts the risk on companies. Without clear regulations, they are left to decide on the best course of action. That’s why many are now becoming more hesitant.


Conflicting Views from Tech Leaders

When it comes to deploying new technologies, businesses often look to tech leaders. If a tech giant adopts an innovation, it’s often seen as a green light for other companies to follow suit.


So, when a company as influential as Microsoft offers mixed signals regarding a technology, the ripples are felt across industries.


On the one hand, Microsoft has expressed reservations about the use of Generative AI tools. In January, Microsoft warned employees not to share ‘sensitive data’ with ChatGPT.



But Microsoft also champions its own version of the technology, Azure ChatGPT. This iteration promises a safer and more controlled environment for corporate users.
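

Azure ChatGPT is built on top of the Azure OpenAI Service, which keeps requests inside a company’s own Azure tenant rather than sending them to the public ChatGPT service. A rough sketch of what that looks like with the 2023-era openai Python library; the resource and deployment names below are hypothetical:

```python
import os
import openai

# Hypothetical resource and deployment names -- substitute your own.
openai.api_type = "azure"
openai.api_base = "https://my-company-resource.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

response = openai.ChatCompletion.create(
    engine="my-gpt35-deployment",  # Azure deployment name, not a model ID
    messages=[{"role": "user", "content": "Summarize these meeting notes."}],
)

print(response["choices"][0]["message"]["content"])
```

Keeping traffic within the corporate subscription is the “safer, more controlled environment” Microsoft is promising.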


This move raises questions: Is Azure ChatGPT genuinely immune to the risks Microsoft has pointed out in the broader Generative AI landscape? Or is this a strategic maneuver to ensure businesses stay within Microsoft’s ecosystem?


The Balance of Adopting ChatGPT

When evaluating any new technology, businesses are often caught in a tug-of-war between the potential benefits and the potential pitfalls.


With ChatGPT, companies seem to be adopting a wait-and-see approach.


As the technology and its regulations evolve, the hope is that safer, more transparent use will become possible.


For now, ChatGPT is great for a bit of coding help and content. But I wouldn’t trust it with proprietary data.