
Why Human Bias in ChatGPT and Other Chatbots Could Lead to Social Disruption

by Samuel A. Akorede, February 22nd, 2023

Too Long; Didn't Read

The chatbot ChatGPT has been condemned as biased for holding 'anti-conservative bias and grudges' against the former president of the U.S., Donald Trump. OpenAI CEO Sam Altman tweeted in early February to address the issue. He admitted there has been “hate” directed at the employees of OpenAI, which he considered “appalling.”

The AI revolution, without a doubt, has been responsible for new technological inventions.

Its disruption introduced AI models like ChatGPT and other chatbots that are reshaping the way people live, interact, and socialize in the 21st century.

Just two months after its launch last year, the OpenAI chatbot, ChatGPT, recorded over 100 million users across the world.

Following suit, Google also announced the launch of Bard, an AI-powered conversational model that provides people with easy-to-digest information from search results. 

These are monumental achievements, showing how close AI development is to making everyday life easier. But there may be some lapses.

The Issue With AI Revolution

The creation of better ways to handle human affairs with the help of AI does come with some implications.

The emergence of high-level AI models like ChatGPT and other advanced chatbots is already posing a threat to human jobs.

Sooner or later, AI will stealthily take over human jobs or render many workers redundant in their fields.

Job disruption is posited to be inevitable as AI evolves. But there is more to look out for as these AI models infiltrate human life and change the order of things.

A potential issue associated with these new technologies that needs serious observation is how ChatGPT and other chatbots can cause social disruption.

According to OpenAI CEO Sam Altman, one major problem confronting ChatGPT is bias.

Human bias has been identified as a major problem of ChatGPT, and it is not exclusive to OpenAI's chatbot.

Other chatbots show the same tendency to reflect selective human preferences.

AI Bias Statistics

Research by Tidio shows that only 2% of people think AI is free of bias; in contrast, 45% of the respondents think the main modern problem of AI is social bias.

This means AI chatbots like ChatGPT still lack fairness in their judgment and operation, because human prejudice is clearly visible when using these models.

What's AI (Human) Bias?

AI bias is an artificial intelligence problem caused when biased data is used to train a model algorithm, which ultimately affects the judgment of the machine.
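To make this concrete, here is a minimal sketch (all names and data hypothetical) of one common way data scientists quantify this kind of bias: comparing how often a model produces a favourable outcome for different groups, sometimes called the demographic parity gap.

```python
from collections import Counter

def demographic_parity_gap(predictions):
    """Gap in favourable-outcome rates between groups.

    `predictions` is a list of (group, outcome) pairs, where
    outcome is 1 (favourable) or 0 (unfavourable).
    """
    totals, positives = Counter(), Counter()
    for group, outcome in predictions:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs: the model favours group A far more often.
outputs = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7
gap, rates = demographic_parity_gap(outputs)
print(rates)          # {'A': 0.8, 'B': 0.3}
print(round(gap, 2))  # 0.5
```

A gap near zero would suggest the model treats the groups alike; a large gap, as here, is a red flag that the training data skewed the model's judgment.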

Explaining the role of bias in AI, Steve Nouri, head of Data Science & AI, stated that human bias in AI commonly takes three forms: gender bias, racial prejudice, and age discrimination.

Human bias in AI is traced either to compromised training data or to the developers themselves.

This shows that it is almost impossible for a chatbot to produce racial slurs, spill out hurtful statements, or discriminate against a certain group of people on its own, unless there is a biased algorithm or biased data consumed by the model.

A good example is the abrupt shutdown of Microsoft's Twitter chatbot, Tay, in 2016 after it was accused of being racially biased.

Even the recently launched OpenAI conversational model, ChatGPT, has been condemned as biased for holding 'anti-conservative bias and grudges' against the former president of the U.S., Donald Trump.

Why Human Bias in Chatbots Could Lead to Social Disruption

The political bias ChatGPT showed about former president Donald Trump drew heavy criticism of the AI model and its developer, OpenAI, on social media.

In response to the widespread backlash, OpenAI CEO Sam Altman tweeted in early February to address the issue. According to Sam, work to eradicate ChatGPT's bias and improve the model is ongoing.

In his tweets, Sam admitted there has been “hate” directed at the employees of OpenAI, which he considered to be “appalling.”

He also disclosed that his company is working to improve the chatbot's default settings to promote neutrality and to let users tailor the system's behavior to their individual preferences.

But this will not happen easily. Even Sam admitted that the process is hard and will take time to implement.

ChatGPT's political bias led to unwarranted attacks on the employees of OpenAI. That is indication enough to weigh the social consequences of any biased AI model.

AI biases can impose dire consequences on individuals and communities. They can lead to unfair criticism of or discrimination against a group of people, which indirectly breeds animosity or social unrest.

That’s why it’s important not to ignore the biases at the moment but rather demand viable solutions. 

As Sam stated, “attacking other people” does nothing to help the AI field advance. Attention should instead be drawn to possible options for reducing AI chatbot biases.

How to Reduce Bias in AI Chatbots

Human bias in AI chatbots stems from biased algorithms or data and from the developers of the models themselves. To prevent bias in AI, the following must be observed when building chatbots:

Developers of Diverse Origin

Algorithmic or data bias in AI is largely possible because the developers, who are human, are naturally biased in their approach.

It is easy for a homogeneous team of developers to build an AI model that favors or supports their own views and perspectives on certain issues.

To prevent this from happening, there's a need to employ a team of diverse data scientists who can assist in the observation and reduction of algorithmic bias. 

Use of Synthetic Data 

The consumption of large amounts of uncensored data makes AI models biased. One way to eradicate chatbot bias is to ensure the model uses synthetic data that has been thoroughly vetted by the developers.
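As a rough illustration of the rebalancing idea behind this (a crude stand-in for true synthetic-data generation, with all names and data hypothetical), a data pipeline might oversample under-represented groups so that no single group dominates training:

```python
import random

def balance_by_oversampling(records, group_key, seed=0):
    """Oversample under-represented groups until all groups are equal in size.

    Real pipelines would generate new, vetted synthetic examples
    rather than duplicate existing ones; this only shows the shape
    of the rebalancing step.
    """
    rng = random.Random(seed)
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Duplicate random members to top the group up to the target size.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Hypothetical skewed training set: 4 examples from one group, 1 from another.
data = [{"group": "A", "text": f"a{i}"} for i in range(4)]
data += [{"group": "B", "text": "b0"}]
balanced = balance_by_oversampling(data, "group")
counts = {}
for rec in balanced:
    counts[rec["group"]] = counts.get(rec["group"], 0) + 1
print(counts)  # {'A': 4, 'B': 4}
```

After rebalancing, a model trained on the data no longer sees one group overwhelmingly more often than the other.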


Transparency About Limitations

Following the backlash over the bias discovered in ChatGPT, OpenAI acknowledged that its new AI chatbot still has "trouble in keeping its facts straight and on occasion issues harmful instructions."

The AI company came clean to tell people why the chatbot still can't be relied on for important or sensitive information.

That's a form of transparency needed to curb human bias in chatbots. By stating the weaknesses of the model, social disruption can be tamed before escalating. 

Testing the Model Before & After Development

As stated, developers should test the model both before and after development.

This can take a long time, as it means assessing the chatbot's functions and addressing any form of bias that surfaces in the course of testing.

Data scientists should also closely study the data the chatbot relies on and verify how the model responds to and constructs the information it generates; this should be a compulsory examination after development.
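One simple form such a test can take, sketched here with a hypothetical toy scorer standing in for the real model, is a counterfactual check: score the same sentence with only the group term swapped, and flag any difference between groups.

```python
def toy_sentiment(text):
    """Hypothetical stand-in for the model under test:
    counts positive vs negative words."""
    positive = {"great", "reliable", "skilled"}
    negative = {"lazy", "unreliable"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def counterfactual_gaps(template, groups, scorer):
    """Score the same sentence with only the group term swapped.

    If the model is fair, every group should receive the same score,
    so the gap between the highest and lowest scores should be zero.
    """
    scores = {g: scorer(template.format(group=g)) for g in groups}
    return max(scores.values()) - min(scores.values()), scores

gap, scores = counterfactual_gaps(
    "the {group} engineer is skilled and reliable",
    ["young", "old", "male", "female"],
    toy_sentiment,
)
print(gap)  # 0, since the toy scorer treats every group the same
```

A real test suite would run many such templates against the actual chatbot before and after each release, so that any bias regression is caught in testing rather than by users.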


ChatGPT, like every other chatbot, is still far from free of human bias. It's a fundamental issue created both by the data AI is trained on and by the personal preferences of the developers.

Once developers and data science teams become clinical and objective in how they collect the data used by these AI models, AI bias will start its journey to extinction.