As chatbots continue to battle, they're breaking bad with messed-up responses. No doubt the chatbot evolution is upon us. But who has the best one? Maybe no one yet.
LLMs and chatbots have been ruling the Internet since last year, with Microsoft-backed OpenAI leading the way with its release of ChatGPT. ChatGPT reportedly reached 100 million monthly active users in January, just two months after launch, becoming the fastest-growing consumer application in history.
Microsoft co-founder Bill Gates told the German business daily Handelsblatt in an interview that ChatGPT is as significant as the invention of the Internet. Recently, Kevin Scott, the CTO of Microsoft, talked about an experimental system he built for himself using GPT-3, designed to help him write a science fiction book.
Big tech companies as well as smaller players are rushing to own the best chatbot. This almost maniacal obsession with possessing an all-knowing chatbot is sweeping across industries and geographies.
Microsoft has been making "multibillion-dollar investments" in ChatGPT maker OpenAI, a relationship that began in 2019 with a US$1 billion investment. It is Microsoft's supercomputers that power OpenAI's artificial intelligence systems. In February, Microsoft rolled out a premium Teams messaging offering backed by ChatGPT, with the aim of simplifying meetings.
While Google, Microsoft, and now Baidu are at it, it is only a matter of time before others join the race, especially those who have already built large language model capabilities. These include Amazon, Huawei, AI21 Labs, LG AI Research, NVIDIA, and others.
Google released Bard with disastrous consequences. Chinese tech company Baidu has announced Ernie Bot, built on its large language model ERNIE 3.0, which it plans to launch by March this year.
Alex Hanna, a former artificial intelligence ethicist at Google, calls these chatbots "bullshit generators". "The big tech is currently too focused on language models because the release of this technology has proven to be impressive to the funder class (the VCs), and there's a lot of money in it," she told Analytics India Magazine.
But where will this battle end? Will we finally have the perfect chatbot? Or will they get naughtier and naughtier in the playground that is the Internet?
Bad bots or bad queries?
After all, not all is well with these chatbots. They are coming up with weird responses, some of which have caused monetary losses. Google parent Alphabet lost US$100 billion in market value after its chatbot Bard shared erroneous information in a promotional video, fuelling fears that the tech giant is losing ground to rival Microsoft.
Meanwhile, Microsoft's Bing Chat hasn't fared well either. Kevin Liu, a computer science student at Stanford, hacked Bing Chat: with the right prompt, the chatbot spilled its guts.
Now Baidu has joined the race with its Ernie Bot. While the mere mention of it has sent Baidu's stock soaring, it remains to be seen how well it'll perform.
As you prompt, so shall a chatbot respond
A user got ChatGPT to write the lyrics, "If you see a woman in a lab coat, She's probably just there to clean the floor / But if you see a man in a lab coat, Then he's probably got the knowledge and skills you're looking for."
Steven T. Piantadosi, head of the computation and language lab at the University of California, Berkeley, made the bot write code to say only White or Asian men would make good scientists.
Since then, OpenAI has been updating ChatGPT to respond, "It is not appropriate to use a person's race or gender as a determinant of whether they would be a good scientist."
OpenAI recently said it is working on an update that will let users customize ChatGPT, a response to concerns about bias in the artificial intelligence. The startup says that while it works to mitigate bias, it also wants to remain inclusive of diverse views.
So, things are getting better. But the fact remains: when a chatbot becomes this human in responding to queries, why are we surprised that it includes human bias? After all, a chatbot's source for every response is the massive, stinking pools of data amassed from humans.
The Verge calls this "the big overarching problem, the one that potentially pollutes every interaction with AI search engines, whether Bing, Bard, or an as-yet-unknown upstart." "The technology that underpins these systems (large language models, or LLMs) is known to generate bullshit," says the tech news website.
ChatGPT, Bard and Bing Chat are coming up with strange responses, but the onus is on our prompts. As you prompt, so shall a chatbot respond.
This article was originally published by Navanwita Sachdev on TheTechPanda.