Many prominent figures have recently weighed in on AI and put their thoughts on paper. Some urge seeing AI in a positive light, while others warn that it can pose ‘profound risks to humanity.’
Some experts argue that AI and ChatGPT can act as a force for good, particularly in education and healthcare.
In learning, AI can help identify what motivates students and what makes them lose interest, then deliver tailored content based on that information. It can also help teachers plan course instruction, evaluate students’ knowledge on exams or specific topics, and serve as a tutor during training.
In healthcare, AI will help by completing paperwork, filing insurance claims, and powering ultrasound machines that can be used with minimal training. Perhaps most impressive, it can help people in impoverished countries who can rarely see a doctor when they need one.
After being introduced to ChatGPT, Bill Gates, who co-wrote the BASIC interpreter for the Altair 8800 and whose company supplied the MS-DOS operating system for the IBM PC, was so impressed that he said:
“I knew I had just seen the most important advance in technology since the graphical user interface.”
He is adamant that “[AI] will change how people work, learn, travel, get health care, and communicate with each other.” In five years, AI may add more than $15 trillion to the global economy through advances in medicine, biology, climate science, education, business processes, manufacturing, customer service, transportation, and more.
But AI, as with many new technologies, is making many people uncomfortable. Elon Musk and Geoffrey Hinton are the latest to openly voice the opinion that AI comes with potentially destructive risks.
Whichever group we belong to, we need to accept one fact: the AI horse has left the barn and is not coming back. Another fact may be that we are taking risks that could profoundly impact humanity.
Learning about AI’s risks is like going through a wormhole. The further you go, the darker it gets.
On the eve of Monday, May 23, a controversial image of an explosion near the Pentagon spread across social media, briefly rattling the stock market before it was debunked as an AI-generated fake. This, unfortunately, is just the beginning of AI’s potential to spread misinformation, amplify non-factual propaganda, and induce panic.
Computer scientist Geoffrey Hinton, who earned the nickname ‘The Godfather of AI’ for his early work on the algorithms that train the machines underlying AI, recently went public with his fears about the technology he helped create.
He firmly believes that AI machines are on course to become more intelligent than humans, and what he once thought would take up to 50 years to become reality, he now believes may knock on our technological door “as soon as 5-10 years from now.” A few weeks ago, he left his part-time job at Google and ended his contribution to the company’s Bard chatbot so he could deliver a message to the world:
“I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”
The company behind the world’s most powerful AI chatbot to date, OpenAI, acknowledged in a recent statement on planning for artificial general intelligence that “at some point, it may be important to get independent review before starting to train future systems.”
The quote ended up being included in an open letter, signed by more than a thousand tech leaders and researchers, calling for a pause on training AI systems more powerful than GPT-4.
The letter suggests that pausing work at AI labs for at least six months is the only way to gain enough time to prepare adequate regulations and enjoy an “AI summer” without rushing into a destructive “fall.”
Elon Musk raised many eyebrows when he signed the letter, having co-founded OpenAI before distancing himself from the company.
Google’s Sundar Pichai didn’t take AI’s dangers lightly either, saying that its risks “keep me up at night.”
As if AI-powered chatbots were not enough, OpenAI recently announced plug-ins that allow ChatGPT to take action on its user’s behalf. Major companies, including Expedia, Instacart, Kayak, Klarna, and OpenTable, have already built plug-ins for the platform.
AI can now even complete everyday tasks on people’s behalf, such as ordering a drink, booking a hotel, or reserving the best possible seat in a restaurant. A new feature, known as the “code interpreter,” can also write and execute code on a business’s behalf. And that’s hazardous from a security perspective.
The announcement of the ChatGPT plug-ins has since raised many eyebrows. One skeptic of such plug-ins’ safety is Ali Alkhatib, acting director of the Center for Applied Data Ethics at the University of San Francisco. He argues that part of the issue with plug-ins for language models is that they could make such systems easy to jailbreak, warning that:
“Things are moving fast enough to be not just dangerous, but actually harmful to a lot of people.”
Going from generating content to taking action on people’s behalf closes the air gap that has kept language models away from real-world decision-making. “We know that the models can be jailbroken, and now we’re hooking them up to the internet so that it can potentially take action,” says Dan Hendrycks, director of the Center for AI Safety.
Hendrycks suspects that extensions inspired by ChatGPT plug-ins could simplify tasks like spear phishing or crafting phishing emails. He added, “That isn’t to say that by its own volition, ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”
Kanjun Qiu, CEO of the AI research company Generally Intelligent, has voiced similar concerns about handing language models the ability to act in the world.
Not only that: GPT-4’s red team found that plug-ins could be used to “send fraudulent or spam emails, bypass safety restrictions, or misuse information sent to the plugin,” as Emily Bender, a linguistics professor at the University of Washington, reported.
The GPT-4 red team also found that because GPT-4 can execute Linux commands, it has the potential to explain how to make bioweapons, synthesize bombs, or buy ransomware on the dark web. Bender further added:
“Letting automated systems take action in the world is a choice that we make.”
ChatGPT has taken over online conversations, from magazines to social media platforms like LinkedIn: prompts to use for the best results, opinions on its impact, and whatnot. Much of this writing is even done by ChatGPT itself.
Psychology and emotion are where AI and ChatGPT fail to impress. Ask it, say, to write a timely post when a social injustice dominates the news, or an ad that connects with the audience’s emotions and offers genuine solutions to their pain points. The last time I asked ChatGPT, its answers were far from convincing. Basic, at best.
That’s probably not the most impressive social media post about ESG. The limitation stems mainly from AI not being particularly good at understanding the intent behind a human request, which leads to some bizarre answers. This is where psychology and human emotion in marketing come in.
When it comes to SEO marketing, one specialist warned in a recent article about the risks of leaning too heavily on ChatGPT for content.
Some may argue that the software simply returns results according to the prompts it’s given. That may be true to some extent. But not entirely.
At its core, ChatGPT runs on AI, machine learning, and natural language processing (NLP), meaning it can only offer as much information as it has been given, in the ways it has been programmed to. As that article’s author warns:
“The biggest danger is that the weaponization of information will become a lot scarier, especially with AI’s lack of limits to form opinions on politics, religion, and other sensitive issues, in ways we marketers could handle before.”
Human marketers offer differentiation, a sense of belonging, community, and emotional connection for the consumer, something AI will hardly ever excel at.
“Do you think ChatGPT and AI will take over?” a friend asked. She is not alone in her doubts about AI’s potential to massively impact the future, negatively as much as positively. The bluntest answer to this question probably comes from ‘The Godfather of AI,’ who says, “It’s not inconceivable. That’s all I’ll say.”
It’s time for AI labs to pump the brakes and for governments to develop “extremely powerful regulations” to limit AI’s risks to humanity. With the way AI development is going, one thing is sure: the clock is ticking loudly.