Risks of AI: Why It’s Time to Consider the Warnings of Elon Musk and 50,000+ Tech Experts

by Hekuran Gashi, May 30th, 2023


AI labs are creating a beast we cannot tame, one that could “manipulate people to do what it wants.” Elon Musk, ‘The Godfather of AI,’ and 50,000+ tech experts warn that artificial intelligence could pose profound risks to humanity.


Many prominent figures have recently weighed in on AI and put their thoughts on paper. Some urge seeing AI in a positive light; others warn that AI can pose ‘profound risks to humanity.’


Some experts argue that AI tools like ChatGPT can act as ‘a white-collar worker to help you with various tasks.’ In workplaces, for example, AI can improve productivity by serving as a resource employees can consult and by saving time on writing emails.


In learning, AI can help identify what motivates students and what makes them lose interest, and then deliver tailored content based on that information. It can also help teachers plan course instruction, evaluate students’ knowledge on exams or certain topics, and serve as a tutor in training.


In healthcare, AI can help by completing paperwork, filing insurance claims, and powering ultrasound machines that can be operated with minimal training. Perhaps the most impressive part is that it can help people in impoverished countries who rarely get to see a doctor when they need one.


After being introduced to ChatGPT, Bill Gates, who co-wrote the BASIC interpreter for the Altair 8800 and whose Microsoft supplied the MS-DOS operating system for the IBM PC, was so impressed that he said:


“I knew I had just seen the most important advance in technology since the graphical user interface.”


He is adamant that “[AI] will change how people work, learn, travel, get health care, and communicate with each other.” By some estimates, AI could add more than $15 trillion to the global economy within years through advances in medicine, biology, climate science, education, business processes, manufacturing, customer service, transportation, and more.


But AI, like many new technologies, is making many people uncomfortable. Elon Musk and Geoffrey Hinton are the latest to warn openly that AI carries potentially destructive risks.


Whichever group we belong to, we need to accept one fact: the AI horse has left the barn and is not coming back. Another may be that we are taking risks that could profoundly impact humanity.

AI could ‘manipulate people to do what it wants’

Learning about AI’s risks is like going through a wormhole. The further you go, the darker it gets.


On Monday, May 22, a fake image of an explosion near the Pentagon complex made the rounds online and took the world by storm. Twitter “blue ticks” jumped in to share it. The stock market wobbled, and the S&P 500 briefly dropped by about 30 points.


Image: Fake AI-generated image of an explosion near the Pentagon complex. Source: TMZ


This, unfortunately, is just the beginning of AI’s potential to spread misinformation, amplify propaganda, and induce panic.


Computer scientist Geoffrey Hinton, nicknamed ‘The Godfather of AI’ for his early work on the neural-network algorithms that underlie modern AI, recently told The New York Times that AI labs are creating a beast that will be hard to tame and that could end up “manipulating people to do what it wants.”


He firmly believes that AI systems are on track to become more intelligent than humans. What he once expected to take up to 50 years, he now believes may knock on our door “as soon as 5-10 years from now.” A few weeks ago, he left his part-time job at Google and stopped contributing to the company’s Bard chatbot so he could deliver a message to the world:


“I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”


OpenAI, the company behind the world’s most powerful AI chatbot to date, admitted in a recent statement: "At some point, it may be important to get an independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."


The quote ended up in an open letter pleading with AI labs to pause the development of AI systems stronger than GPT-4, a letter that has since been signed by the likes of Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and 50,000+ AI and tech experts and enthusiasts.


The letter argues that pausing AI labs’ work for at least six months is the only way to buy enough time to prepare adequate regulations and enjoy an “AI summer” without rushing into a destructive “fall.”


Elon Musk raised many eyebrows when he openly criticized OpenAI, a company in which he reportedly invested over $1 billion over time, for becoming a “closed-source, maximum-profit company,” which he says he “never intended.”


Image: Elon Musk commenting on the original intention behind OpenAI. Source: Twitter


Google’s Sundar Pichai didn’t speak lightly about AI either, saying that its risks “keep [him] up at night.” Google recently saw over $100 billion wiped off the company’s market cap after an incorrect response by its newly released chatbot, Bard.

OpenAI’s new GPT plugins can even make decisions for businesses

As if AI-powered chatbots were not enough, OpenAI recently announced plugins that let its renowned ChatGPT act on behalf of the services that host it. Major companies, including Expedia, OpenTable, Instacart, and Klarna Bank, have already integrated plugins into their services.


AI can now complete everyday tasks on people’s behalf, such as ordering a drink, booking a hotel, or reserving the best available seat in a restaurant. A related feature, known as Code Interpreter, can also write and execute code on a business’s behalf. That is hazardous from a security perspective. Or, as Linxi “Jim” Fan, a research scientist at NVIDIA AI, put it: “[The next generation of] ChatGPT will be like a meta-app—an app that uses other apps.”
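
For context on how this works: a ChatGPT plugin is described by a small manifest file (ai-plugin.json) plus an OpenAPI spec that tells the model what a service can do and when to call it. Below is a minimal sketch of such a manifest, built in Python for a hypothetical restaurant-booking service; the field names follow OpenAI’s published plugin format, but every name and URL here is a placeholder.

```python
import json

# A minimal ChatGPT plugin manifest (ai-plugin.json), sketched for a
# hypothetical restaurant-booking service. The model reads
# "description_for_model" to decide when to call the plugin's API.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Table Booker",   # hypothetical plugin name
    "name_for_model": "table_booker",
    "description_for_human": "Reserve tables at nearby restaurants.",
    "description_for_model": (
        "Use this tool to search restaurants and book a table "
        "on the user's behalf."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",  # placeholder endpoint
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(json.dumps(manifest, indent=2))
```

The security debate centers on that api field: once the model can decide, from a natural-language description alone, to call an arbitrary HTTP endpoint, content generation turns into real-world action.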


The announcement of GPT plugins has since raised many eyebrows. One of the doubters is Ali Alkhatib, acting director of the Center for Applied Data Ethics at the University of San Francisco. He argues that part of the problem with plugins for language models is that they could make such systems easier to jailbreak, warning that:


“Things are moving fast enough to be not just dangerous, but actually harmful to a lot of people.”


Going from generating content to taking action on people’s behalf closes the air gap between language models and real-world decision-making. “We know that the models can be jailbroken, and now we’re hooking them up to the internet so that it can potentially take action,” says Dan Hendrycks, director of the Center for AI Safety.


Hendrycks suspects that extensions inspired by ChatGPT plugins could make tasks like spear phishing or crafting phishing emails easier. He added, “That isn’t to say that by its own volition, ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”


Kanjun Qiu, CEO of Generally Intelligent, adds that AI chatbots “could have unintended consequences.” A plugin could, for example, book an overly expensive hotel or spread misinformation and scams.


Not only that: GPT-4’s red team found they could “send fraudulent or spam emails, bypass safety restrictions, or misuse information sent to the plugin,” as Emily Bender, a linguistics professor at the University of Washington, reported.

The GPT-4 red team also reported that GPT-4 can execute Linux commands and could potentially explain how to make bioweapons, synthesize bombs, or buy ransomware on the dark web. Bender further added:


“Letting automated systems take action in the world is a choice that we make.”

Can AI transform industries that are all about psychology?

ChatGPT has taken over online conversation, from magazines to social media platforms like LinkedIn: prompts to use for the best results, opinions on its impact, and more. Much of this writing is even produced by ChatGPT itself.


Psychology and emotion are where AI and ChatGPT fail to impress. Try asking for a timely post when a social injustice dominates the news, or an ad that connects with the audience’s emotions and offers genuine solutions to their pain points. The last time I asked ChatGPT, its answers were far from convincing: basic, at best.


Image: AI-generated social media post about ESG, ChatGPT


That’s probably not the most impressive social media post about ESG. The limitation arises mainly because AI is not necessarily good at grasping the intent behind a human request, which leads to some bizarre answers. This is where psychology and human emotion in marketing come in.


When it comes to SEO marketing, Victoria Kuchurenko shares in a recent article how her AI-generated article performed in search engines. Despite being published on a completely healthy domain, the article ranked 45th for its target keyword, with impressions trending downward. “I believe impressions will eventually drop to 0. That’s how AI content performs in organic search results today,” she wrote.


Some may argue that the software provides results according to the prompts it is given. That may be true to some extent. But not entirely.

At its core, ChatGPT runs on machine learning and natural language processing (NLP), meaning it can offer only as much as its training data and its programming allow.
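
To make that concrete, here is a minimal sketch of how a program talks to ChatGPT through OpenAI’s API, as the Python SDK looked in mid-2023 (the 0.27.x openai package); the prompt is illustrative, and the output is bounded entirely by the model’s training data and the instructions supplied here.

```python
import openai  # pip install openai==0.27.8 (the mid-2023 SDK)

openai.api_key = "sk-..."  # placeholder; supply your own API key

# The model can only draw on its training data plus what this prompt
# supplies; there is no independent understanding of the request.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a marketing copywriter."},
        {"role": "user", "content": "Write a short social media post about ESG."},
    ],
    temperature=0.7,  # higher values produce more varied, less predictable text
)

print(response.choices[0].message.content)
```

Nothing in this exchange gives the model any grasp of the brand, the audience, or the moment; it simply predicts plausible text from the prompt, which is why the results so often feel basic.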


Shagorika Heryani, a marketing expert of 20+ years who has led some of the biggest brands in India and the UAE, said that no matter how good AI is, “it can not replace human creativity; it simply forces marketers to go back to the fundamentals of marketing, which is about human psychology, and problem-solving, creativity, and critical thinking.” But she also believes that AI will inevitably disrupt the marketing space, and she has major concerns about the direction of AI in marketing.


“The biggest danger is that the weaponization of information will become a lot scarier, especially with AI’s lack of limits to form opinions on politics, religion, and other sensitive issues, in ways we marketers could handle before.”


Human marketers offer differentiation, a sense of belonging, community, and emotional connection for the consumer, something AI will hardly ever excel at.

‘A white-collar worker to help you with various tasks.’ But with lots of risks, too.

“Do you think ChatGPT and AI will take over?” a friend asked. She is not alone in her doubts about AI’s potential to massively shape the future, for worse as much as for better. The bluntest answer to this question probably comes from ‘The Godfather of AI,’ who says, “It’s not inconceivable. That’s all I’ll say.”


It’s time for AI labs to pump the brakes and for governments to develop “extremely powerful regulations” to limit AI’s risks to humanity. With the way AI development is going, one thing is certain: the clock is ticking loudly.