AI: the Good, the Bad, and the Need for Oversight
by @allan-grain

by Allan Grain, March 14th, 2024
Too Long; Didn't Read

Generative AI and deepfake technology portend a dangerous future if they are not built and utilized with regulation.

It’s been 35 years since computer scientist Tim Berners-Lee sent a memo detailing his idea of a "distributed hypertext system" on March 12, 1989. While it was mostly ignored by his colleagues at CERN, the European Organization for Nuclear Research, Berners-Lee decided to go with it and, well, here we are. It’s 2024 and now the big hype is generative AI, cloud computing, and virtual reality.

These are exciting times. Researchers have unveiled Devin, the first AI software engineer, and an AI version of Marilyn Monroe was introduced at SXSW. The NBA is working on an innovative generative gaming tool that creates a fully immersive environment.

But generative AI and deepfake technology also portend a dangerous future if they are not built and utilized with regulation. A new report commissioned by the US State Department paints an alarming picture of the “catastrophic” national security risks posed by rapidly evolving artificial intelligence, warning that time is running out for the federal government to avert disaster.

The US government has been urged to act “quickly and decisively” to mitigate significant national security risks posed by AI, which, in the worst-case scenario, could present an “extinction-level threat to the human species,” according to the report.

The European Union’s parliament on Wednesday approved the world’s first major set of regulatory ground rules governing artificial intelligence, the much-hyped technology at the forefront of tech investment. The EU AI Act, first drafted in 2021, divides the technology into categories of risk, ranging from “unacceptable” — which would see the technology banned — to high, medium, and low risk. The regulation is expected to enter into force at the end of the legislative term in May.

The US government and the EU are not wrong in their assessments of the dangers of AI. India also has its concerns over the technology. Millions of India’s citizens are expected to vote in April or May and the government has now moved to ensure AI is not used to manipulate media and endanger the integrity of the elections.

A groundbreaking study from the Institute for the Future of Work has found that exposure to new technologies including trackers, robots and AI-based software at work is bad for people’s quality of life.

And AI is affecting jobs.

In a report by Quartz, IBM told employees on Tuesday that it is slashing jobs in its marketing and communications division. IBM is the latest in a wave of tech companies to announce layoffs early this year. Tech giants like Google-parent Alphabet, Microsoft, and Amazon all announced job cuts in 2024. More recently, they were joined by Sony, Bumble, and Expedia.

According to IBM, “This rebalancing is driven by increases in productivity and our continued push to align our workforce with the skills most in-demand among our clients, especially areas such as AI and hybrid cloud.”

What’s clear is that AI is advancing at a remarkable pace and creating fresh opportunities for companies and consumers alike, but there are inherent dangers built into the technology that create a need for at least some level of oversight and regulation.