Andrej Kovacevic

A dedicated writer and digital evangelist.

As AI Gets Better at Writing, There's Some Trouble on the Horizon

In the realm of AI development, there's perhaps no more important goal than to create systems that can truly master natural language processing (NLP). That's the key to making AI broadly useful, as it will need to interact with humans (who lack the programming skills to speak machine languages). On the path to NLP, it's fair to say that getting an AI to produce human language is a prerequisite to getting it to understand what people are saying.
The problem is that most efforts to get AI to write in human languages have led to some hilarious results. Now, however, some new AI-enabled text generators are starting to evolve into more mature systems that are able to produce results that are almost indistinguishable from human-written text. The latest iterations are so convincing that developers are beginning to restrict access to them, for fear of what they'll be used for in the wild.
The state of AI NLP development has now reached the point that the industry as a whole is going to have to start coming to grips with the ethical implications of what they're creating – lest we end up with a repeat of the deepfake scare that garnered so much attention in recent years. Here's a look at three potential areas of concern that must be addressed.

Propaganda and Fake News

Of all of the ways AI text generation might be used for nefarious purposes, one poses the biggest threat to the world at large: the possibility that AI might be used to churn out misinformation and propaganda at scale, in ways we've yet to even imagine. As far as risks go, this one's very real. There's already an AI tool known as Grover, which its designers intended as a means of detecting AI-written fake news stories. As it turns out, though, Grover is adept at creating fake news, too. You couldn't script a better illustration of the challenges the latest AI text generation tools will create – they're both the problem and the solution in one.

Education Sector Challenges

Since the dawn of the internet, educational institutions have encouraged students to use it to access knowledge from around the world. The problem is that it has also led to a dramatic rise in incidents of plagiarism, as scores of students found it easier to cut and paste text found online than to form their own original ideas. The internet also provides access to innumerable essay-writing services, which students use to offload their work and which teachers struggle to detect. It's easy to see what kind of effect the latest AI text generators may have on the education sector, creating a whole new way for students to shirk their writing assignments and research papers. If AI use becomes widespread here, it could do serious damage to educational outcomes around the world.

The Damage to Search Engines

On today's internet, there's so much content that nobody could ever make sense of it all. That's why we rely on search engines to be our arbiters of what's relevant and useful whenever we need information. They do so by evaluating websites and scoring them according to how well they satisfy user queries and the quality of the content they provide. In the search industry, though, there's already concern about how millions of AI-created blog posts, news articles, and other pieces of content will affect current-generation search engines. Some expect the effect to be like a far more damaging version of article spinning, a phenomenon that forced Google to overhaul its ranking algorithm to fight back. Experts believe that taming a wave of AI-generated content won't be anywhere near as quick or easy to accomplish.

Getting Ahead of the Problems

So far, there hasn't been much discussion – at least in the public sphere – about how to prevent the AI-fueled problems mentioned here. There's little doubt that the major developers of AI technology are trying to figure out how to get ahead of the problems their newest creations could cause, but the whole tech world should start to bring its considerable skills to bear on these issues. Otherwise, it may prove impossible to get the genie back into the bottle once it's loose. Given how damaging such an outcome would be, solutions must be found soon if we hope to head off these problems before they arrive.

