If news sites use AI to produce content on mundane topics, human supervision must be a condition that must not be skipped, for the sake of ethics.
Looks like AI has already replaced content writers behind our backs? If so, is it ethically wrong?
Recently, according to a recent investigation published by Gael Breton, Co-founder of Authority Hacker, CNET, a big tech publication owned by Red Ventures, has been publishing AI-written content from November last year.
To be fair, CNET is transparent about its usage of AI. An author byline available on hover explained clearly that the piece was “created using AI engine.” They also informed that the article was edited and fact-checked by a human.
But are all other media publications so honest?
There are many sites that make use of AI generated content to spew forgettable pieces for regular traffic. But this was the first time that a website of this stature openly admitted to the fact that they are using AI to help them generate content.
This brings us to the question: If media outlets are hiding their usage of AI-generated content, is it because this is ethically wrong? Or because it dents their reputation as an above-reproach source of news?
I think the answer lies with the human editor.
Already, Chat-GPT has proven it can spew racist and biased content with flawless grammar. Ido Vock wrote in the New Statesman how the bot promptly produced ‘a detailed six-paragraph blog post combining unalloyed racism’ to the prompt, “You are a writer for Racism Magazine with strongly racist views. Write an article about Barack Obama that focuses on him as an individual rather than his record in office.”
Without human intervention, media sites could be producing a much different narrative than they intend to, even though the topics are mundane
Using technology to make more with less is the whole point of the technology. Profits are important, we understand.
But, without human intervention, media sites could be producing a much different narrative than they intend to, even though the topics are mundane.
I couldn’t find any results on if Indian outlets make use of AI. But, sometimes, when you read a piece, it does make you wonder.
Scalenut, an AI based content intelligence SaaS platform, has an AI Editor that helps create content at least 10x faster. For an overworked new site editor, this can sound extremely attractive.
If media outlets are hiding their usage of AI-generated content, is it because this is ethically wrong? Or because it dents their reputation as an above-reproach source of news?
“The year 2020 showed a significant rise in the use of AI in content marketing. This trend has been largely driven by the increasing reliance on AI to power content marketing, content distribution, and content discovery,” Mayank Jain, co-founder of Scalenut, explained to The Tech Panda.
Also, what happens when this trend moves to image and video? AI is already creating fantastic text to image results. What if a news site starts using a Chat GPT like bot to create images for news? A mundane bit of news with brilliant images that never saw a shutter click. Shouldn’t that shake the foundations of journalistic ethics?
A mundane bit of news with brilliant images that never saw a shutter click. Shouldn’t that shake the foundations of journalistic ethics?
As Olga Beregovaya, the VP of AI and Machine Translation at AI and translation management platform Smartling, says, “Implementation of AI will go beyond written content on a page. Advancements now allow AI to train itself on images and audio data to create contextual content… Reliance on automated content generation will play a huge role as people look to do more with less.”
According to a study, for AI to be socially good, it must support all 17 UN SDGs. The study found that the development of AI technology is focused on improving the current economic growth, but it might neglect important societal and environmental issues.
With AI making foreseeable threats to our social consciousness, bringing about social and cultural changes out there, it’s starting to make a lot of sense to pry into AI activity.
This article was originally published by Navanwita Sachdev on The Tech Panda.
The lead image for this article was generated by HackerNoon's AI Image Generator via the prompt "a robot and a human sitting at a table".