Recent Photo of HackerNoon Founder & CEO David Smooke.
Tim Keary from Techopedia: Confidence in journalism is at an all-time low, and artificial intelligence threatens to destabilize it further. Currently, news publishers of all sizes are being forced to decide what role AI should play in the newsroom.
Will they accept AI-generated content and headlines? How much AI use needs to be disclosed to readers? And will AI-generated news take traffic away from human-written journalism?
Techopedia reached out to David Smooke, founder and CEO of HackerNoon, a technology publisher with over 45,000 contributing writers and 4 million monthly readers, to find out how his organization is experimenting with AI, and to get his thoughts on how AI should be approached by journalists and news publishers.
The Q&A provides a brief look at how HackerNoon is experimenting with AI in its operations, what the acceptable limits of AI’s role in the newsroom should be, Smooke’s thoughts on the New York Times vs. OpenAI lawsuit, and the future of human-written journalism.
Smooke, from Colorado, founded HackerNoon in 2013, back when AI was still effectively confined to the dreams of Hollywood.
Comments and formatting have been edited slightly for brevity.
David Smooke: I’m more of a writer and a product manager than a journalist.
Within our text editor, we have a custom ChatGPT layer for rewrites and a handful of image-generation models, and we leverage AI to generate summaries at the native character count of each distribution channel. We use AI to make stories more accessible by making more versions of the story; for example, we use Google AI to translate stories into foreign languages and to generate audio versions of the blog post.
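To make the per-channel summary step concrete, here is a minimal Python sketch of how it might work. It assumes the OpenAI Python client; the channel limits, model name, and prompt are illustrative assumptions, not HackerNoon’s actual pipeline.

```python
# Illustrative sketch: generate one summary per distribution channel,
# each constrained to that channel's native character count.
# Channel limits and prompt wording are assumptions, not HackerNoon's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHANNEL_LIMITS = {  # hypothetical per-channel character budgets
    "twitter": 280,
    "linkedin": 700,
    "newsletter": 400,
}

def summarize_for_channels(story_text: str) -> dict[str, str]:
    summaries = {}
    for channel, limit in CHANNEL_LIMITS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": f"Summarize the story for {channel}. "
                            f"Stay under {limit} characters."},
                {"role": "user", "content": story_text},
            ],
        )
        text = response.choices[0].message.content.strip()
        summaries[channel] = text[:limit]  # hard cap in case the model overruns
    return summaries
```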
As a consumer of news, when it comes to the newsroom specifically, I would like journalists to research their stories with whatever advanced and relevant search technology or methodology the story calls for, but to never fully trust the AI, and to always, always verify.
It’s not acceptable for content to be presented as human-made when it was made by AI. Platforms should do what they can to indicate where and how AI contributed to the experience.
There are side effects to the mass production and mass consumption of AI-generated content. Deepfakes rack up billions of views across social media. Platforms are getting better at detecting and labeling them, but they are also easier than ever to make.
Many financial websites and tools have been using natural language processing and automation to dish out headlines in seconds because that information is important for investors. It's a speed and convenience thing vs. slower human input, but it's been going on for far longer than the generative AI boom we're currently seeing.
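For a sense of how that pre-generative automation works, here is a small, self-contained Python sketch of template-driven headline generation from structured earnings data. The record fields and wording are hypothetical, but the pattern (structured feed in, headline out in milliseconds, no generative model involved) is the one described above.

```python
# Illustrative sketch of template-driven financial headlines:
# structured data in, a headline out in seconds, no generative model needed.
from dataclasses import dataclass

@dataclass
class EarningsReport:  # hypothetical structured feed record
    ticker: str
    eps_actual: float
    eps_expected: float
    revenue_billions: float

def earnings_headline(r: EarningsReport) -> str:
    verdict = "beats" if r.eps_actual > r.eps_expected else "misses"
    return (f"{r.ticker} {verdict} estimates with EPS of ${r.eps_actual:.2f} "
            f"vs. ${r.eps_expected:.2f} expected, on revenue of "
            f"${r.revenue_billions:.1f}B")

print(earnings_headline(EarningsReport("ACME", 1.42, 1.30, 12.7)))
# ACME beats estimates with EPS of $1.42 vs. $1.30 expected, on revenue of $12.7B
```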
Yes, there is a risk of more attention moving from the publisher to the search experience. If a Google-generated AI search result solves a problem today that a page on someone’s site would have solved yesterday, that is a lost visitor. On the plus side for publishers, super-powerful AI functions are a single API call away, meaning a publisher’s own discovery and search experience may also be able to retain quality traffic longer.
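As one illustration of “a single API call away,” here is a hedged Python sketch of an on-site semantic search built on OpenAI’s embeddings endpoint. The in-memory index and article titles are stand-ins; a real publisher would use a vector database and embed full articles offline.

```python
# Illustrative sketch: on-site semantic search, one embedding API call per query.
# The toy in-memory index stands in for a real vector database.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# Pre-computed embeddings for the site's articles (built once, offline).
article_index = {
    "How LLMs Work": embed("How LLMs Work"),
    "Scaling Postgres": embed("Scaling Postgres"),
}

def search(query: str, top_k: int = 1) -> list[str]:
    q = embed(query)
    scored = sorted(
        article_index.items(),
        key=lambda kv: float(
            np.dot(q, kv[1]) / (np.linalg.norm(q) * np.linalg.norm(kv[1]))
        ),
        reverse=True,  # highest cosine similarity first
    )
    return [title for title, _ in scored[:top_k]]
```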
In the future, I anticipate that the government and even the private sector will rein in the wild-west approach under which anything on the internet could be used as training data.
Curating is a value add. Sometimes, especially if given reliable and detailed rules, AI can curate as effectively as some humans. AI is 100% changing how we search and research on the internet. I’m not even certain that Perplexity’s search differentiation is the use of AI, as Perplexity has made amazing design choices. I was not surprised to see Meta roll out a very similar scrolling-topics homepage during the launch of Meta AI chat, or to see SearchGPT use a similar design for displaying relevant sources. Google Search still dominates the market, and its use of generative AI in search results demonstrates that generative AI will be part of the future of the internet search market.
We use AI in a number of publishing systems across HackerNoon. Before publication, AI recommends headlines based on the story draft and the past performance of HackerNoon stories. The humans still write the headlines better 95% of the time, but it’s nice to have the machines generate a few relevant options. We also use AI to better curate stories, like when we had to categorize our existing library of stories.
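A minimal sketch of what such a headline-recommendation step could look like, assuming the OpenAI Python client; the prompt and the past top-performing headlines are invented for illustration, not the production system.

```python
# Illustrative sketch of a headline-recommendation step:
# the draft plus a few historically strong headlines go in,
# a handful of candidates come out for a human editor to accept or rewrite.
from openai import OpenAI

client = OpenAI()

TOP_PERFORMERS = [  # hypothetical examples of past high-performing headlines
    "I Built a SaaS in a Weekend. Here's What Broke First",
    "Why Your Kubernetes Bill Doubled (and How We Halved Ours)",
]

def suggest_headlines(draft: str, n: int = 3) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Suggest story headlines in the style of these past "
                        "top performers:\n" + "\n".join(TOP_PERFORMERS)
                        + f"\nReturn exactly {n} headlines, one per line."},
            {"role": "user", "content": draft},
        ],
    )
    return resp.choices[0].message.content.strip().splitlines()[:n]
```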
AI content is not source material. Sources will always need to be cited and linked. With the rise of AI-generated search-result summaries, and with the aggregate number of human-to-AI interactions increasing daily, it is becoming expected for search experiences to be supplemented by an AI assistant.
Don’t be afraid of competing with robots. The demand for authentic human stories is as high as it’s ever been. As AI floods the internet with billions of pieces of bad content, mediocre content, acceptable-ish content, and even some remarkable content, great storytellers will continue to rise above it. In whatever era a writer lives or lived, there are barriers to entry to acquiring readers. For most of human history there were no magazines or internet; whoever had a quality story to tell found a way to tell it. If you have stories to write, there are more ways than ever before to acquire readers from around the globe.
We will always need human originality… because we’re humans, we crave human stories.
Also Published as “What is the Role of AI in the Newsroom? We Ask Hackernoon’s CEO”