“If an article is generated entirely by ChatGPT or a comparable service, who’s the writer? Who do you put as the author? Or does it matter?”
ChatGPT has been making waves in the content creation industry over the past couple of months. But as writers learn the ropes of this language model, ethical questions are arising, and publications are becoming wary of AI-generated content.
On today’s episode of Startups On Demand, I am joined by Cate of Tech EU.
Cate: It would be easy for me to say “oh look, we’ve got a party trick that’s just for fooling around,” but startups and established companies alike are leveraging ChatGPT either to expand their product or service, or as a way to create new proprietary tech. So they’re taking those open-source elements of generative AI and adding their own technology on top to create new products that solve industry challenges. So it is more profound, and moving faster, than I was anticipating.
Cate: We’ve looked at it at Tech EU because we’re as interested as everybody else, and we’ve got as much pressure as every other publication on the planet to produce a lot of content. We tried it out for roundup, listicle-style articles, like “here are 10 companies doing X,” and it may be able to give you 10 companies from a particular region along with a synopsis of each, but it’s just not very interesting. It doesn’t give you color or feel, particularly if you’re looking at companies in the same sector.
It won’t compare and contrast them, at least not that I’ve seen so far. So what we really try to give you are the bits that make a story interesting: is there an interesting origin story? Is there a competitive advantage? Something a little out of the ordinary that makes the story more than just facts.
Cate: Sure! Let’s face it: the growth of this tech is exponential. But it will also depend on applicability. I know some publications are, for example, screening quotes to make sure they haven’t been generated by ChatGPT. So people are skeptical.
But when it comes to something like copywriting or journalism, where there’s a large freelance or contractual workforce, it’s probably more of a challenge, because there are fewer checks and balances at smaller publications. There’s also less of a critical mass to set standards around an ethical framework for ChatGPT and to use these tools responsibly. At the same time, I think it’s exciting in areas like filmmaking and drug discovery, though applicability will be a challenge there too.
Cate: No, not at all. I think we’re all trying it because we’re curious, and that’s part of the work: you’re curious about things and you’d like to try them out. If something’s easy to use, you’re more likely to use it. I think the issue would be: if an article is generated entirely by ChatGPT or a comparable service, who’s the writer? Who do you put as the author? Or does it matter? Or maybe this is what these tools are for: a framework to which the writers can add color. Is that a bad thing? I don’t think it is. And let’s face it, in the media this is not new: Bloomberg, MSN News, and Yahoo News have been using these kinds of tools for almost a decade.
Cate: Particularly if you’re talking about commercial product offerings. I’ve seen people use it for storyboarding film scripts: they take screenplays and generate 3D visual elements for parts of the film. It’s obviously not there to make the film itself; it’s for planning purposes. There are a lot of good use cases, but I think people are still sorting out which ones will stick.