
Navigating the Impact of Generative AI

by Craig Burland, May 18th, 2023

Too Long; Didn't Read

Generative AI (GAI) continues to gain mass and momentum. Recent pronouncements that major governments will consider legislation to control GAI are likely to escalate into a global conversation. But it's naïve to think politicians can overcome the physics of innovation. Clever humans will find new ways to use this tool, for good and bad, and will continually adapt as it evolves.


Like a snowball rolling downhill, AI (specifically generative AI) continues to gain mass and momentum, gathering likely and unlikely content in the process. Almost daily, announcements tumble out about new uses of Generative AI (GAI): improving healthcare, streamlining business, and creating art. And while some marvel at the innovation, others urgently sound the alarm about misuses, from malware to misinformation, from weaponization to humanity's subjugation. Recent pronouncements that major governments around the world will consider legislation to control GAI are likely to escalate into a global conversation. But it's naïve to think politicians can overcome the physics of innovation. There's no putting the GAI genie back in the bottle.


Taking a step back: as little as three years ago, a product without AI was considered last-generation. From business intelligence (BI) to endpoint detection and response (EDR), products needed AI/ML to make sense of the volumes of data being collected and aggregated. There was simply too much data for humans to process. Finding needles in haystacks was the job of the AI/ML engines: surfacing nuggets we couldn't see and patterns we wouldn't find. The virtuous cycle of compute power and mass storage then fueled the development of generative AI, moving beyond the analysis of past data and into the generation of new and unique content. In hindsight, GAI was a logical evolution of a trajectory that started when we embraced Big Data. Yet despite this progression, tools like ChatGPT have evoked fears pulled straight from science fiction movies.



ChatGPT burst into the public consciousness in late 2022, providing answers in well-wrapped, easily understandable language. Adding to its mystique was the ability to refine results with follow-up questions or challenges. It felt like a conversation, something search engine queries couldn't offer and no publicly available tool had managed before. Use cases and misuse cases exploded as more and more people pushed the bounds of what was possible, and the constant stream of headlines drew the attention of thought leaders, scholars, and legislators.
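For readers curious what that conversational refinement looks like mechanically, here is a minimal sketch using OpenAI's public Python SDK. The model name is an illustrative assumption (any chat-capable model works); the key point is that each follow-up request resends the prior turns, which is what makes the exchange feel like a continuing conversation.

```python
# Minimal sketch of multi-turn "refinement" with the OpenAI chat API.
# Assumes OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Explain DNS in one paragraph."}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up "challenge" rides on the full history; the model has no
# memory of its own, so the conversation is reconstructed on every call.
messages.append({"role": "user", "content": "Now explain it to a five-year-old."})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```

Notably, the API itself is stateless: the "conversation" is an illusion maintained by the client resending history, which is part of why the interaction pattern spread so quickly to other tools.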


In March, more than 100 AI researchers and engineers authored an open letter expressing concerns about GAI's potential to power deepfakes, spread misinformation, and generate harmful content. The concerns were forward-looking but rooted in a desire to establish guidelines for GAI before the momentum of innovation became unstoppable. The signatories called for the development of ethical guidelines, the creation of tools to detect and disarm harmful content, and the education of the public.


Multiple governments responded to the letter, echoing the authors' concerns and pledging to develop governance. While the US government wasn't among the respondents, on May 4th President Biden announced actions to address concerns with GAI. But in the short term, any regulation will be largely ceremonial and practically unenforceable.


Governments will be hard-pressed to curtail the building of new models, slow expanding capabilities, or ban malicious use cases. Unlike nuclear material or enrichment facilities, these models can be constructed and proliferated anywhere on the globe without detection. Nor will ethical guidelines slow the rolling snowball, since most of the malicious uses will come from sources of questionable intent.


Generative AI is neither good nor bad. It is a tool. This innovation has tremendous potential and troubling downsides. Clever humans will find new ways to use this tool, for good and bad, and will continually adapt as it evolves.


Is legislation the answer? Perhaps, but only for a sliver of what GAI could do.


Are ethical guidelines the answer? Maybe, but only for those researchers and developers who are guided by a moral compass.


Is the response to generative AI more generative AI, tuned to detect and label non-human output? This answer seems the most plausible; in fact, it's almost a certainty.
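As a concrete, deliberately simplified illustration of what such a detector might do, the sketch below scores text by its perplexity under a small language model, on the rough theory that machine-generated text tends to be more statistically predictable than human prose. The model choice, threshold, and labels are illustrative assumptions, not a production detector.

```python
# Toy machine-generated-text detector: lower perplexity under a language
# model is (crudely) treated as a sign of non-human text. Illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == inputs, the model returns mean cross-entropy per token.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

def label(text: str, threshold: float = 40.0) -> str:
    # The threshold is a placeholder; real detectors are calibrated on
    # large corpora and still produce false positives and negatives.
    return "likely machine-generated" if perplexity(text) < threshold else "likely human"

print(label("The quick brown fox jumps over the lazy dog."))
```

Real detectors are far more sophisticated, and they still struggle with false positives, which is precisely why they will have to evolve in lockstep with the generators.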


As these engines evolve and misuses multiply, content filters and scanners will evolve as well, detecting non-human output and labeling it as such. There's no putting the generative AI genie back in the bottle. The solution is to get one of your own.