
Calls to Regulate AI Are Growing Louder: Can Governments Initiate A Pro-Innovation Approach?

by Jacob Wolinsky, April 20th, 2023

Calls for a pro-innovation approach to regulating Artificial Intelligence (AI) through new laws and policies have grown increasingly loud in recent weeks.


The sudden turn of events comes after key figures in the tech community, including Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and OpenAI scientist Yonas Kassa, among others, added their names to the growing list of signatories calling for a “pause” on the ongoing development of AI tools and systems.


In March, the Future of Life Institute published an open letter cautioning AI developers and labs about the risks and threats AI systems could pose to the future of humanity if development continues at its current pace.


After decades of governments and software companies being pitted against one another over the potential regulation of artificial intelligence tools and systems, it seems that many of them are now climbing into the same boat as fears grow of AI outsmarting humanity and rendering civilization obsolete.


The ever-growing “danger race,” as critics have called it, has brought public and private opinion to the same side in considering regulation.


While institutional frameworks already exist to protect user privacy and data, these apply only within the regulatory confines of the digital ecosystem and vary across jurisdictions.


The quest to establish a regulatory approach requires broader and more widespread support from both public and private stakeholders, and it raises the question of whether we can agree on and implement a streamlined regulatory framework that protects users and slows down the AI “danger race.”

The Sudden Call to Action

The race to develop and launch the next generation of AI systems has become increasingly heated. Multinational tech giants have devoted millions, and in some cases billions, of dollars to fund the research and development of advanced AI tools.


OpenAI’s ChatGPT was perhaps the grain that tipped the scales. Microsoft, which backs OpenAI, is now embedding a similar version of ChatGPT in its Office applications, including Word, Excel, PowerPoint, and Outlook.


Google parent company Alphabet has also announced its own AI models and has already released Bard, a chatbot built on its advanced AI software.


Chinese technology groups Baidu and Alibaba have announced that they, too, are looking to develop native AI systems for use across the platforms already integrated with their services.


Alibaba plans to roll out a ChatGPT-style system called Tongyi Qianwen, which will initially be added to Alibaba’s workplace messaging app, DingTalk. If that deployment succeeds, Tongyi Qianwen could see wider rollouts in the coming year.


Artificial intelligence in the hands of multinational tech giants, which have both the prowess and the resources to develop advanced AI systems at a time when user privacy and cybersecurity issues remain unresolved, is the smoke plume of a raging wildfire that many are ignoring.


Beyond the initial risks that come with developing tools that are smarter, faster, and more efficient than humans, questions about privacy and data protection, among other things, have become topics of interest for policymakers.


Yet, for all the investment and advanced programming, we still understand little about how these systems work, or whether user data and information are handled securely and with the consent of every person they interact with.


Furthermore, a recent report by the global investment bank Goldman Sachs suggests that AI systems could replace the equivalent of more than 300 million full-time jobs in the coming decades.


While this could mean a productivity boom and the creation of new jobs, it would come at the cost of automating roughly a quarter of work tasks in the United States and Europe.


Other risks include the spread of misinformation, something government regulators have been struggling to get a handle on of late.


Discrimination relating to race and age has also become a talking point in recent months.


This has prompted several states to introduce new legislation that would protect children from exposure to sensitive and harmful content and hold companies accountable for discriminatory practices.


This time, it seems lawmakers aren’t divided along party lines.


Both Democratic and Republican lawmakers are hoping to launch pro-innovation initiatives in the coming years that would regulate AI more aggressively while seeking ways to introduce it progressively into the public sphere.

Where Are the Calls Coming From?

From all over the world, it seems.


Italy has become the first Western country to block ChatGPT after the Italian data-protection authority cited privacy concerns relating to ChatGPT’s model.


Although Italy isn’t the first to do so, countries including China, North Korea, Iran, and Russia have all blocked the use of ChatGPT in recent months.


In America, federal regulators under the Biden Administration have released a public survey to obtain input on policies that could help mitigate the risks of AI and hold accountable those responsible for harm.


The National Telecommunications and Information Administration published an “AI Accountability Request for Comment” to inform its assessment of proposed regulations.


California, Connecticut, Illinois, and Texas are among the states that have moved ahead to initiate regulatory frameworks for AI systems and companies.


Elsewhere, in the United Kingdom, the Information Commissioner’s Office, an independent data regulator, has said it supports AI development but is prepared to challenge those who do not comply with current data protection laws and regulations.


On March 29, 2023, the UK government published a white paper outlining its pro-innovation approach to regulating AI in the coming years.


Though the government seeks to develop models and strategies that would make the UK a global science and technology powerhouse, its vision is to strike a balance in which humans and AI can coexist.


The Australian government has established the National AI Centre, a government-funded institution that seeks to develop and grow the country’s AI ecosystem.


Alongside it is the Responsible AI Network, an initiative that helps shape responsible practices, standards, and laws for the regulation of AI.


The European Union (EU) is perhaps leading the race in terms of AI regulation with the creation of the Artificial Intelligence Act.


If the proposed law comes into effect, it would ban applications and systems that pose unacceptable risks to users and extend oversight to other applications that are currently unregulated.


Then there’s China, which has focused its regulatory efforts on designing systems that target algorithm applications that generate harmful and sensitive content accessible to viewers of all ages.


What’s perhaps worrisome is that the list doesn't stop there.


In a 2018 study by Bristows, 42% of British citizens said that AI should be regulated by the UK central government, while a further 18% felt that the European Union or United Nations should impose regulatory frameworks to control the development and deployment of AI.


Another study found that two-thirds of Australians feel more can be done to protect and safeguard users from AI, and have called on the government to act quickly on their demands.


A study by Pew Research Center found that 45% of American adults are equally concerned and excited about the prospects of Artificial Intelligence.


That figure doesn’t overshadow, however, the 37% who are more concerned than excited about the future of AI in the public sphere.


While concerns are being voiced loud and clear, the more problematic aspect is that introducing new regulation or updating existing policies and laws could still take several years.


By the time governments finally get a grip on regulation, it may already be too late.


Over the years, private organizations have called for more ethical guidance rather than full-scale regulation.


This could set the tone for more balanced industry competition while allowing policymakers time to better understand how to develop an open-ended regulatory framework.


This, however, doesn’t come without opposition, as some academic scholars have argued that stringent regulations could hinder AI’s full potential.


The gradual development of regulation could invite a kind of creative destruction: the dismantling of existing structures and their reassembly around more innovative practices.


This could be a beneficial approach, lending private and public institutions more time to understand how to formally regulate a technology that is already in widespread use.

Finding a Possible Solution

Looking ahead, the regulatory argument becomes somewhat blurred, even as governments and public opinion lean towards a more constructive understanding of how humanity can protect itself against the prowess of Artificial Intelligence.


While there is room for further innovation and a broader understanding of how humanity and technology can coexist, it will require government entities and AI companies to work together to find plausible solutions that promote the development of AI without hindering its full potential.


Yet this raises the question of what the full potential of Artificial Intelligence could be, and how far we are willing to go to put it to the test.


There’s a lot that one needs to factor into the equation, and in the short term, a six-month suspension of AI development could give policymakers just enough time to initiate a conversation that could help broaden the assessment of AI regulation.


What they do afterward would be a race against time, requiring governments and private institutions to sit in the same boat and decide how to control an industry that has the potential to become a destructive weapon.