
LLMs: Is NIST's AI Safety Consortium Relevant Amid California's SB 1047?

by stephen, August 29th, 2024

Too Long; Didn't Read

Having more than 200 organizations is an opportunity for more than 200 different technical approaches to AI safety. That abundance would be useful for combating AI threats, misuses, and risks, both present and future, from all sources and destinations, not just major sources, commercial ones, or friendly locations. There is no evidence that all of the member companies are currently conducting technical research in AI safety. There is no aggregation of their work, no collaborative collection, and no shared lookout for technical approaches to safety on display across the membership that would show the consortium making a major difference. Those who are working on AI safety would have been doing so anyway without the consortium, which limits the range of solutions needed for the unknowns ahead in AI safety, alignment, and governance.

How big a problem is digital piracy? It can be argued that regulation and litigation have pushed it out of the mainstream. Yet when some AI companies needed training data, why were pirated copies of content allegedly accessible to be crawled and scraped?


Why are several other unlawful activities possible online in ways they are not in the physical world, so to speak? What is peculiar about the digital realm that makes it difficult for regulation and litigation to decisively eradicate problems?


One easy-to-identify issue, especially with the internet in recent decades, is that development has run ahead of safety. Safety is often added later by some of the originating companies, then tested in the real world, where flak has led to adjustments.


Though policy, frameworks, governance, terms of service, best practices, and so forth have been necessary for internet platforms, what would have been more effective is a broad range of technical angles on safety. Technical safety, with evolving approaches from several corners, would have done more for the internet than reliance on a traditional law-led approach.


The same problem may follow AI safety, which regulations are now trying to correct early, after the errors of social media and the internet. Regulating social media earlier might have worked in some respects, but numerous lapses would still have led to some of the same problems that eventually occurred. Social media is only as harmful as its shortage of technical safety allows.


Suppose, for some social media users, there had been plugins that went through pages for them before they saw what was there, or plugins that sought to reinterpret what they were seeing or hearing rather than merely fact-check it, or plugins that offered parallel ways of use so that going to social media would have been unnecessary in some instances. These might have helped considerably.
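To make the first idea concrete, here is a minimal, hypothetical sketch in TypeScript of a content script that screens posts on a page before the user reads them. Everything in it is illustrative: the `.post` selector, the `RiskRule` list, and the blur-and-reveal behavior are assumptions, not a description of any real plugin; a working tool would need a maintained rule set or a local model rather than a few regular expressions.

```typescript
// Hypothetical sketch: screen a page's posts before the user sees them.
// The ".post" selector and the rules below are illustrative assumptions.

// A screening rule: a pattern to look for and a label to attach on match.
interface RiskRule {
  label: string;
  pattern: RegExp;
}

// Toy rules; a real plugin would use an evolving rule set or a local model.
const RULES: RiskRule[] = [
  { label: "possible-scam", pattern: /wire\s+money|guaranteed\s+returns/i },
  { label: "possible-harassment", pattern: /\byou people\b|\bgo back to\b/i },
];

// Result of screening one piece of text.
interface ScreenResult {
  text: string;
  labels: string[]; // empty when nothing matched
}

// Check a single post's text against every rule.
function screenText(text: string): ScreenResult {
  const labels = RULES.filter((r) => r.pattern.test(text)).map((r) => r.label);
  return { text, labels };
}

// Walk the posts on a page, screen each one, and blur flagged posts with a
// note, leaving the user to decide whether to reveal them.
function screenPosts(root: Document | Element = document): void {
  root.querySelectorAll<HTMLElement>(".post").forEach((post) => {
    const result = screenText(post.textContent ?? "");
    if (result.labels.length > 0) {
      post.style.filter = "blur(6px)";
      post.title = `Screened: ${result.labels.join(", ")} (click to reveal)`;
      post.addEventListener("click", () => (post.style.filter = ""), { once: true });
    }
  });
}

// In a browser extension this would run as a content script on page load.
screenPosts();
```

The point of the sketch is not the specific rules but the placement: the screening runs on the user's side, before viewing, independent of whatever safety the platform itself chooses to build.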


These are not just descriptions of what to do in retrospect, but things that are still necessary and that no team seems to be doing. Reliance for safety generally falls on the companies, but they may only do what is feasible or what benefits the business. Had there been efforts from many sources on technical solutions, more might have been possible against the past and current disadvantages of social media.


In a way, the effort to correct that led to the establishment of the US AI Safety Institute, which followed the UK AISI and was itself followed by the EU AI Office. The purpose, in part, is to conduct technical research for AI safety.


The US AISI also has a safety consortium, consisting of several organizations, whose Consortium Cooperative Research and Development Agreement (CRADA) states that "The purpose of the NIST Artificial Intelligence Safety Institute Consortium (“Consortium”) is to establish a new measurement science that will enable the identification of proven, scalable, and interoperable measurements and methodologies to promote safe and trustworthy development and use of Artificial Intelligence (AI), particularly for the most advanced AI (“Purpose”)," and lists expected member contributions, including "Technical expertise," "Models, data, and/or products to support and demonstrate pathways to enable safe and trustworthy AI systems," "Infrastructure support for Consortium projects in the performance of the Research Plan," and "Facility space and hosting of Consortium Members’ participants, workshops, and conferences."


The consortium was announced in February, which is enough time for every organization included to have set up an AI safety department, lab, or desk, with a link available on the consortium's list. It is probably not enough to have a mandate; out of necessity, members should actively follow through with work and varied approaches toward the objectives. The AI companies on the list are developing products as well as safety.


Others may have their own paths, but the pattern appears to be the same one in which engineering and safety are bundled by major firms, as they were for social media.


Having more than 200 organizations is an opportunity for more than 200 different technical approaches to AI safety. That abundance would be useful for combating AI threats, misuses, and risks, both present and future, from all sources and destinations, not just major sources, commercial ones, or friendly locations. There is no evidence that all of the member companies are currently conducting technical research in AI safety.


There is no aggregation of their work, no collaborative collection, and no shared lookout for technical approaches to safety on display across the membership that would show the consortium making a major difference. Those who are working on AI safety would have been doing so anyway without the consortium, which limits the range of solutions needed for the unknowns ahead in AI safety, alignment, and governance.


California lawmakers just passed SB 1047, which is symbolic but unlikely to be effective without thorough technical solutions. The targeted companies may comply, both because compliance is within their means and because they want to survive. But risks from others that the regulation does not cover would remain and may overwhelm compliant models.


There are several angles from which AI safety can proceed, from the theoretical neuroscience of how the human mind stays cautious to novel technical areas outside the regimented paths of some AI corporations. There should be as many AI safety institutes, labs, and departments as possible, for broad approaches against an intelligence that is capable but not natural, which, for now, can be put to some negative purposes, with uncertainties as it improves.


There is a recent story on Reuters, Contentious California AI bill passes legislature, awaits governor's signature, stating that, "California lawmakers passed a hotly contested artificial-intelligence safety bill on Wednesday, after which it will need one more process vote before its fate is in the hands of Governor Gavin Newsom, who has until Sept. 30 to decide whether to sign it into law or veto it. Tech companies developing generative AI - which can respond to prompts with fully formed text, images or audio as well as run repetitive tasks with minimal intervention – have largely balked at the legislation, called SB 1047, saying it could drive AI companies from the state and hinder innovation. The measure mandates safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power. Developers of AI software operating in the state also need to outline methods for turning off the AI models if they go awry, effectively a kill switch. The bill also gives the state attorney general the power to sue if developers are not compliant, particularly in the event of an ongoing threat, such as the AI taking over government systems like the power grid. As well, the bill requires developers to hire third-party auditors to assess their safety practices and provide additional protections to whistleblowers speaking out against AI abuses."


There is a new press release, U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation With Anthropic and OpenAI, stating that, "Today, the U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing and evaluation with both Anthropic and OpenAI. Each company’s Memorandum of Understanding establishes the framework for the U.S. AI Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks. Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute."

