
Reconciling AI Governance and Cybersecurity

by Shashank Yadav, October 1st, 2023

Too Long; Didn't Read

In the realm of AI security and governance, metapolicies often take a back seat, leaving the evolving threat landscape inadequately addressed. Multilateral collaboration, blending multistakeholderism and top-down approaches, emerges as a crucial solution. This article delves into the challenges, politics, and potential reforms necessary to safeguard the processes and policies driving AI systems, rather than just the outcomes.



Recently, Sam Altman has been touring the world, attempting (perhaps) a regulatory capture of global AI development. No wonder OpenAI does not like open-sourced AI at all. This post, however, is not about AI development but about its security and standardization challenges. Both an advanced cyber threat environment and the defenders’ cyber situation awareness and response capabilities (referred to together here as ‘security automation’ capabilities) are overwhelmingly driven by automation and AI systems. To get an idea, take something as simple as checking and answering your Gmail today, and enumerate the layers of AI and automation involved in securing and orchestrating that simple activity.


Thus, all organizations of noticeable size and complexity have to rely on security automation systems to effect their cybersecurity policies. What is often overlooked is that there also exist cybersecurity “metapolicies” that enable the implementation of these security automation systems: automated threat data exchange mechanisms, underlying attribution conventions, and knowledge production/management systems. Together these enable the detection and response posture that marketers and lawyers often call “active defense” or “proactive cybersecurity”. However, pick up any national cybersecurity policy and you would be hard-pressed to find anything on these metapolicies – they are usually implicit, brought into national implementations largely by influence and imitation (i.e. network effects) rather than by formal or strategic deliberation.


These security automation metapolicies matter to AI governance and security because, in the end, all these AI systems, whether purely digital or cyber-physical, exist within the broader cybersecurity and strategic matrix. We need to be asking whether retrofitting the prevalent automation metapolicies will serve the future of AI well.


Avoiding Path Dependency

Given the tendency towards path dependency in automated information systems, what has worked tolerably so far is getting further entrenched in newer, adjacent areas of security automation, like the intelligent/connected vehicle ecosystem. Developments in the security of software-on-wheels are being readily co-opted across a variety of complex automotive systems, from fully digitized tanks that promise smaller crews and increased lethality to standards for automated fleet security management and drone transportation systems. Consequently, there is a rise in vehicle SOCs (Security Operations Centers) that operate along the lines of cybersecurity SOCs and use similar data exchange mechanisms, borrowing the same implementations of security automation and information distribution. That would be perfectly fine if the existing means were good enough to retrofit blindly into the emerging threat environment. They are far from it.


For example, most cybersecurity threat data exchanges make use of the Traffic Light Protocol (TLP). However, TLP itself is only a classification of information – its execution, and any encryption regime to restrict distribution as intended, are left to the designers of security automation systems. There is thus a need not just for finer-grained and richer controls over data sharing with fully or partially automated systems, but also for ensuring compliance with them. Threat communication policies like the TLP are akin to the infamous Tallinn Manual, in that they are essentially an expression of opinions which cybersecurity vendors may or may not choose to implement. It gets more problematic when threat data standards are expected to cover automated detection and response (as is the case with automotive and industrial automation) and may or may not have integrated an appropriate data security and exchange policy, there being no compliance requirement to do so.
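To make the gap concrete, here is a minimal sketch, in Python, of the kind of enforcement layer a security automation system has to supply on its own before forwarding marked threat data. The recipient classes and the mapping below are hypothetical: TLP itself defines only the labels.

```python
from enum import Enum

class TLP(Enum):
    """TLP 2.0 labels; the protocol defines labels, not enforcement."""
    CLEAR = "clear"
    GREEN = "green"
    AMBER = "amber"
    AMBER_STRICT = "amber+strict"
    RED = "red"

# Hypothetical local policy: which recipient classes each label may reach.
ALLOWED_RECIPIENTS = {
    TLP.CLEAR: {"public", "community", "partner_org", "own_org"},
    TLP.GREEN: {"community", "partner_org", "own_org"},
    TLP.AMBER: {"partner_org", "own_org"},
    TLP.AMBER_STRICT: {"own_org"},
    TLP.RED: {"named_individuals"},
}

def may_share(label: TLP, recipient_class: str) -> bool:
    """Return True only if the TLP label permits this recipient class.

    TLP stops at classification; gating automated exchange like this
    (plus encrypting and auditing the channel) is left to the designer.
    """
    return recipient_class in ALLOWED_RECIPIENTS[label]

# Example: an automated pipeline drops an AMBER indicator before it reaches
# a community-wide feed, instead of relying on downstream goodwill.
assert may_share(TLP.AMBER, "partner_org")
assert not may_share(TLP.AMBER, "community")
```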


Another example of inconsistent metapolicies, among numerous others, can be found in the recent rise of language generation systems and conversational AI agents. Not all conversational agents are ChatGPT-esque large neural networks; most have been in deployment for decades as rules-based, task-specific language generation programs. Building a “common operating picture” through dialogue modeling and graph-based representation of context across such programs (as an organization operating in multiple domains/theaters might require) was an ongoing challenge well before the world stumbled upon “attention is all you need”. So now we have a mammoth of legacy IT infrastructure for human-machine interaction and a multi-modal AI automation paradigm that challenges it. Organizations undergoing “digital transformation” not only have to avoid inheriting legacy technical debt but must also consider the resources and organizational requirements of efficiently operating an AI-centric delivery model. Understandably, some organizations (including governments) may not want a complete transformation right away. Lacking standardized data and context exchange between the emerging and the legacy automated systems, many users are likely to stick with the paradigm they are most familiar with, not the one that is most revolutionary.
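For illustration only, here is a minimal sketch of what a shared context record could look like, with a legacy rules-based agent and an LLM-backed agent reading and writing the same structure. The field names and both agent stubs are invented, not drawn from any existing standard.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DialogueContext:
    """A shared 'common operating picture' for heterogeneous agents."""
    user_id: str
    domain: str                      # e.g. "billing", "fleet-ops"
    intent: str | None = None        # last recognised intent
    slots: dict[str, str] = field(default_factory=dict)
    history: list[str] = field(default_factory=list)

def rules_agent(ctx: DialogueContext, utterance: str) -> str:
    """Legacy-style agent: keyword rules, writes structured context."""
    ctx.history.append(utterance)
    if "invoice" in utterance.lower():
        ctx.intent = "get_invoice"
        return "Which month's invoice do you need?"
    return "Sorry, I can only help with invoices."

def llm_agent(ctx: DialogueContext, utterance: str,
              generate: Callable[[str], str]) -> str:
    """LLM-backed agent: consumes the same context as a prompt prefix."""
    ctx.history.append(utterance)
    prompt = (f"Domain: {ctx.domain}\nKnown intent: {ctx.intent}\n"
              f"Slots: {ctx.slots}\nRecent turns: {ctx.history[-5:]}\n"
              f"User: {utterance}\nAssistant:")
    return generate(prompt)  # 'generate' is whatever model the org runs
```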


In fact, much of cybersecurity today hinges on these timely data exchanges and automated orchestration, which makes the underlying information standards absolutely critical to modern (post-industrial) societies and to the governance of cyber-physical systems. Yet, instead of formulating or harmonizing the knowledge production metapolicies needed to govern AI security in a hyper-connected and transnational threat environment, we seem to be falling into the doomer traps of existential deliverance and unending uncanny valleys. One of the primary reasons for the lack of compliance and the chaotic state of standards development in security data production is the lack of a primary governance agent.


The Governance of (Cyber) Security Information

Present automation-centric cyber threat information-sharing standards generally follow a multistakeholder governance model. That means they follow a fundamentally bottom-up life-cycle: a cybersecurity information standard is developed and then pushed “upward” for cross-standardization with the ITU and ISO. This upward mobility of technical standards is not easy. The Structured Threat Information Expression (STIX), perhaps the de facto industry standard for transmitting machine-readable Cyber Threat Intelligence (CTI), is still awaiting approval from the ITU. Not that approval is really needed: the way global governance of technology is structured, it is led by industry, not nations. The G7 has gone so far as to formalize this, with some members even blocking diplomatic efforts towards any different set of norms.
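For readers who have not seen one, a STIX 2.1 indicator is essentially a JSON object carrying a detection pattern plus references to marking definitions such as TLP. The sketch below writes one out as a Python dict; the identifiers and the IP address are placeholders, not the spec-defined values.

```python
# A minimal STIX 2.1 indicator, written as a Python dict for readability.
# The UUIDs below are placeholders, not real spec-defined identifiers.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--00000000-0000-4000-8000-000000000001",
    "created": "2023-10-01T00:00:00.000Z",
    "modified": "2023-10-01T00:00:00.000Z",
    "name": "Known C2 address",
    "pattern": "[ipv4-addr:value = '198.51.100.7']",  # documentation-range IP
    "pattern_type": "stix",
    "valid_from": "2023-10-01T00:00:00.000Z",
    # Distribution policy travels only as a reference to a TLP
    # marking-definition object; enforcement is still up to the consumer.
    "object_marking_refs": [
        "marking-definition--00000000-0000-4000-8000-0000000000aa"  # e.g. TLP:AMBER
    ],
}
```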


This works well for those nation-states that have the requisite structural and productive capacities within their public-private technology partnerships. Consequently, the global governance of cyber technology standards becomes a reflection of the global order. Setting aside the naming of cyber threat actors, this has so far remained relatively objective in nature. That is no longer true with the integration of online disinformation into offensive cyber operations and national cybersecurity policies: not only can conventional information standards run into semantic conflicts, but newer value-driven standards over the information environment are also popping up. Since the production and sharing of automation-driven social/political threat indicators can both be shaped by and affect political preferences, cybersecurity threat information standards slide from a sufficiently objective to a more subjective posture as the threats of AI-generated information and social botnets rise. And states can do little to reconfigure the present system, because the politics of cybersecurity standards has become deeply intertwined with their market-led multistakeholder development.


Cyber threat attributions are a good case in point. MITRE began as a US defense contractor and today maintains the industry-wide de facto knowledge base for computer network threats and vulnerabilities. Of the Advanced Persistent Threat groups listed in MITRE ATT&CK, close to a third are attributed to China, another third to Russia, Korea, the Middle East, India, South America and others, and the remaining third (which contains the most sophisticated TTPs, the largest share of zero-day exploitation, and geopolitically aligned targeting) remains unattributed. We will not speculate here, but abductive reasoning about that unattributed cluster may leave readers with some ideas about the preferences and politics of global CTI production.
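Readers can form their own view from the public data. The sketch below tallies ATT&CK intrusion sets by the countries their descriptions mention; the download URL is assumed to be MITRE’s published STIX bundle, and keyword matching on free-text descriptions is only a rough proxy for attribution, not an authoritative count.

```python
import json
from collections import Counter
from urllib.request import urlopen

# Assumed location of MITRE's published ATT&CK STIX bundle; adjust if it moves.
ATTACK_URL = ("https://raw.githubusercontent.com/mitre-attack/"
              "attack-stix-data/master/enterprise-attack/enterprise-attack.json")

# Crude keyword proxy for suspected attribution mentioned in group descriptions.
COUNTRY_KEYWORDS = ["China", "Chinese", "Russia", "Russian", "North Korea",
                    "Iran", "Iranian", "India", "Pakistan", "Vietnam"]

with urlopen(ATTACK_URL) as resp:
    bundle = json.load(resp)

groups = [o for o in bundle["objects"]
          if o.get("type") == "intrusion-set" and not o.get("revoked", False)]

tally: Counter[str] = Counter()
for g in groups:
    desc = g.get("description", "")
    hits = [kw for kw in COUNTRY_KEYWORDS if kw in desc]
    tally[hits[0] if hits else "unattributed/unstated"] += 1

print(f"{len(groups)} intrusion sets")
for label, count in tally.most_common():
    print(f"{label:>25}: {count}")
```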


A fact of life is that, in cyberspace, power-seeking states have been playing the roles of governance actors and sophisticated offenders at the same time, so this market-led multistakeholderism has worked out well for their operational logic, promulgating a global politics of interoperability. But it is bad for the production of cyber threat knowledge and for security automation itself, which can become quite biased and politically motivated over the internet. Society has walked this path long enough to not even think of it as a problem while moving into a world surrounded by increasingly autonomous systems.

A Way Forward

With social AI risks looming larger, states intending to implement a defensible cybersecurity automation posture today have to navigate a low signal-to-noise ratio in cybersecurity threat information, multiple CTI vendors and metapolicies, and constant pressure from industry and international organizations about “AI ethics” and “cyber norms” (we will not venture into “whose ethics?” here). This chaos, as we noted, is an outcome of the design of bottom-up approaches. However, top-down approaches can lack the flexibility and agility of bottom-up ones. For this reason, it is necessary to integrate the best of multistakeholderism with the best of multilateralism.


That would mean rationalizing the present bottom-up setup of information standards under a multilateral vision and framework. While we want to avoid partisan threat data production, we also want to make use of the disparate pool of industry expertise, which requires coordination, resolution, and steering. Some UN organs, like the ITU and UNIDIR, play an important role in global cybersecurity metapolicies, but they do not have the sort of top-down regulatory effect needed to govern malicious social AI over the internet or to implement metapolicy controls over threat sharing for distributed autonomous platforms. Therefore, this integration of multistakeholderism with multilateralism needs to begin at the UNSC itself, or at an equivalent international security organization.


Not that this was unforeseen. When the first UN resolution assessing information technologies, particularly the internet, was adopted in 1998, some countries explicitly pointed out that these technologies would end up at odds with international security and stability, hinting at reforms required at the highest levels of international security. Indeed, the UNSC as an institution has not co-evolved with digital technologies and the post-internet security reality. The unrestricted proliferation of state-affiliated APT operations is but one example of its failure to regulate destabilizing state activities. Moreover, while the council remains stuck in a 1945 vision of strategic security, there is ample reason and evidence to revisit the idea of “state violence” in light of strategically deployed offensive cyber and AI capabilities.


Overcoming the resilience of the global order and its entrenched bureaucracies will not be easy. But if reformed in its charter and composition, the council (or its replacement) could serve as a valuable institution to fill the void left by the lack of a primary agent guiding the security and governance standards that drive security automation and AI applications in cyberspace.

It Is The Process

At this point, it is necessary to call out certain misunderstandings. Regulators seem to have some ideas about governing “AI products”; at least the EU’s AI Act suggests as much. Here we must take a quiet moment to reflect on what “AI” or “autonomous behavior” actually is, and it will soon dawn on most of us that present methods of certifying products may not be adequate for adaptive systems that learn continuously and exchange data with the real world. In other words, regulators need to seriously weigh the pros and cons of a product-centric versus a process-centric approach to regulating AI.


AI, in the end, is an outcome. The focus of governance and standards needs to be on the underlying processes and policies, from data engineering practices and model architectures to machine-to-machine information exchanges and optimization mechanisms, not on the outcome itself. Further, as software shifts from an object-oriented to an agent-oriented engineering paradigm, regulators need to start thinking about policy in terms of code and code in terms of policy; anything else will always leave a giant gap between intent and implementation.
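Taking “policy in terms of code” literally, here is a minimal sketch of a process-centric release gate that checks a model’s declared engineering and data-exchange practices rather than the finished product. The policy fields and rules are hypothetical, not drawn from the AI Act or any existing framework.

```python
from dataclasses import dataclass

@dataclass
class ProcessRecord:
    """Declared facts about how a model was built and how it exchanges data."""
    training_data_documented: bool     # provenance / datasheet records exist
    eval_suite_passed: bool            # agreed pre-deployment evaluations ran
    m2m_exchange_standard: str | None  # e.g. an agreed threat/context schema
    continuous_learning: bool          # model keeps updating after deployment
    runtime_monitoring: bool           # behaviour is verified while running

def release_gate(rec: ProcessRecord) -> list[str]:
    """Return the list of process-level policy violations (empty means it may ship)."""
    violations = []
    if not rec.training_data_documented:
        violations.append("undocumented training data provenance")
    if not rec.eval_suite_passed:
        violations.append("pre-deployment evaluations not passed")
    if rec.m2m_exchange_standard is None:
        violations.append("no agreed machine-to-machine exchange standard")
    if rec.continuous_learning and not rec.runtime_monitoring:
        # A one-off product certificate cannot cover a system that keeps changing.
        violations.append("continuous learning without runtime monitoring")
    return violations
```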


If the chaos of today’s multistakeholder cybersecurity governance is anything to go by, AI security and governance will need evidence-based threat data orchestration (keeping the data behind final CTI, and engaging with new types of technical evidence), runtime verification of AI-driven automation in cyber defense and security systems, clear non-partisan channels and standards for cyber threat information governance, and a multilateral consensus on all of the above. Focusing on the final AI product alone leaves much unaddressed and potentially partisan, as the ecosystem of information metapolicies driving security automation systems worldwide already shows. We need to better govern the underlying processes and policies that drive these systems, not merely their outcomes.
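On the runtime-verification point, the idea is simply that every action an AI-driven defense system is about to take gets checked against explicit invariants as it happens. A toy sketch follows; the action shape and the invariants are invented for illustration.

```python
from typing import Callable

# An 'action' is whatever an automated defense system is about to do,
# e.g. block an address or quarantine a host, expressed as a plain dict.
Action = dict
Invariant = Callable[[Action], bool]

# Illustrative invariants a runtime monitor might enforce.
INVARIANTS: list[tuple[str, Invariant]] = [
    ("never block internal ranges",
     lambda a: not (a.get("kind") == "block_ip"
                    and a.get("target", "").startswith("10."))),
    ("quarantine requires an evidence reference",
     lambda a: a.get("kind") != "quarantine_host" or bool(a.get("evidence_id"))),
]

def verified_execute(action: Action, execute: Callable[[Action], None]) -> bool:
    """Run the action only if every invariant holds; otherwise refuse and log."""
    failed = [name for name, check in INVARIANTS if not check(action)]
    if failed:
        print(f"refused {action.get('kind')}: violates {failed}")
        return False
    execute(action)
    return True
```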


Also published here.