
What the EU AI Act Means for the Bloc

by George Anadiotis, June 21st, 2023

Too Long; Didn't Read

If GDPR is anything to go by, the EU AI Act is a big deal. Here’s what it’s likely to affect and how, its blind spots, its roadmap, and how you can prepare for it.

The EU Parliament just voted to adopt its position on the EU AI Act. If GDPR is anything to go by, that’s a big deal. Here’s what the Act is likely to affect and how, its blind spots, what happens next, and how you can prepare for it based on what we know.


The last few months have been an onslaught of AI news, product releases and conversations. A big part of the conversation has been around regulatory frameworks or lack thereof. When it comes to AI regulation, the EU AI Act is the one dominating the conversation, and for good reason.


The EU Parliament’s Press Release on the draft that went into vote claimed that “once approved, they will be the world’s first rules on Artificial Intelligence”. Even if that’s not entirely accurate (Russia and China already have their own set of rules on AI), there are two things that make the EU AI Act stand out.


First, according to the experts, the EU AI Act is the most comprehensive and up-to-date regulatory framework on AI at this time. As AI is heavily reliant on data, the EU AI Act works in conjunction with data-related regulation in the EU.


Second, EU regulation has a precedent of creating a knock-on effect and influencing regulation and practices in other parts of the world as well. Case in point – GDPR.


We connected with Aleksandr Tiulkanov, a seasoned professional operating at the intersection of law, regulation, data protection and AI, to discuss the latest developments, blind spots, opportunities, and the roadmap for the relevant EU frameworks.

AI, data protection and GDPR

Tiulkanov is a commercial contracts lawyer turned tech lawyer, specializing in data protection and AI. He holds a Master of Laws (LL.M.) in Innovation, Technology and the Law from the University of Edinburgh and is a Certified Information Privacy Professional/Europe (CIPP/E). He worked at Deloitte Legal for a number of years and has consulted for the Council of Europe on digital regulation, specifically around AI and data.


As most of us have not been exposed to the multitude of stakeholders involved in AI regulation, we were curious to know how they perceive and define AI. Unsurprisingly, definitions abound, and they are often tied to specific interests. EU regulators have chosen to adopt the definition proposed by the OECD:


“The OECD defines an Artificial Intelligence (AI) System as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”


This is a broad definition that does not hinge on specific techniques, but rather on capabilities. It therefore includes everything from rule-based systems to deep learning. In practice, however, what most people mean when they talk about AI is systems built using some variation of machine learning, which means the models in question have been trained on data.

What most people talk about when they talk about AI is machine learning. But that’s not all there is to AI, and the definition used by the EU AI Act is not technology-specific. Image: Nvidia


Whoever builds machine learning models must have obtained their training data on some legal basis, according to GDPR. The EU AI Act builds on GDPR in that respect. As Tiulkanov explained, as far as personal data is concerned, GDPR provides a number of legal bases for collecting and processing it, three of which are most relevant here.


First, consent. That is, individuals are explicitly asked to give their consent to have their data collected and processed. Second, contractual provisions. That is, service providers may incorporate certain provisions into the terms and conditions for a certain digital product. Third, legitimate interest. This, however, is rather fuzzy:
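
To make the three bases concrete, here is a minimal, purely illustrative Python sketch of how an organization might record which legal basis it relies on for a training dataset. The type and field names are invented for this example; nothing here is prescribed by GDPR or the EU AI Act.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class LegalBasis(Enum):
    CONSENT = "consent"                          # explicit opt-in from the data subject
    CONTRACT = "contract"                        # provisions in a product's terms and conditions
    LEGITIMATE_INTEREST = "legitimate_interest"  # the fuzzy catch-all discussed below

@dataclass
class DatasetRecord:
    name: str
    contains_personal_data: bool
    legal_basis: Optional[LegalBasis]  # must be set if personal data is present

def basis_documented(record: DatasetRecord) -> bool:
    """A dataset containing personal data needs a documented legal basis."""
    return (not record.contains_personal_data) or record.legal_basis is not None

# Example: a web-scraped corpus claimed under legitimate interest
scraped = DatasetRecord("web_corpus", True, LegalBasis.LEGITIMATE_INTEREST)
print(basis_documented(scraped))  # True -- documented, though possibly contestable
```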


“Sometimes organizations would argue that they have a legitimate interest in collecting the information, maybe scraping it over the internet and combining it into data sets and then eventually training their machine learning models”, Tiulkanov said.

Regulation blind spots

This gives organizations a mechanism to collect personal data without consent or contractual provisions, simply by claiming legitimate interest. It means that if organizations have the processes and roles required by GDPR in place, they can claim compliance.


As Tiulkanov noted, there are lots of companies using web-scraped data to train their models. Whether and to what extent this is legal is an open question. And even if it is legal, what precautions must be taken to account for the interests of the people whose data is being scraped?


“First, some data might not be accurate. Plus, models may infer additional data based on what they have. If A and B is true, then C is true as well because we have a prior history and statistically, it might be the case. It’s a huge issue potentially in terms of privacy because people affected may not be willing to have this information disclosed”, said Tiulkanov.


Another topic also governed by GDPR, as well as preceding EU regulation, is automated decisions. If individuals are the subject of an automated decision affecting their rights, GDPR dictates that those decisions can be questioned and possibly appealed.


Again, however, this provision is conditional. Individuals have the right not to be subjected to such decisions only as long as the processing is fully automated. The key word here is “fully”. If there is a human in the loop, the provision does not hold. Naturally, organizations processing data can claim that they have a human in the loop and are therefore exempt.
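
As a rough illustration of how much weight the word “fully” carries, here is a toy Python sketch of that condition; the class and function names are invented for this example and do not come from the regulation.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    affects_rights: bool      # e.g. a loan denial or job rejection
    human_in_the_loop: bool   # any claimed human review step

def contestable_as_automated_decision(decision: Decision) -> bool:
    """The GDPR provision covers only decisions that are *fully* automated.

    A single human review step flips human_in_the_loop to True and takes
    the decision outside the provision -- exactly the exemption that
    organizations can claim.
    """
    return decision.affects_rights and not decision.human_in_the_loop

print(contestable_as_automated_decision(Decision(True, False)))  # True
print(contestable_as_automated_decision(Decision(True, True)))   # False -- exempt
```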

AI audits

These scenarios all point to one overarching question: is there a mechanism for auditing the claims organizations make, be it around legitimate interest, automated processing, or anything else? The short answer: no.


There are discussions about potentially introducing audits, at least for some subset of AI systems, Tiulkanov said. He referred to New York City Local Law 144 on Automated Employment Decision Tools.


This legislation requires third-party independent audits of AI systems, but only for a specific subset: those used in employment and recruiting, where policymakers saw a high risk of bias.


At this point, audits of AI systems are practically non-existent. The EU AI Act won’t change that either, at least not immediately.


That’s the closest match to something like an independent audit of AI systems. The EU AI Act does not include such provisions. What it does include is conformity assessments, but there’s a big difference: conformity assessments are done by organizations themselves, not by third-party auditors.


Tiulkanov thinks that independent audits can be a good thing for everyone, including vendors. It can be a good thing for the general public, because it will make sure organizations play by the book. It can be a good thing for the economy, because it will create a new market for audits. And it can also be a good thing for vendors, because it will provide assurance and credibility for their products.

Regulation vs. Innovation

When discussing the AI regulation landscape worldwide, Tiulkanov noted that the US has traditionally followed a lighter-touch approach than the EU, and AI is no exception.

“There is sometimes this argument that maybe the reason why everything in terms of scaling and coming up with new ideas around AI systems is to a large extent coming right now from the United States is because of lax regulations.


Others may be arguing in the contrary direction that actually if you have this regulation in place, you have some legal certainty which allows you to plan your business activities in a long term way. You will be sure that there is a certain continuity and you are at least compliant with the set of regulations which is in place and it’s not going to change that much in the future. So arguments can be made both ways”, Tiulkanov commented.


In the end, regardless of how much pressure vendors try to put on policymakers, the decision of whether to comply with regulation or pull out of certain regions altogether mostly comes down to pragmatism. If the market is sufficiently big and important, vendors will probably choose to comply. And Tiulkanov is not the only one who sees regulation as an opportunity.

Keeping up with the technology

We’ve been hearing about the EU AI Act for quite some time now, so we wondered whether there are important differences between previous versions and what the EU Parliament voted on last week. As Tiulkanov pointed out, the latest version of the EU AI Act takes recent developments in AI, such as large language models (LLMs), into account.


The main changes since the initial European Commission proposal in 2021 have come via amendments by the Council of the European Union in December 2022 and the European Parliament in June 2023. Even back in December 2022, amendments were made to include the notion of general purpose AI systems.


Originally, the EU AI Act expected AI system developers to know precisely how their systems would be used. That expectation was not realistic, especially in light of LLMs. The EU AI Act classifies AI systems into four categories according to the perceived risk they pose.


Unacceptable-risk systems are banned entirely (although some exceptions apply); high-risk systems are subject to traceability, transparency and robustness rules; low-risk systems require transparency on the part of the supplier; and no requirements are set for minimal-risk systems. That has not changed.
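
For readers who prefer code to prose, here is a minimal Python sketch of the four-tier scheme, assuming a simplified reading of the draft; the tier names and obligation labels below paraphrase the Act and are not normative text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright (some exceptions apply)
    HIGH = "high"                   # traceability, transparency, robustness rules
    LOW = "low"                     # supplier-side transparency only
    MINIMAL = "minimal"             # no requirements

# Simplified obligation labels paraphrasing the draft Act -- not normative text
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["traceability", "transparency", "robustness"],
    RiskTier.LOW: ["transparency"],
    RiskTier.MINIMAL: [],
}

print(OBLIGATIONS[RiskTier.HIGH])  # ['traceability', 'transparency', 'robustness']
```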


What has changed is the introduction of the notion of general purpose AI systems powered by foundation models. Individual uses of such a system may or may not be regulated, depending on how they are categorized.


In addition, there are now specific requirements for foundation models, including for things such as data governance and compute. That means foundation models can be evaluated, and will probably converge towards compliance.

Keeping an eye out for standards

So, what happens now? There is still some way to go before the EU AI Act comes into effect. First, there is the process of the so-called trilogue. In the context of the European Union’s ordinary legislative procedure, a trilogue is an informal negotiation involving representatives of the European Parliament, the Council of the European Union and the European Commission.


Tiulkanov believes that it’s not very likely for the current draft to change significantly, except perhaps on topics related to the use of biometric identification in public places. If all goes well, it’s possible the trilogue can conclude by the end of 2023. Then the final text can be enacted in early 2024, with a grace period of two years.


The EU AI Act is likely to be enacted in early 2024, with a grace period of two years


However, there is a catch. As Tiulkanov explained, in order for the EU AI Act to be enforceable in practice, there are certain standards that need to be in place as well.


“They are likely to include the requirements that would enable you to set up, let’s say, a data governance structure which is envisaged by the Act. Or to set up a risk management system. Those may only apply to high risk uses of AI systems, but, there have to be some technical standards to align the approaches.


I don’t think they will be overly prescriptive in terms of methods of achieving compliance, but there still has to be some alignment in terms of what we actually need to do. These technical standards are set at the European level. It is assumed that if you comply with those AI standards, you also comply with the AI Act. It’s a presumption of compliance. It’s the usual rule around regulations and technical standards in Europe”, Tiulkanov noted.
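
That “presumption of compliance” mechanism maps onto a simple rule, sketched below in Python. The standard names are hypothetical placeholders, since the actual harmonized European standards had not been published at the time of writing.

```python
# Hypothetical placeholder names: the harmonized European standards for the
# AI Act had not been published at the time of writing.
REQUIRED_STANDARDS = {"data_governance", "risk_management"}

def presumed_compliant(standards_met: set[str]) -> bool:
    """Meeting the harmonized technical standards creates a presumption
    of compliance with the Act itself -- the usual EU approach."""
    return REQUIRED_STANDARDS <= standards_met

print(presumed_compliant({"data_governance", "risk_management"}))  # True
print(presumed_compliant({"data_governance"}))                     # False
```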


In other words: the clock is ticking. Better keep an eye on those standards to come out, as the devil is in the details – or lack thereof.


Also published here.