Multiple countries are committed to signing the world’s first artificial intelligence treaty, which will soon be ratified and go into effect. Is it too little, too late? Will it adversely affect research and development in the field? The only way to predict its impact is to dissect the framework’s language and analyze existing AI regulations.
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law is the world’s first AI Treaty. After spending years in development and being adopted in May 2024 by the Council of Europe — a 46-member human rights organization — it opened for signature in September.
This treaty is a legal framework that covers an algorithm’s entire life cycle. Interestingly, the Council of Europe defines an AI system as a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that may influence physical or virtual environments.
The 57 countries and entities that negotiated the AI Treaty now get an opportunity to sign it. World leaders like the European Union, the United Kingdom, and the United States were among the first to sign. The treaty will go into effect three months after five signatories, including at least three Council of Europe member states, have ratified it.
The distinction between signing and ratification is essential. While the first indicates a country’s approval, the latter is legal consent to be bound. Since the AI Treaty is legally binding, signatories must follow through with this second step before it can be enforced against them. Each nation that ratifies will then have to develop a governance framework.
This treaty requires those involved to make broad commitments to safety, privacy, and ethics, leaving the finer details open to interpretation so each party can navigate them as they see fit. Whether they adopt legislative, administrative, or investigative measures, they must make an effort to ensure accountability for AI’s impacts on human rights, democracy, and the rule of law.
Although advanced AI has only existed for a few years, it has already greatly impacted most industries.
Sectors like health care, manufacturing, and agriculture have historically been slow to adopt the latest technologies but have readily accepted artificial intelligence. For instance, hospitals use machine learning to help flag anomalies in medical images, factories run predictive maintenance models to catch equipment failures early, and farms apply computer vision to monitor crop health.
With these algorithms entering every industry from education to retail, inaction is risky. In terms of safety, large language models that collect massive amounts of user information pose a significant cybersecurity threat. A single data breach could result in countless cases of credit card fraud, impersonation, and identity theft.
Unchecked development could be unsafe or unethical even without considering potential security issues. Advanced algorithms can produce hundreds of thousands of words in seconds, process thousands of documents in moments, and extract hidden insights without preparation. In other words, they are incredibly powerful — and there is virtually no barrier to entry.
What happens if a terror group poisons a dataset to spread misinformation? How does the government punish a company for stealing personally identifiable information from users? Until the law catches up, regulatory and law enforcement agencies are at a disadvantage. The world is working together to regulate AI because it recognizes the potential danger.
AI is rapidly evolving, leaving regulatory agencies struggling to keep up. While the world’s first AI Treaty marks a turning point in regulation, it is not the first step nations have taken to control this technology’s growth. Many have been concerned since it first began trending, prompting them to work quickly to put new laws on the books.
In 2021, the EU proposed regulations to control AI. By 2024, it had developed and adopted the world’s first comprehensive AI regulation. The EU AI Act takes a risk-based approach, sorting systems into unacceptable-, high-, limited-, and minimal-risk categories and imposing obligations that scale with each system’s potential for harm.
No comprehensive regulation exists in the United States. However, the Biden Administration issued an executive order on safe, secure, and trustworthy AI in October 2023, directing federal agencies to develop safety, security, and privacy standards for the technology.
International standards also exist, though, like many governance frameworks, they aren’t enforceable. For instance, ISO/IEC 42001 — published by the International Organization for Standardization and International Electrotechnical Commission — specifies requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system within an organization.
Since the AI Treaty doesn’t specify rules, laws, or punishments, there’s no telling how it will affect the average AI user yet. Although it goes into effect just three months after being ratified by five countries, most nations will need more time to draft legislation. They will likely prioritize sharing and refining frameworks over implementation for a few years.
Technically, a follow-up mechanism exists to enforce the treaty’s requirements. Countries must periodically report to the Conference of the Parties to share what steps they’ve taken. However, deviation won’t exactly get them in trouble. This system of accountability can only incentivize progress — there likely won’t be any legitimate punishment for noncompliance.
For the time being, AI companies and users are largely free from treaty-driven regulation. Eventually, though, national governments will create rules governing advanced algorithms’ development, testing, and utilization. It may happen sooner rather than later, since the legally binding framework they have just entered into incentivizes them to accelerate their progress.
Without details on how major world powers expect to proceed with regulation, there’s no telling how their involvement in the AI Treaty could influence the future of algorithm-based technology. That said, extrapolating the results of the few existing comprehensive governance frameworks can help narrow the possibilities.
People’s first reaction to regulation is usually adverse — they expect it to make their lives more difficult or unnecessarily convoluted. In reality, legal frameworks are essential for protecting companies and end users from unforeseen adverse outcomes. Since AI’s capabilities remain largely unexplored, guidance is critical.
Even if the United States or the United Kingdom were to create a comprehensive governance framework today, their rules would only be enforceable within their borders. The emergence of the world’s first AI treaty marks a promising turning point for regulation.
Moreover, regulation may ultimately incentivize research and development. Patent and copyright law are excellent examples of this — they prevent one-to-one copies, forcing people to innovate. Holding AI companies to a higher standard could result in better, more trustworthy products instead of seemingly endless, low-effort imitations.
Realistically, there is a significant chance AI companies — and firms using AI — will be noncompliant with whatever regulation their governments enforce because of the AI Treaty. While they will likely be given a grace period to fix those issues, a decent percentage will lack the resources to do so. As a result, they may go out of business, hindering innovation.
Some experts may have an issue with the treaty’s language. Its main goal — to hold AI accountable for its impacts on human rights, democracy, and the rule of law — is arguably too vague to enforce. An algorithm itself cannot be held accountable; only the people and organizations that build and deploy it can.
Moreover, since the AI Treaty does not define specific measures countries must take, some may go too far. Regulating algorithms is uncharted territory. They may compensate for uncertainty by attempting to make their rules future-proof. In doing so, they could stifle research and development, preventing growth in domestic AI.
The AI Treaty likely won’t stifle innovation, but only time will tell. Industry professionals concerned about what this convention means for the future of research and development should follow their country’s regulatory agencies closely post-ratification. That way, they can act early to avoid the risk of noncompliance.