One of the most pressing challenges for digital solidarity is developing common approaches to governing critical and emerging technologies such as AI. The speed of innovation, the scale of the competition, and the stakes for our values, security, and prosperity demand concerted action. With AI technologies, we will not have the luxury of time, nor of pursuing the narrow interests that have often slowed the development of shared principles and interoperable regulatory approaches in other parts of the digital economy.
Shaping shared values and governance principles on the development, deployment, and use of AI is increasingly central to American digital diplomacy. The United States is engaging allies, partners, the private sector, civil society, the technical community, and other stakeholders in discussions at the G7, the Global Partnership on Artificial Intelligence, the Council of Europe, the OECD, the UN, UNESCO, and other fora to manage the risks of AI and ensure its benefits are widely distributed. In addition, we will need to work together to invest in the scientific research and infrastructure necessary to measure, evaluate, and verify advanced AI systems.
In July 2023, President Biden announced voluntary commitments from seven leading AI companies to advance the safe, secure, and transparent development of AI technology. Eight more companies (including one foreign-based company) signed on to the commitments in September. The United States internationalized and expanded on these voluntary commitments through the Japan-led G7 Hiroshima AI Process on generative AI, with leaders releasing an International Code of Conduct for Organizations Developing Advanced AI Systems in October 2023. We continue to work to broaden acceptance of the Code of Conduct among countries and companies beyond the G7.
The United States joined twenty-seven other countries at the UK AI Safety Summit and signed the Bletchley Declaration, which encourages transparency and accountability from actors developing frontier AI technology. The United States and the United Kingdom have also signed a memorandum of understanding between their respective AI Safety Institutes to advance the science of measuring, evaluating, and addressing AI risks, a first step toward a global consensus on the scientific underpinnings of AI safety. These efforts outline a role for national governments, promote international cooperation, and encourage innovation by providing technically rigorous guidelines for introducing safe, secure, and trustworthy AI technology. At the same time, USAID and several other international development donors entered into a partnership to promote safe, secure, and trustworthy AI development in low- and middle-income countries in Africa and other parts of the world.
In October 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Order establishes a process to develop new standards for AI safety and security and seeks to protect citizens’ privacy, promote innovation and competition, and advance equity and human rights. It also tasked the Department of State with strengthening U.S. leadership abroad on AI issues.

The Department of State and USAID, in collaboration with the Department of Commerce, are leading an effort to establish an AI in Global Development Playbook to harness AI’s benefits and manage its risks. Relatedly, the Department of State plans to lead an interagency task force on detecting, authenticating, and labeling synthetic content, which aims to facilitate information sharing and mobilize global commitments both to label authentic government-produced content and to detect synthetic content. In addition, working with the Department of Homeland Security (DHS), the Department of State is engaging international partners to help prevent, respond to, and recover from potential critical infrastructure disruptions resulting from the incorporation of AI into critical infrastructure systems or the malicious use of AI against those systems. The Department of State and USAID are also working with interagency partners, including the National Institute of Standards and Technology (NIST), the National Science Foundation (NSF), and the Department of Energy, to develop a human rights risk management framework for AI and a global AI research agenda.
The Department of State is also building broad-based support for the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. While important discussions are ongoing in Geneva under the framework of the Convention on Certain Conventional Weapons (CCW), which the United States will continue to support, the scope of those discussions covers only one possible military use of AI: autonomous weapon systems. The Political Declaration is the first effort to articulate principles and best practices covering all military applications of AI technologies.
This post was originally published on May 6, 2024, by the U.S. Department of State.