The Future of International Scientific Assessments of AI’s Risks

AUTHORS

Hadrien Pouget, Claire Dennis, Jon Bateman, Robert F. Trager, Renan Araujo, Belinda Cleeland, Malou Estier, Gideon Futerman, Oliver Guest, Carlos Ignacio Gutierrez, Vishnu Kannan, Casey Mahoney, Matthijs Maas, Charles Martinet, Jakob Mökander, Kwan Yee Ng, Seán Ó hÉigeartaigh, Aidan Peppin, Konrad Seifert, Scott Singer, Maxime Stauffer, Caleb Withers, and Marta Ziosi

ABSTRACT

Managing the risks of artificial intelligence (AI) will require international coordination among many actors with different interests, values, and perceptions. Experience with other global challenges, like climate change, suggests that developing a shared, science-based picture of reality is an important first step toward collective action. In this spirit, last year the UK government led twenty-eight countries and the European Union (EU) in launching the International Scientific Report on the Safety of Advanced AI. The UK-led report has accomplished a great deal in a short time, but it was designed with a narrow scope, a limited set of stakeholders, and a short initial mandate that is now nearing its end. Meanwhile, the United Nations (UN) is moving toward establishing its own report process, though key parameters remain undecided. And a hodgepodge of other entities—including the Organisation for Economic Co-operation and Development (OECD), the emerging network of national AI Safety Institutes (AISIs), and groupings of scientists around the world—are weighing their own potential contributions toward global understanding of AI. How can all these actors work together toward the common goal of international scientific agreement on AI’s risks? To discuss the way forward, Oxford Martin School’s AI Governance Institute and the Carnegie Endowment for International Peace brought together a group of experts at the intersection of AI and international relations in July. Six major ideas emerged from that discussion: (1) No single institution or process can lead the world toward scientific agreement on AI’s risks. (2) The UN should consider leaning into its comparative advantages by launching a process to produce periodic scientific reports with deep involvement from member states. (3) A separate international body should continue producing annual assessments that focus narrowly on the risks of “advanced” AI systems, led primarily by independent scientists. (4) There are at least three plausible, if imperfect, candidates to host the report dedicated to risks from advanced AI. (5) The two reports should be carefully coordinated to enhance their complementarity without compromising their distinct advantages. (6) It may be necessary to continue the current UK-led process until other processes become established.