What's In Blue

Posted Wed 18 Dec 2024

Artificial Intelligence: High-level Briefing

Tomorrow morning (19 December), the Security Council will convene for a briefing on artificial intelligence (AI) under the “Maintenance of international peace and security” agenda item. US Secretary of State Antony Blinken will chair the meeting, which is one of the signature events of the US’ December Council presidency. Briefings are expected from UN Secretary-General António Guterres; Yann LeCun, Chief AI Scientist at Meta’s Fundamental AI Research (FAIR) team; and Fei-Fei Li, Professor in the Computer Science Department at Stanford University and Co-Director of Stanford’s Human-Centered AI Institute.

According to a concept note prepared by the US, the meeting aims to build on previous Council discussions on emerging technologies and recent efforts by member states to advance global dialogue on these challenges. The concept note poses several questions to help guide the discussion, including:

  • How can the international community ensure the responsible development and use of AI?
  • How can the international community govern AI so as to contribute to international peace and security while narrowing digital divides?
  • What additional steps can member states take to foster a robust international ecosystem to help identify and mitigate the risks posed by AI, while also avoiding a patchwork of global governance that could hamper innovation and deny developing countries access to the benefits of AI?
  • What additional steps can member states take to ensure appropriate safeguards to mitigate potential risks associated with the application of AI in the military domain?

The Secretary-General’s July 2023 policy brief A New Agenda for Peace recognised AI as both an enabling and a disruptive technology that is being increasingly employed in a wide array of civilian, military, and dual-use applications. It highlighted that AI’s increasing ubiquity, coupled with its rapid scalability, lack of transparency, and pace of innovation, presents potential risks to international peace and security and poses governance challenges. The policy brief urged member states to address these risks by developing frameworks to regulate AI-enabled systems within the peace and security domain. One proposal in this regard was the establishment of a global body to mitigate the associated risks of AI while leveraging its potential to advance sustainable development, possibly drawing on governance models such as those of the International Atomic Energy Agency (IAEA), the International Civil Aviation Organization (ICAO), and the Intergovernmental Panel on Climate Change (IPCC).

The final report of the High-Level Advisory Body on AI (HLAB-AI), established by the Secretary-General in October 2023 to guide the international community on AI governance, did not recommend establishing such an agency at this time. It noted, however, that if AI risks intensify, member states may need to consider establishing a more robust international institution with monitoring, reporting, verification, and enforcement powers. The report recommended creating an International Scientific Panel on AI to provide impartial and reliable scientific knowledge about AI; convening a UN-led policy dialogue on AI governance; establishing a global AI fund; and developing a capacity-building network to address technological divides.

At the Summit of the Future on 22 September, member states adopted the Pact for the Future and its annexes: the Global Digital Compact (GDC) and the Declaration on Future Generations. Action 27(d) of Chapter 2 of the Pact, which is focused on international peace and security, committed member states to continuing to assess the risks and opportunities associated with the military applications of AI throughout their life cycle, in consultation with relevant stakeholders. Furthermore, the GDC decided to establish, in line with the recommendations of the HLAB-AI, a multidisciplinary Independent International Scientific Panel on AI and to initiate, within the UN, a Global Dialogue on AI Governance.

The Security Council has only recently begun exploring the linkages between AI and international peace and security. In recent years, Council members have convened several formal meetings to address various aspects of these developments, including discussions on technology and peacekeeping (18 August 2021); technology and security (23 May 2022); artificial intelligence (18 July 2023); and anticipating the impact of scientific developments on international peace and security (21 October 2024). Council members have also organised informal Arria-formula meetings on similar topics, including emerging technologies (17 May 2021); artificial intelligence and hate speech, disinformation and misinformation (19 December 2023); and, most recently, the potential of science for peace and security (17 May).

Discussions on AI in the context of peace and security have taken place primarily within specialised forums such as the Open-ended Working Group (OEWG) on security of and in the use of information and communications technologies (ICTs) and the Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS).

The General Assembly has also been increasingly active in addressing AI-related issues. On 21 March, it adopted its first resolution on AI, titled “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development”, which highlighted risks such as digital divides, structural inequalities, and threats to human rights. On 1 July, the General Assembly adopted a China-led resolution titled “Enhancing international cooperation on capacity-building of artificial intelligence”. Two separate groups of friends were subsequently established: the Group of Friends on Artificial Intelligence for Sustainable Development, established by the US and Morocco, and the Group of Friends for International Cooperation on AI Capacity-building, co-chaired by China and Zambia.

On 16 October, the General Assembly’s First Committee adopted a resolution, co-led by the Netherlands and the Republic of Korea (ROK), titled “AI in the military domain and its implications for international peace and security”. The text affirmed the applicability of international law, including the UN Charter, international humanitarian law, and human rights law in the use of AI in the military domain and stressed the importance of responsible, human-centric AI use. The text received 165 votes in favour, two votes against, and six abstentions. All Council members—except Russia, which voted against—voted in favour of the text. In its explanation of vote, Russia cited concern that the resolution could fragment ongoing multilateral efforts, particularly those under the GGE on LAWS. Moscow also criticised the inclusion of criteria for the “responsible application” of AI and its recognition of regional initiatives, which it argued should not dictate norms for international discussions on military AI governance.

At tomorrow’s briefing, Council members are expected to address a broad range of AI-related issues, highlighting national and regional initiatives to regulate and establish standards for its use. Several members are likely to stress the applicability of international law, including the UN Charter, international humanitarian law, and human rights law, to AI’s military applications. These members may underline the need for AI systems to be safe, secure, and trustworthy, while advocating for inclusive international cooperation to develop interoperable safeguards and standards that promote innovation and prevent governance fragmentation.

Several Council members are also expected to highlight the need to engage on AI-related issues with a diverse range of stakeholders, including non-governmental organisations, the private sector, and academia. Calls for greater support to developing countries to address digital and technological divides are anticipated from several members. Additionally, some members are likely to stress the importance of leveraging AI technologies to enhance the UN’s work and improve the Council’s decision-making processes. In this regard, there may be references to resolution 2518 of 30 March 2020 and the presidential statement of 24 May 2021, which reaffirmed the Council’s commitment to integrating new technologies to strengthen peacekeeping operations’ situational awareness and operational capacity.

Council members are expected to present diverging views on the Security Council’s role in addressing AI-related threats to international peace and security. Some members may argue that, as the primary organ for maintaining international peace and security, the Council must stay abreast of technological advancements to effectively anticipate and prevent threats to global peace and security. These members may reference a 21 October Swiss-authored presidential statement in which the Council expressed its commitment to more systematically consider scientific advances, particularly with regard to their impact on international peace and security. Other members may caution against framing the issue narrowly within a security context, advocating for broader discussions in the General Assembly and specialised forums to avoid duplication. Russia, in particular, has expressed concerns about pre-empting outcomes from processes such as the OEWG on security of and in the use of ICTs and the GGE on LAWS.
