Arria-formula Meeting on Artificial Intelligence
This afternoon (4 April), Security Council members will hold an Arria-formula meeting on artificial intelligence (AI) titled “Harnessing safe, inclusive, trustworthy AI for the maintenance of international peace and security”. The meeting is being organised by Greece, together with France and the Republic of Korea (ROK), and co-sponsored by Armenia, Italy, and the Netherlands. The expected briefers are Giannis Mastrogeorgiou, Special Secretary for Strategic Foresight at the Presidency of the Hellenic Government and Coordinator of Greece’s National Advisory Committee on AI; Yasmin Afina, AI Researcher in the Security and Technology Programme at the UN Institute for Disarmament Research (UNIDIR); and Charlotte Scaddan, Senior Adviser on Information Integrity at the UN Department of Global Communications.
The meeting, which will begin at 3 pm EDT and take place in Conference Room 2, will be broadcast on UNTV. It will be open to representatives of all UN member states and permanent observers, as well as civil society organisations accredited to the UN.
According to the concept note circulated by the co-organisers, today’s meeting seeks to build on previous Security Council discussions on AI and examine what practical responses the UN, including the Security Council, can offer to support national, regional, and global initiatives on the responsible use of AI that contribute to the maintenance of international peace and security. These responses may include regulation, non-proliferation, and the prevention of the diversion or misuse of AI capabilities in the military domain, as well as measures to enhance the rule of law, democratic values, social cohesion, and economic development.
The concept note acknowledges AI’s transformative potential for multilateral peace and security efforts, including its implications for peacekeeping operations and special political missions mandated by the Security Council. While AI can enhance capabilities such as training, logistics, landmine detection, and surveillance, its misuse can also undermine trust in peacekeepers, propagate misinformation and hate speech, and facilitate malicious cyberattacks.
Today’s meeting aims to assess the current state of discussions on AI within the UN, including the Security Council, and to offer member states a platform to share their national experiences and lessons learned on concrete benefits of AI that may contribute to the maintenance of international peace and security. The concept note poses several questions to help guide the discussion, including:
- How can initiatives supporting open, inclusive, and trustworthy AI contribute to the technology’s responsible use and help uphold international peace and security?
- How can the UN, including the Security Council within its mandate, leverage the responsible use of AI to promote international peace and security?
- What practical responses are needed to address challenges to information integrity, such as malicious cyberattacks, the dissemination of hate speech, and disinformation campaigns, especially in the context of UN peace operations?
The Security Council has only recently begun exploring the linkages between AI and international peace and security. The UK convened the Council’s first formal meeting on this topic during its July 2023 presidency, followed by a US-convened meeting in December 2024. (For more background on the Council’s engagement on AI, see our 20 December 2024 report on “Future of the Pact: Recommendations for Security Council Action”.)
UN discussions on AI in the context of peace and security have taken place primarily within specialised forums such as the Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS), which was established in 2016 under the auspices of the Convention on Certain Conventional Weapons. The General Assembly has also been increasingly active in addressing AI-related issues. On 24 December 2024, it adopted a resolution, co-drafted by the Netherlands and the ROK, titled “Artificial intelligence in the military domain and its implications for international peace and security”. The text affirmed that international law—including the UN Charter, international humanitarian law, and international human rights law—remains fully applicable to the use of AI in military contexts. It also underscored the need for the responsible and human-centred use of AI in this domain.
At the Summit of the Future held on 22 September 2024, member states adopted the Pact for the Future and its annexes: the Global Digital Compact (GDC) and the Declaration on Future Generations. Action 27(d) of Chapter 2 of the Pact, which is focused on international peace and security, committed member states to continue to assess the existing and potential risks associated with the military applications of AI. Furthermore, the GDC decided to establish a multidisciplinary Independent International Scientific Panel on AI and to initiate, within the UN, a Global Dialogue on AI Governance.
Perspectives on AI have also been shaped by multilateral discussions held outside the UN system. In February 2023, the Netherlands and the ROK hosted the first Responsible Artificial Intelligence in the Military Domain (REAIM) Summit, resulting in a joint Call to Action. At that summit, the US also launched the Political Declaration on Responsible Military Use of AI and Autonomy, outlining state-level measures for military AI governance. The second REAIM Summit, co-hosted by Kenya, the Netherlands, the ROK, Singapore, and the UK in September 2024, resulted in the REAIM Blueprint for Action. Broader efforts toward AI safety include the Bletchley Declaration from the November 2023 AI Safety Summit held in the UK and the Seoul Declaration, adopted at the AI Seoul Summit in May 2024, both promoting international collaboration on AI safety. In February 2025, France convened the AI Action Summit. That summit adopted a joint statement on inclusive and sustainable AI and endorsed the Paris Declaration on Maintaining Human Control in AI-enabled Weapon Systems, reinforcing commitments to responsible military AI use, international law, and global cooperation.
At today’s meeting, the briefers are expected to give a detailed overview of current initiatives and challenges at the intersection of AI, information integrity, and international peace and security. Mastrogeorgiou may highlight Greece’s approach to AI policy, referencing the country’s High-Level Advisory Committee established in November 2023 and the report it published the following year. Afina is likely to share insights from UNIDIR’s analysis of AI’s security implications, including its 10 March briefing note, which supports member states in developing national perspectives on AI. She may also mention UNIDIR’s AI Policy Portal, a tool to enhance transparency and multilateral information-sharing on global AI policies. Scaddan is likely to focus on the UN’s response to AI-generated misinformation and disinformation, and she may spotlight the UN Global Principles for Information Integrity, an initiative launched in June 2024 that aims to build a safer and more ethical information environment.
This afternoon, Council members are expected to highlight relevant initiatives on AI. While many are likely to acknowledge the valuable contributions that these initiatives have made in shaping a comprehensive global understanding of AI safety and the responsible use of the technology in the military domain, some members may note the limitations of such initiatives. Regarding the General Assembly resolution on AI in the military domain, for instance, Russia criticised the text’s inclusion of criteria for the “responsible application” of AI, as well as its recognition of regional initiatives, which it argued should not dictate norms for international discussions on military AI governance. Nevertheless, other members may emphasise the importance of operationalising widely supported principles by developing concrete confidence-building measures, particularly in light of the current absence of a comprehensive international regulatory framework on AI.
Several members are likely to call for equitable access to AI benefits and to capacity-building resources. They may emphasise the importance of international support in bridging the digital divide and advocate for inclusive global mechanisms that ensure all countries, regardless of their level of technological development, play a meaningful role in shaping the AI governance agenda. A 3 April report on technology and innovation by the UN Conference on Trade and Development (UNCTAD) highlights this imbalance, noting that while 100 companies—mostly from China and the US—account for 40 percent of global private investment in AI research and development, 118 countries—primarily from the Global South—are “missing” from current international discussions on AI governance.
Some members may also discuss how the UN, including the Security Council, can better integrate AI into its own operations. They are likely to reference ongoing efforts to this end, such as the “UN smart camp” projects—which seek to integrate new technologies into UN field facilities—and new UN-led studies exploring the potential applications of AI in peacekeeping, many of which are being advanced with the support of Council members such as the ROK and the UK.
Nonetheless, as in previous Council discussions, differing perspectives are expected to emerge regarding the appropriate role of the Council in AI considerations. While some members may support integrating AI-related issues within the Council’s existing mandates, others may contend that such matters are better suited to deliberations in the General Assembly or within specialised forums, such as the GGE on LAWS.