Artificial Intelligence: High-level Briefing
Tomorrow morning (18 July), the Security Council will hold a high-level briefing titled “Artificial Intelligence: Opportunities and Risks for International Peace and Security”. James Cleverly, the UK Secretary of State for Foreign, Commonwealth and Development Affairs, will chair the meeting. Briefings are expected from UN Secretary-General António Guterres; Jack Clark, Co-founder of Anthropic; and Yi Zeng, Professor at the Institute of Automation, Chinese Academy of Sciences.
Tomorrow’s briefing is one of the signature events of the UK’s July Council presidency, and the Council’s first formal meeting on artificial intelligence (AI). According to the concept note prepared by the UK, the meeting will provide an opportunity for members to exchange views on the possible implications of AI for international peace and security and to promote its safe and responsible use.
In recent years, there have been significant advances in the development of AI technologies, which are becoming increasingly sophisticated and accessible. The concept note describes the potential of AI to facilitate the UN’s efforts to promote international peace and security. In this respect, it says that AI could be used to improve conflict analysis and early warning, monitor ceasefires, and support mediation efforts. On the other hand, the misuse of AI by states and non-state actors poses a serious risk, as it could contribute to instability and exacerbate conflict situations, including through the spread of online disinformation and hate speech. AI technologies could also potentially be used to increase cyber-attack capabilities and to design bioweapons and other weapons of mass destruction.
The concept note offers some guiding questions for the briefing, including:
- How can Council Members promote the safe and responsible development of AI to maintain international peace and security, whilst seizing the opportunities it brings for sustainable development?
- How can AI be used to enhance the UN’s peace and security toolkit?
- How can the Council better monitor and prevent the emerging risk that the development and use of AI could exacerbate conflicts and instability?
The meeting is part of a broader campaign by the UK to draw attention to the challenges posed by the rapid development of AI. On 29 March, the UK published a white paper with recommendations for the AI industry, outlining a holistic approach to regulating the use of AI. In a 7 June press release, UK Prime Minister Rishi Sunak announced that the UK will host the first major summit on AI safety this fall. The summit is expected to foster discussions on the risks of AI and options for multilateral action to mitigate them. Another objective of the summit is to provide a platform for countries to collaborate in developing a shared approach to alleviating these risks.
At tomorrow’s meeting, Guterres is expected to emphasise the need to harness the power of AI to drive forward the 2030 Agenda for Sustainable Development. He may also stress the need for enhanced multilateral efforts to govern the AI domain. At the opening of the AI for Good Global Summit on 6 July, Guterres stressed that member states “must urgently find consensus around essential guardrails to govern the development and deployment of AI for the good of all”. He also mentioned plans to establish a High-Level Advisory Body on AI. Guterres may also provide an update on the process of the Global Digital Compact. The Compact was envisioned in the Secretary-General’s Our Common Agenda report as an agreement outlining “shared principles for an open, free and secure digital future for all” to be agreed at the Summit of the Future in September 2024. In January, President of the General Assembly Csaba Kőrösi appointed Rwanda and Sweden as co-facilitators to lead the intergovernmental process on the Compact.
Some Council members are expected to emphasise the human rights risks posed by AI, including surveillance technologies that can be used to target communities or individuals. Others may refer to the widening technological divide between developed and developing countries, which may exacerbate new forms of inequality.
In recent years, Council members have shown increased interest in addressing the role of emerging technologies and their implications for international peace and security. Members have organised meetings on related aspects, including technology and security (23 May 2022); technology and peacekeeping (18 August 2021); and cybersecurity (29 June 2021). Certain AI-related issues have also been raised in informal Arria-formula meetings, such as one organised by China, together with then-Council members Kenya and Mexico, on the impact of emerging technologies on international peace and security (for more, see our What’s in Blue story of 17 May 2021). At tomorrow’s meeting, Council members are likely to offer differing views on whether the Council is the appropriate forum to address AI.
While the Security Council is only beginning to discuss AI, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has played an active role in AI ethics and governance. In November 2019, during UNESCO’s 40th General Conference, member states tasked the organisation with developing the first global normative instrument on the ethics of AI. This decision led to the formation of an Ad Hoc Expert Group (AHEG) composed of 24 members appointed by the Director-General of UNESCO. The AHEG produced a draft recommendation on the ethics of AI. Although non-binding, the text, which was adopted by all 193 member states, provides a comprehensive framework covering a range of ethical issues related to AI, including transparency, accountability, privacy, and the principles of designing and deploying AI in a manner that upholds human rights. The text recommends that member states set up suitable legal and institutional structures to ensure the ethical application of AI, stimulate ethical AI research, and promote the exchange of AI ethics information and best practices.