Arria-formula Meeting on “Artificial Intelligence: Its Impact on Hate Speech, Disinformation and Misinformation”
This afternoon (19 December), Albania and the United Arab Emirates (UAE) will convene an Arria-formula meeting titled “Artificial intelligence: its impact on hate speech, disinformation and misinformation”. The meeting will be chaired by Omran Sharaf, the UAE’s Assistant Minister of Foreign Affairs and International Cooperation for Advanced Science and Technology, and Ambassador Ferit Hoxha, the Permanent Representative of Albania to the UN. Briefings are expected from Under-Secretary-General for Global Communications Melissa Fleming; Rahaf Harfoush, digital anthropologist and member of the Secretary-General’s High-Level Advisory Body on Artificial Intelligence; and Jennifer Woodard, co-founder of Insikt Intelligence—a startup researching how to apply artificial intelligence (AI) to combat online harms.
The meeting, which will begin at 3 pm EST and take place in Conference Room 12, will be broadcast on UNTV. It will be open to representatives of all UN member states and permanent observers.
According to the concept note prepared by the co-organisers, today’s meeting aims to provide an opportunity for member states, the UN, civil society, and the private sector to exchange views on the challenges posed by AI in relation to the spread and amplification of hate speech, misinformation, and disinformation. It also aims to explore how AI tools can be leveraged to mitigate these risks. The meeting will allow participants to discuss best practices in public-private partnerships that encourage innovation and uphold transparency, accountability, and robust oversight of AI systems. A key focus will be on ensuring that these systems are grounded in non-discrimination, inclusivity, and human rights.
The concept note asserts that AI can “compound threats and intensify conflicts by spreading hate speech, undermining the integrity of public information, including the interpretation of international humanitarian law, amplifying bias and exacerbating human rights abuses, contributing to increased polarisation and instability on a vast scale”. Moreover, it states that “malicious actors could exploit AI for cyberattacks, disinformation, misinformation or extremist rhetoric, to interfere in elections or contribute to increased intercommunal tensions”.
The Security Council has consistently acknowledged that hate speech, misinformation, and disinformation can exacerbate tensions, undermine trust within communities, and erode confidence in UN personnel deployed in the field. The Council has treated incitement as a significant factor in conflicts, capable of escalating violence and complicating conflict resolution efforts. Security Council resolutions 1373 and 1624—which were adopted in 2001 and 2005, respectively—condemned the incitement of terrorist acts driven by extremism and intolerance.
In resolution 2686 of 14 June, which was co-authored by the UAE and the UK, the Council recognised that “hate speech, racism, racial discrimination, xenophobia, related forms of intolerance, gender discrimination, and acts of extremism can contribute to driving the outbreak, escalation and recurrence of conflict” and urged states and international and regional organisations “to publicly condemn violence, hate speech and extremism motivated by discrimination including on the grounds of race, ethnicity, gender, religion or language, in a manner consistent with applicable international law, including the right to freedom of expression”.
In recent years, Council members have also shown increased interest in addressing the role of emerging technologies and their implications for international peace and security. Members have organised meetings on related aspects, including technology and security (23 May 2022); technology and peacekeeping (18 August 2021); and cybersecurity (29 June 2021). Certain AI-related issues have also been raised in Arria-formula meetings, such as those organised by China, together with then-Council members Kenya and Mexico, on the impact of emerging technologies on international peace and security in May 2021, and by Kenya on addressing and countering hate speech and preventing incitement to discrimination, hostility, and violence on social media, held in a closed format in October 2021. Furthermore, Council members have discussed cyber and digital threats to international peace and security in the context of the exploitation of information and communication technologies (ICTs) for terrorist purposes. In resolution 2129 of 17 December 2013 on the mandate of the Counter-Terrorism Committee Executive Directorate (CTED), the Security Council acknowledged the growing nexus between terrorism and ICTs and the use of such technologies to incite, recruit, fund, and plan terrorist acts.
More recently, on 18 July, the UK organised a high-level briefing titled “Artificial Intelligence: Opportunities and Risks for International Peace and Security”, during which several Council members highlighted the potential of AI to exacerbate conflict, particularly through the dissemination of misinformation. Members also emphasised the need to develop an ethical and responsible framework for the international governance of AI.
Today’s meeting is intended to build on this briefing and Security Council resolution 2686 on tolerance and international peace and security. The concept note poses several points for discussion, including:
- What tools should the Security Council use to assess and determine the disruptive capability of AI, both within the framework of its overall mandate and in country-specific contexts?
- How can AI support the full, equal, and meaningful participation of women and the empowerment of youth, including in conflict and post-conflict situations and among displaced populations and host communities?
- What measures can be taken to prevent and combat the AI-enabled spread of intolerant ideology, incitement to hatred, misinformation, and disinformation in the context of armed conflict, in a manner consistent with international law?
- What role can AI tools play in supporting effective strategic communications by UN peace operations in pursuit of their mandates, as well as in enhancing UN missions’ protection, information gathering, and situational awareness?
At today’s meeting, Fleming is expected to highlight the UN’s role in combating online incitement to discrimination and violence, as well as its commitment to leveraging AI technology for conflict prevention. She may also update Council members on progress in developing a Code of Conduct for maintaining information integrity on digital platforms. This Code aims to offer a coordinated global response to information threats, grounded in human rights, including the rights to freedom of expression, opinion, and access to information.
Council members may be interested in hearing from Harfoush on the work and latest findings of the High-Level Advisory Body on AI—a multistakeholder panel established by Secretary-General António Guterres in October to undertake analysis and provide recommendations for the international governance of AI. The advisory body is expected to release an interim report this month containing an assessment of the risks and opportunities posed by AI, as well as options for global AI governance. The report is expected to inform deliberations on a “Global Digital Compact” (GDC), which is anticipated to be agreed at the Summit of the Future. (In line with UN General Assembly resolution 76/307 of 8 September 2022, the summit aspires to reinvigorate the multilateral system and to culminate in the adoption of “a concise, action-oriented outcome document entitled ‘A Pact for the Future’, agreed in advance by consensus through intergovernmental negotiations”.) The GDC is currently moving from the consultation phase to the negotiation phase.
In his policy brief titled A New Agenda for Peace, released in July, the Secretary-General offered several recommendations pertaining to AI governance, including for member states to elaborate national strategies on the responsible design, development, and use of AI, consistent with their obligations under international humanitarian law and human rights law; to agree on a global framework to regulate and strengthen oversight mechanisms for the use of data-driven technology, including AI, for counterterrorism purposes; and to consider creating a new global body on AI.
Council members are likely to highlight national and regional efforts aimed at regulating AI. In July, China issued interim measures for the development of generative AI. In October, the Group of Seven (G7) agreed on guiding principles on AI and a voluntary code of conduct for AI developers under the Hiroshima AI process, while US President Joe Biden issued an executive order which established standards on “safe, secure, and trustworthy” AI. On 8 December, the EU agreed on the AI Act, a regulatory framework for AI which sets limitations on the use of biometric identification systems by law enforcement, among other regulations.
Council members agree on the need to develop robust ethical AI standards that ensure transparency, accountability, impartiality, and protection of human rights. However, a key issue for some members is how to adequately address misinformation, disinformation, and hate speech while protecting human rights, particularly the freedom of expression.
At today’s meeting, several Council members are expected to emphasise the human rights risks posed by AI, including surveillance technologies that can be used to target communities or individuals. Some of these members may also highlight the dangers posed by AI in the dissemination of misleading information aimed at distorting facts and interfering in democratic processes, including the use of deepfakes.
Some Council members are likely to provide country-specific examples of how digital technologies are either contributing to peace and security or to instability and the exacerbation of conflict. Several members are expected to mention that the spread of disinformation is negatively affecting the situation in conflict-affected countries, while others may refer to how digital technologies are providing important information regarding humanitarian corridors and assistance in conflict situations. Some members may highlight that disinformation has exacerbated tensions and contributed to increasing risks to UN personnel in the Central African Republic (CAR) and Mali.
Council members are likely to offer diverging perspectives on whether the Security Council is the appropriate forum to address AI. During the 18 July high-level briefing on AI, several members suggested that the Council has a responsibility to proactively monitor developments around AI and the threat it may pose to the maintenance of international peace and security. Other members—including Brazil and Russia—were more cautious in drawing a direct link between AI and international peace and security. Brazil said that “while the Council should remain vigilant and ready to respond to any incidents involving the use of AI, we should also be careful not to overly securitize the topic by concentrating discussions in this Chamber”, adding that the General Assembly is the forum best suited for discussion on AI. Similarly, Russia argued that as the issue is being discussed in the context of the Open-ended Working Group on security of and in the use of ICTs, the Council should avoid the duplication of work in this regard.
Another point of division may be the need to establish an international body on AI. Some Council members have expressed support for the creation of a new UN entity—possibly modelled after the International Atomic Energy Agency (IAEA), the International Civil Aviation Organization (ICAO), or the Intergovernmental Panel on Climate Change (IPCC)—to mitigate existing and potential risks associated with AI and to establish and administer internationally agreed mechanisms of monitoring and governance. AI researchers, however, have cautioned that existing models may not be adequate to govern AI, given the field’s low barrier to entry for non-state actors. Other Council members have emphasised that all self-regulatory procedures adopted by private companies should comply with the national legislation of the countries where those companies operate. During the 18 July briefing on AI, for example, Russia expressed its opposition to establishing supranational oversight bodies for AI, adding that it considers “the extraterritorial imposition of norms in that area unacceptable”.