United Nations Youth4Disarmament Forum hosts Expert Panel on the “Destabilizing Effects of Artificial Intelligence and Information and Communications Technology on Nuclear Stability”

On 16 October, as part of the inaugural United Nations Youth4Disarmament Forum supported by the Government of the Republic of Korea, young leaders joined nuclear experts and diplomats for an in-depth expert panel on the “Destabilizing Effects of Artificial Intelligence and Information and Communications Technology on Nuclear Stability.”

The Expert Panel gave youth participants an interactive opportunity to engage meaningfully with experts on the nexus of nuclear weapons and emerging technologies. The experts helped the participants refine their draft outcome document, whose key recommendations will be circulated to UN General Assembly First Committee delegations in December 2025.

Opening the session, moderator Ms. Umi Ariga-Maisy, Research Fellow at the Japan Institute of International Affairs, framed the discussion around the urgent need to understand the risks emerging from the intersection of AI, cyber technologies, and nuclear command, control and communications (NC3) systems. She then introduced the panel of experts and forum participants, inviting them to reflect on the challenges and opportunities arising from the destabilizing effects of Artificial Intelligence and ICTs on nuclear stability.

Forum participant Ms. Dina Tawfik, Fellow at the Center for Energy and Security Studies, noted that recent studies found that AI tended to choose more escalatory options. She also highlighted other risks related to algorithmic opacity, automation bias, and the misperception of nuclear weapons.

A roundtable discussion among participants and panelists. 

Mr. Giacomo Persi Paoli, Head of the Security and Technology Programme at the United Nations Institute for Disarmament Research (UNIDIR), highlighted that AI developments have generally been underestimated and that the next generation of AI systems, anticipated around 2027, could significantly outperform today's models. He also noted that even if the direct integration of AI into Nuclear Command, Control and Communications (NC3) systems could be avoided, the integration of AI into the broader nuclear decision-making process could create vulnerabilities from both a security and a cybersecurity standpoint.

Ms. Beyza Unal, Head of the Science and Technology Unit at the United Nations Office for Disarmament Affairs, outlined the many vulnerabilities at the intersection of AI, cyber, and nuclear systems. She noted that many NC3 and production infrastructures still rely on outdated technologies and fragile supply chains, creating potential cybersecurity weak points. She further emphasized the security challenges posed by cyber technology evolving far faster than new technologies can be integrated into existing nuclear weapons systems.

UNODA Head of the Science and Technology Unit Beyza Unal speaking at the Forum about Artificial Intelligence.

Ms. Patricia Jaworek, Director of the Global Nuclear Policy Program at the Nuclear Threat Initiative (NTI), explained how AI-related risks have begun to feature in NPT discussions and related initiatives, such as the Stockholm Initiative’s contributions at the third Preparatory Committee and recent multilateral efforts such as the 2024 Biden–Xi Joint Statement. She emphasized that the AI–nuclear nexus presents not only technological and structural challenges, but political and institutional ones as well.

Ms. Erika Campos, Second Secretary at the Permanent Mission of Brazil to the United Nations, highlighted the two resolutions before this year’s First Committee that address aspects of AI and emerging technologies. She also encouraged engagement through mechanisms such as Nuclear-Weapon-Free Zones and the Group of Governmental Experts on Nuclear Disarmament Verification to manage the issues of the AI–nuclear nexus.

Forum participant Mr. Leonard Günzel, Alumnus of UNODA’s Responsible Innovation in AI for Peace and Security programme, underscored the lack of education in responsible engineering. He emphasized the need for responsible and ethical engineering education and encouraged the policy and engineering communities to engage more with societal issues and with each other.

Panelists, experts and participants engaging in discussion.

During the interactive Q&A portion, participants explored verification challenges, governance frameworks, and ways to bridge the divide between the policy and technical communities. Experts called for shared taxonomies, industry incentives for responsible AI, and the expansion of fellowship and training opportunities to foster interdisciplinary understanding. They also suggested that discussions on AI could be formally integrated into future NPT deliberations on transparency and accountability and underscored that lasting solutions will depend on the innovation and engagement of young leaders.  

In closing, participants and experts underscored the importance of continued dialogue between youth, policymakers, and the scientific community to ensure stability at the intersection of AI and nuclear technology. They emphasized that bridging the gap between the rapidly evolving technology sector and the spheres of governance will be key to preventing miscalculation and safeguarding global security.