Doing Innovation Responsibly – How Responsible AI Practices Can Address Risks to International Peace and Security

Saturday 27 September & Monday 13 October 2025

While advances in civilian AI offer remarkable positive potential, the misuse of such technology presents significant risks to international peace and security. Critically, the technical community in the civilian AI space is often unaware of these challenges, under-engaged, or unsure of the role it can play in addressing them.

As part of the Promoting Responsible Innovation in Artificial Intelligence for Peace and Security project, made possible thanks to the generous support of the European Union, the United Nations Office for Disarmament Affairs (UNODA) has been reaching out to AI practitioners and technical experts in academia, industry and civil society to examine responsible practices and risk management frameworks, and to explore how risks to international peace and security can be addressed from the design stage onwards, including through the Handbook on Responsible Innovation in AI for International Peace and Security.

On 27 September 2025, UNODA joined hosts Google DeepMind and Seoul National University for a special session at the Conference on Robot Learning (CoRL) in Seoul, titled Robot Learning Done Right: Responsibly Developing Foundation Models for Robotics. The session examined the ethics and responsibility considerations arising from large-scale machine learning models capable of generalizing across a wide range of robotic embodiments and tasks. Robotics foundation models and vision-language-action systems promise to unlock new possibilities in embodied intelligence, from care robotics to adaptive manufacturing, but they also raise distinct challenges related to safety, accountability and fairness, as well as underappreciated risks to international peace and security.

UNODA presenting as part of an expert panel for AI and robotics practitioners from around the globe.

The session brought together a multidisciplinary group of expert speakers, including Professor Chihyung Jeon (Korea Advanced Institute of Science and Technology), Professor Sowon Hahn (Seoul National University), Dr Carolina Parada (Google DeepMind), and Mr Charles Ovink (UNODA). Discussions explored key themes in human-robot interaction, bias and fairness, governance and oversight, and the risks of misuse. Panelists emphasized the need for context-aware ethical frameworks that balance innovation with safety, reliability, and trust, especially as these systems become increasingly integrated into social and care environments.

UNODA continued its technical engagement on 13 October 2025, when Charles Ovink joined Princeton University's Program on Science and Global Security (SGS) to engage researchers and technical experts on advancing responsible approaches to scientific innovation in support of disarmament, arms control, non-proliferation and international peace and security. With its long-standing contributions to arms control and non-proliferation research and policy, the Princeton-based programme provided a valuable forum for connecting the ethical and governance considerations of AI and robotics technologies with lessons from decades of science-for-policy engagement.

UNODA joining young researchers and technical experts at Princeton to discuss responsible practices in AI development and how risk management frameworks can help address risks to international peace and security.

Together, these events underscore the importance of fostering inclusive, interdisciplinary dialogue on the responsible use of AI, and of ensuring that the technical practitioners shaping the future of AI are engaged in addressing the risks it can present.

For more information on UNODA’s work on Responsible Innovation and AI, please contact Mr Charles Ovink, Political Affairs Officer, at charles.ovink@un.org.