In early September 2024, the second summit of REAIM (Responsible Artificial Intelligence in the Military Domain), a project co-organised by the Netherlands and Singapore, took place in Seoul, South Korea. It followed the inaugural meeting held in The Hague, the Netherlands, in February 2023. The REAIM Summit addresses one of the most pressing issues of our time: the responsible use of Artificial Intelligence (AI) in the military domain. As AI becomes increasingly integral to warfare, the Summit has become a key platform for discussing the ethical principles and governance structures needed to manage AI’s impact on international peace and security. Participants included representatives of governments, academia, and technology companies from around the world. The summit’s main objective was to shape how militaries will use AI in the future so as to ensure peace, stability and responsible use.
The summit’s ambition is to stimulate a debate leading to the development of international guidelines for the military use of AI. Many countries, including the United States, the United Kingdom, Japan and China, have been involved in the discussions, although not all have contributed equally. India, for example, has taken a “wait and observe” approach, reflecting the caution of some states towards participating in the development of international frameworks for AI. The underlying ambition of REAIM is also embodied in the slogan of this year’s summit, “Responsible AI for a Safer Tomorrow”, which speaks to the desire to create a technological future in which the role of AI in the military does not lead to unintended escalation, violations of international law, or catastrophic impacts on society.
The aforementioned February 2023 summit marked a turning point in the global effort to regulate AI in the military domain. It brought together representatives from 57 countries, along with experts from many fields, to discuss the evolving role of AI in the military. The central theme was AI’s ability to transform military operations – from surveillance and reconnaissance to decision-making and combat. The challenges posed by AI were also highlighted, particularly autonomous weapon systems and algorithms that could operate without direct human oversight. The summit ended with a “call to action” of sorts, encouraging continued dialogue on the responsible use of AI. Although this pilot summit was not legally binding, it succeeded in generating global interest in the topic and set the stage for more substantial outcomes at future meetings.
The recent Seoul Summit moved the discussion forward by introducing the “Blueprint for Action”, a key document outlining principles and guidelines for the responsible use of AI in the military. The blueprint, which some 60 countries have endorsed, emphasises the importance of human oversight of the use of AI for military purposes. It sets out a number of key priorities, starting with the principle that human responsibility cannot be delegated to machines. Decisions made using AI tools, especially those involving the use of lethal force, must always involve human judgment. This provision is intended to prevent AI from being solely responsible for decisions in combat, thereby reducing the risk of unintended escalation or of conflicts caused by incorrect ‘judgements’ by machines.
The document also stresses the importance of risk management, particularly with regard to the potential role of AI in nuclear weapon systems. It calls for risk assessments to be conducted before AI systems are deployed in sensitive areas, above all to prevent AI from contributing to the proliferation of weapons of mass destruction. Another key pillar is the ethical use of AI, with signatories committing to ensure that AI systems are designed to minimise harm to civilians and to maintain human control over lethal actions. These guidelines aim to create a more transparent and accountable framework for deploying AI in military environments and to ensure that technological advances do not outpace ethical considerations.
However, the document also revealed significant geopolitical fault lines. Not all countries supported it: China and around 30 other states abstained from endorsing the blueprint. These divergent positions highlight the complexity of developing a globally consistent approach to the military application of AI, as countries have differing strategic interests and concerns regarding the regulation of such technologies. The “Blueprint for Action” thus reflects a growing awareness that, although AI is slowly but surely becoming a permanent part of military strategies, reaching an international consensus on its use will take some time.
The role of AI in the military is expanding rapidly, with applications ranging from logistics planning and inventory management to advanced surveillance and autonomous weapon systems. AI-based decision support systems (AI-DSS) are already in use in several countries with advanced militaries, where they improve battlefield awareness, increase targeting precision, and reduce human error. Ukraine, for example, is developing AI-enabled drones that can autonomously identify and engage targets, while Israel is reportedly using AI-based surveillance systems to monitor and target adversaries. Although the use of AI in combat raises significant ethical and legal concerns, such as the risk of algorithmic bias and miscalculation, its integration into military systems is now a reality, albeit one that requires careful management.
The risks associated with deploying AI in war are not merely theoretical. The proliferation of autonomous weapon systems and AI decision-making tools raises concerns about unintended consequences, including miscalculations that could escalate conflicts. The role of AI in nuclear weapon systems, in particular, could have disastrous consequences if human oversight is inadequate. The “Blueprint for Action” therefore calls for increased vigilance and strict risk-management protocols in these sensitive areas, emphasising that humans must retain control of AI systems in any military context.
In addition to the discussion panels, the REAIM 2024 Summit showcased a number of technological innovations from leading technology and defence companies. Korean firms such as HD Hyundai and Korea Aerospace Industries (KAI) presented advanced AI-based military technologies. Notable, for example, was HD Hyundai’s “Tenebris”, an unmanned naval vessel equipped with sophisticated AI for reconnaissance missions. Designed for covert operations in hostile environments, this 17-metre vessel represents a new dimension in naval warfare. KAI, for its part, presented its KF-21 and FA-50 combat aircraft, which integrate AI technology to enhance combat effectiveness and reduce human error.
The 2023 and 2024 REAIM Summits have established themselves as critical platforms for addressing the ethical and practical challenges posed by military AI. The “Blueprint for Action” presented at the September summit provides a roadmap for governing military AI, emphasising that a human element must be maintained, alongside transparency and international cooperation. As AI technologies continue to evolve, their role in military operations will only grow, making it all the more urgent to establish clear rules and ethical guidelines for their deployment.
Despite the significant progress made at the REAIM Summits, achieving global consensus on AI governance remains a challenge. The fact that major actors such as China have not yet endorsed the “Blueprint for Action” illustrates the difficulty of establishing universally accepted standards. As military AI continues to evolve, the need for international cooperation and responsible governance grows ever more pressing.