That Artificial Intelligence (AI), as an enabling technology, holds extraordinary potential to transform every aspect of military affairs has been amply evident in the ongoing war in Ukraine and in Israel's counterattacks in Gaza and Lebanon.
AI now pervades military operations, powering autonomous weapons; command and control; intelligence, surveillance, and reconnaissance (ISR); training; information management; and logistical support.
As AI reshapes warfare, the major military powers of the world are competing intensely to produce more AI innovations. If the recurring concerns of American strategic elites are any indication, China seems to be leading this race.
Until recently, the United States was said to be at the forefront of AI innovation, benefiting from leading research universities, a robust technology sector, and a supportive regulatory environment. Now, however, China, with its strong academic institutions and innovative research, is said to have surpassed the U.S. and is feared to have emerged as a formidable competitor.
Militarily speaking, Chinese advances in autonomy and AI-enabled weapons systems could alter the military balance while potentially exacerbating threats to global security and strategic stability.
The United States and its allies seem to be worried that, in striving for a technological advantage, the Chinese military could rush to deploy weapons systems that are "unsafe, untested, or unreliable under actual operational conditions."
An even greater worry is that China could sell AI-powered arms to potential adversaries of the United States "with little regard for the law of war."
Andrew Hill and Stephen Gerras, both professors at the U.S. Army War College, have just written a three-part essay arguing that the United States' potential adversaries are likely to be highly motivated to push the boundaries of empowered military AI for three reasons: demographic transitions, control of the military, and fear of the United States.
They point out that regimes such as Russia and China are grappling with significant demographic pressures, including shrinking working-age populations and declining birth rates, which will threaten their military force structures over time. AI-driven systems offer a compelling solution by offsetting the diminishing pool of people available for recruitment. As warfare becomes increasingly automated, these regimes can augment their military capabilities with AI systems.
Moreover, for Hill and Gerras, totalitarian regimes face a deeper internal challenge that encourages the development of AI: "the inherent threat posed by their own militaries." Autonomous systems offer the dual advantage of reducing dependence on human soldiers, who may one day challenge the regime's authority, while increasing central control over military operations. In authoritarian settings, minimizing the risk of military-led dissent or coups is a strategic priority.
From a geopolitical perspective, Hill and Gerras point out that Russia and China will feel compelled to develop empowered military AI, fearing a strategic disadvantage if the United States gains a technological lead in this domain. That is why they will always work towards “maintaining a competitive edge by aggressively pursuing these capabilities.”
The two U.S. Army War College professors argue vociferously that "We underestimate AI at our own peril," and they favor unrestrained and unconditional support for military AI.
However, other analysts and policymakers, perhaps the majority, recognize that AI's augmentation of military capabilities could be a double-edged sword, since the same AI can cause unimaginable damage when misused.
They seem to favor devising rules to ensure that AI complies with international law and establishing mechanisms to prevent autonomous weapons from making life-and-death decisions without appropriate human oversight. Legal and ethical scrutiny of AI applications is the need of the hour, so their argument goes. And they seem to have growing global support.
In fact, the United States government has initiated global efforts to build strong norms promoting the responsible military use of artificial intelligence and autonomous systems.
In November last year, the U.S. State Department suggested “10 concrete measures” to guide the responsible development and use of military applications of AI and autonomy.
The 10 Measures
1. States should ensure their military organizations adopt and implement these principles for the responsible development, deployment, and use of AI capabilities.
2. States should take appropriate steps, such as legal reviews, to ensure that their military AI capabilities will be used consistent with their respective obligations under international law, in particular international humanitarian law. States should also consider how to use military AI capabilities to enhance their implementation of international humanitarian law and to improve the protection of civilians and civilian objects in armed conflict.
3. States should ensure that senior officials effectively and appropriately oversee the development and deployment of military AI capabilities with high-consequence applications, including, but not limited to, such weapon systems.
4. States should take proactive steps to minimize unintended bias in military AI capabilities.
5. States should ensure that relevant personnel exercise appropriate care in the development, deployment, and use of military AI capabilities, including weapon systems incorporating such capabilities.
6. States should ensure that military AI capabilities are developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by their relevant defense personnel.
7. States should ensure that personnel who use or approve the use of military AI capabilities are trained so they sufficiently understand the capabilities and limitations of those systems in order to make appropriate context-informed judgments on the use of those systems and to mitigate the risk of automation bias.
8. States should ensure that military AI capabilities have explicit, well-defined uses and that they are designed and engineered to fulfill those intended functions.
9. States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance within their well-defined uses and across their entire life cycles. For self-learning or continuously updating military AI capabilities, States should ensure that critical safety features have not been degraded, through processes such as monitoring.
10. States should implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to detect and avoid unintended consequences and the ability to respond, for example, by disengaging or deactivating deployed systems, when such systems demonstrate unintended behavior.
It may be noted that, at a parallel level, South Korea convened a two-day international summit in Seoul early this month (September 9-10), seeking to establish a blueprint for the responsible use of artificial intelligence in the military.
Incidentally, it was the second such summit, the first being held in The Hague last year. Like last year, China participated in the Seoul summit.
The Seoul summit, co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, was themed “Responsible AI in the Military Domain” (REAIM). According to reports, it drew 1,952 participants from 96 countries, including 38 ministerial-level officials.
The 20-clause "Blueprint" adopted at the summit was divided into three key sections: the impact of AI on international peace and security, the implementation of responsible AI in the military domain, and the envisioned future governance of AI in military applications.
It warned that “AI applications in the military domain could be linked to a range of challenges and risks from humanitarian, legal, security, technological, societal or ethical perspectives that need to be identified, assessed and addressed.”
The blueprint notably stressed the “need to prevent AI technologies from being used to contribute to the proliferation of weapons of mass destruction (WMDs) by state and non-state actors, including terrorist groups.”
The document also emphasized that "AI technologies support and do not hinder disarmament, arms control, and non-proliferation efforts; and it is especially crucial to maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment, without prejudice to the ultimate goal of a world free of nuclear weapons."
The blueprint highlighted the importance of applying AI technologies in the military domain "in a responsible manner throughout their entire life cycle and in compliance with applicable international law, in particular, international humanitarian law."
Incidentally, while 61 countries, including the U.S., Japan, France, the United Kingdom, Switzerland, Sweden, and Ukraine, have endorsed the blueprint, China, despite sending a government delegation to the meeting and attending the ministerial-level dialogue there, chose not to support it.
It should be noted that the blueprint is legally "non-binding," meaning that even its endorsers are under no obligation to implement it. However, even this did not seem to affect China's decision not to endorse the Seoul blueprint.
In a subsequent press conference, Chinese Foreign Ministry spokesperson Mao Ning said that China believes in upholding “the vision of common, comprehensive, cooperative and sustainable security, reaching consensus on how to standardize the application of AI in the military domain through dialogue and cooperation, and building an open, just and effective mechanism on security governance.”
She stressed that “all countries, especially the major powers, should adopt a prudent and responsible attitude when utilizing relevant technologies, while effectively respecting the security concerns of other countries, avoiding misperception and miscalculation, and preventing arms race.”
According to her, China's principles of AI governance, namely to "adopt a prudent and responsible attitude, adhere to the principle of developing AI for good, take a people-centered approach, implement agile governance, and uphold multilateralism," were "well recognized by other parties."
Viewed thus, China seems to have concluded that neither the Seoul blueprint (endorsed by 61 countries) nor, for that matter, the U.S. State Department's 10 measures (which, incidentally, have been endorsed by 47 countries) embodies a sufficiently "prudent and responsible attitude" or does enough to respect "the security concerns of other countries," avoid "misperception and miscalculation," and prevent an arms race.
In a way, this vindicates what Professors Hill and Gerras have written.