Artificial Intelligence Can Trigger ‘Nuclear-Level Catastrophe’, 36% Of AI Experts Say, As US Lawmakers Keen To Curb Its Use

US lawmakers are sounding the alarm over using artificial intelligence (AI) in the nation’s nuclear arsenal, fearing that the technology could potentially fire off warheads on its own.

A bipartisan group of lawmakers, three Democrats and one Republican, has put forward a bill in the US House of Representatives that seeks to curb the development of artificial intelligence systems capable of launching nuclear attacks without human oversight. 

The proposed bill seeks to proactively block any potential Defense Department policies that may result in deploying autonomous AI systems capable of launching nuclear weapons. 

In a recent Fox News interview, US Representative Ken Buck, a Colorado Republican, emphasized the potential danger of deploying AI systems for nuclear weapons without human supervision, calling it “reckless” and “dangerous.” 

While he acknowledged that AI could enhance national security, he stressed the importance of prohibiting its use in autonomous nuclear launch decisions.

US Air Force artist’s rendering of the Sentinel in flight. (Credit: US Air Force)

“So you see sci-fi movies, and the world is out of control because AI has taken over – we’re going to have humans in this process,” he added. 

Buck referred to Hollywood’s portrayal of AI taking control of nuclear weapons in movies like ‘WarGames’ and ‘Colossus: The Forbin Project’ and cautioned that employing AI without human intervention would be unsafe and irresponsible.

Representative Ted Lieu of California concurred with Buck’s concerns and acknowledged AI’s potential benefits and risks, stating that while it can transform society, it can also threaten human lives. 

Lieu, along with two other Democrats, Representative Don Beyer of Virginia and Senator Edward Markey of Massachusetts, is a leading proponent of the legislation, which seeks to guard against the dangers of autonomous AI systems in the nation’s nuclear arsenal.

In April, Senate Majority Leader Chuck Schumer unveiled a comprehensive framework that calls for companies involved in AI development to permit external experts to assess their technology before it is made available for public use. 

The framework aims to promote transparency and accountability in the development of AI while ensuring its potential benefits are accessible to all.

While the notion of an AI-driven nuclear conflict may have been viewed as a work of fiction in the past, numerous experts now regard it as a credible risk. 

According to a recent survey by the Stanford Institute for Human-Centered Artificial Intelligence, 36% of AI experts believe that AI can potentially cause a “nuclear-level catastrophe.”


Pentagon’s Stance On The Use Of AI

The Pentagon has allocated US$1.8 billion exclusively for the research and development of AI capabilities in its fiscal year 2024 budget.

The investment reflects the US military’s growing focus on AI as a critical element in maintaining national security and its commitment to staying at the forefront of technological innovation in this field.

The US Central Command (CENTCOM) recently appointed its first AI advisor, Dr. Andrew Moore, signaling a greater emphasis on leveraging the advantages of the fast-evolving technology in the military.

Meanwhile, Schuyler Moore, the chief technology officer at CENTCOM, earlier stated that AI is viewed as a “light switch” that aids individuals in interpreting data and guiding them in the right direction.

She emphasized that the Pentagon’s position is that there must always be a human in the loop making the final decision and that AI would be complementary to human judgment rather than a substitute.

Moore added that one application of AI within CENTCOM’s domain is combating illegal weapons shipments around Iran. 

She indicated that the military believes AI could help narrow down the number of potentially suspicious shipments by learning regular shipping patterns and flagging anomalies that deviate from the norm.

Officials at the Pentagon have repeatedly stated that the US is in a race to develop AI systems that are more ethical and responsible than those of its adversaries. 

They have expressed concerns that some nations are becoming alarmingly proficient in using AI for unauthorized surveillance. They have highlighted the need for the US to be prepared to counter such tactics if used against the country.