The Military Dangers Of AI

Imagine a world where machines controlled by artificial intelligence (AI) replace humans in all business, industrial, and professional roles. It is a horrifying thought. As prominent computer scientists have warned us, AI-governed systems are prone to critical errors and unexplainable "hallucinations," with potentially disastrous outcomes. But there is an even more terrifying scenario that could arise from the proliferation of super-intelligent machines.

Popular culture has long imagined super-intelligent computer systems running amok and killing humans. In the prophetic 1983 film WarGames, Matthew Broderick plays a teenage hacker who taps into WOPR, the War Operation Plan Response supercomputer (pronounced "whopper"), which nearly causes a nuclear war between the United States and the Soviet Union. The Terminator movie franchise, which began with the original 1984 film, likewise envisioned a self-aware supercomputer called Skynet. It, too, had been designed to control U.S. nuclear weapons, but chose to eliminate humanity instead, seeing us as a threat to its very existence.

Supercomputers killing people is no longer just a science-fiction concept. It is now a very real possibility.


December 2019: Testing the Advanced Battle Management System aboard the destroyer USS Thomas Hudner during a joint exercise. (Defense Visual Information Distribution Service, Public domain)

The major military powers have been rushing to develop automated battlefield decision-making systems, or what might be called "robot generals."

In wars to come, such AI-powered systems could be used to give combat orders to American troops, dictating where, when, and how they kill enemy soldiers or respond to fire from their adversaries. In some scenarios, robot decision-makers might even come to exercise control over America's nuclear weapons, potentially enabling a nuclear conflict that could end humanity.

Take a deep breath. An AI-powered command-and-control (C2) system of that sort may now seem unimaginable. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion.

In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battle Management System (ABMS), a network of AI-enabled sensors and computers that collects and interprets data on enemy operations and offers pilots and ground troops a menu of attack options. As the technology advances, the system will be capable of sending "fire" instructions directly to "shooters," largely bypassing human control.


Will Roper, then the Air Force's assistant secretary for acquisition, technology, and logistics, described it as a tool for machine-to-machine data exchange that offers options for deterrence, for early engagement, or for an "on-ramp" (a military display of force).


Discussing ABMS in a 2020 interview, Roper suggested that the system would eventually need a new name as it evolved:

"Skynet, however much I'd love to do it as a sci-fi thing, is not an option. I don't believe we can go there."

While he may not be able to go there, the rest of us might.

But that's just the beginning. The Air Force's ABMS is intended to form the nucleus of a larger constellation of sensors and computers linking together all U.S. combat forces: the Joint All-Domain Command-and-Control System (JADC2, pronounced "Jad-C-two"). "JADC2 will enable commanders to make better decisions through the collection of data from multiple sensors, the processing of that data with artificial intelligence algorithms, and the recommendation of the optimal weapon to engage the target," the Congressional Research Service reported in 2022.

AI and the Nuclear Trigger

JADC2 is initially designed to coordinate operations among "conventional," or non-nuclear, American forces. Eventually, however, it is expected to link up with the Pentagon's nuclear command, control, and communications (NC3) systems. "JADC2 is intertwined with NC3," General John E. Hyten, vice chairman of the Joint Chiefs of Staff, said in a 2020 interview. He added, in typical Pentagonese: "NC3 must inform JADC2 while JADC2 must inform NC3."

Imagine a future in which a military conflict between the U.S. and China in the South China Sea, or near Taiwan, escalates into ever more intense fighting between the opposing air and naval forces. Imagine JADC2 then ordering an intense bombardment of enemy bases and command systems in China, triggering reciprocal attacks on U.S. installations and a lightning-fast decision by JADC2 to retaliate with tactical nuclear weapons.

2019 Terminator: Dark Fate billboard ad in New York. (Brecht Bug, Flickr, CC BY-NC-ND 2.0)

Analysts in the arms-control community have long worried that such nightmare scenarios could lead to the accidental or unintentional onset of nuclear war. And the growing automation of military C2 systems has generated anxiety not only among those analysts but among national security officials as well.

When I asked Lieutenant General Jack Shanahan, then director of the Pentagon's Joint Artificial Intelligence Center, about this risky scenario in 2019, he replied:

"You will not find a stronger advocate of the integration of AI capabilities into the Department of Defense, but there is one area where I hesitate, and it relates to nuclear command and control."

This, he said, is a "human decision" that must be made, "so we have to be very cautious." Given the "immaturity" of the technology, he added, we need a long time to test, evaluate, and refine AI before applying it to NC3.

In spite of such warnings, the Pentagon has accelerated the development and deployment of automated C2 systems in the years since. In its budget submission for 2024, the Department of Defense requested $1.4 billion for JADC2 and another $1.8 billion for other kinds of AI-related military research.

Pentagon officials admit that it may be some time before robot generals command large numbers of U.S. troops (and autonomous weapons) in battle, but they have already launched projects to test and perfect just such linkages. One example is the Army's Project Convergence, a series of field exercises designed to validate ABMS and JADC2 component systems. In an August 2020 test at the Yuma Proving Ground in Arizona, for instance, the Army used a variety of airborne and ground-based sensors to track simulated enemy forces and then processed that data on AI-enabled computers at Joint Base Lewis-McChord in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. "This entire sequence was supposedly completed within 20 seconds," the Congressional Research Service later reported.



Many aspects of the Navy's equivalent AI program, Project Overmatch, have been kept under wraps. According to Admiral Michael Gilday, chief of naval operations, Overmatch is intended "to enable a Navy that swarms the sea, delivering lethal and nonlethal effects from near and far, on every axis and in every domain." Little else is known about the project.

Human Extinction and 'Flash Wars'

Despite the secrecy surrounding these projects, ABMS, JADC2, Convergence, and Overmatch can be viewed as building blocks for a future Skynet-like mega-network of supercomputers designed to command all U.S. military forces, including nuclear ones, in armed conflict. The more the Pentagon moves in that direction, the closer we will come to a time when AI holds the power of life or death over American soldiers, opposing forces, and any civilians caught in the crossfire.

This prospect should be ample cause for concern. To start with, consider the risk of errors and miscalculations by the algorithms at the heart of such systems. As top computer scientists have warned, these algorithms are capable of inexplicable mistakes and, to use the AI term du jour, "hallucinations": results that seem reasonable but are completely illusory. Under the circumstances, it is not hard to imagine such computers "hallucinating" an imminent enemy attack and launching a war that might otherwise have been avoided.

But that's far from the worst danger. There is also the likelihood that America's adversaries will similarly equip their forces with robot generals. In other words, future conflicts are likely to be fought between AI systems linked to nuclear weaponry, with unpredictable but potentially catastrophic results.

Public sources have revealed little about Russian and Chinese efforts to automate their military command-and-control systems, but both countries are believed to be building networks comparable to the Pentagon's JADC2. As early as 2014, in fact, Russia opened the National Defense Control Center (NDCC) in Moscow, a centralized command post responsible for assessing global threats and initiating whatever military action is deemed necessary. Like JADC2, the NDCC is designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses.

China is said to be pursuing an even more elaborate, if similar, project under the rubric of "Multi-Domain Precision Warfare" (MDPW). According to the Pentagon's 2022 report on Chinese military developments, the People's Liberation Army, China's military, is being trained and equipped to use AI-enabled sensors and computer networks to rapidly identify key vulnerabilities in the U.S. operational system and then combine forces across domains to launch precision strikes against those vulnerabilities.

Picture, then, a future conflict between the U.S. and Russia or China (or both), in which JADC2 commands all U.S. forces while Russia's NDCC and China's MDPW command those countries' forces.

Consider, as well, that all three systems are prone to errors and hallucinations. How safe will humans be once robot generals decide that it's time to "win the war" by nuking their opponents?

Admiral Michael Gilday in 2020. (DoD, Lisa Ferdinando)

If this scenario seems outlandish, consider the view of the National Security Commission on Artificial Intelligence, a congressionally mandated body chaired jointly by Eric Schmidt, former head of Google, and Robert Work, former deputy secretary of defense. In its Final Report, the commission affirmed that while properly designed, tested, and used AI-enabled autonomous weapon systems will bring substantial military and even humanitarian benefits, their unchecked use risks unintended conflict escalation and crisis instability. Such dangers could arise, it stated, because the interaction between AI-enabled and autonomous weapon systems on the battlefield is complex, challenging, and untested, that is, when AI battles AI.

However extreme it may seem, it is entirely possible that opposing AI systems could trigger a catastrophic "flash war," the military equivalent of a Wall Street "flash crash," in which super-sophisticated trading algorithms spark panic selling before human operators can restore order. In the infamous "Flash Crash" of May 6, 2010, computer-driven trading led to a 10 percent drop in the stock market's value.

According to Paul Scharre of the Center for a New American Security, who first studied this phenomenon, a military equivalent of those Wall Street crises would arise when the opposing forces' automated command systems "become stuck in a cascade of escalating encounters." In such a situation, he noted, "autonomous weapons can lead to catastrophic death and destruction in an instant."

At present, no measures are in place to prevent a catastrophe of this sort, nor have the major powers even held talks on devising any. Yet, as the National Security Commission on Artificial Intelligence pointed out, crisis-control measures to guard against automated conflict escalation by these systems are urgently needed. Otherwise, some catastrophic version of World War III seems all too possible. Given the dangerous immaturity of this technology and the reluctance of Beijing, Moscow, and Washington to impose any restraints on the weaponization of AI, a future conflict could lead to the destruction of humanity.