Pentagon’s AI initiatives accelerate hard decisions on lethal autonomous weapons

By FRANK BAJAK

Nov. 25, 2023

NATIONAL HARBOR, Md. (AP) — Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces’ missions and helped Ukraine in its war against Russia. It tracks soldiers’ fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative — dubbed Replicator — seeks to “galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many,” Deputy Secretary of Defense Kathleen Hicks said in August.

While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy – including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.

That’s especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them — and none of China, Russia, Iran, India or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.

Insider Q&A: Pentagon AI chief on network-centric warfare, generative AI challenges

Nov. 20, 2023

By FRANK BAJAK

The Pentagon’s chief digital and artificial intelligence officer, Craig Martell, is alarmed by the potential for generative artificial intelligence systems like ChatGPT to deceive and sow disinformation. His talk on the technology at the DefCon hacker convention in August was a huge hit. But he’s anything but sour on reliable AI.

Not a soldier but a data scientist, Martell headed machine-learning at companies including LinkedIn, Dropbox and Lyft before taking the job last year.

Marshalling the U.S. military’s data and determining what AI is trustworthy enough to take into battle is a big challenge in an increasingly unstable world where multiple countries are racing to develop lethal autonomous weapons.

The interview has been edited for length and clarity.

Q: How should we think about AI use in military applications?

A: All AI is, really, is counting the past to predict the future. I don’t actually think the modern wave of AI is any different.

Q: Pentagon planners say the China threat makes AI development urgent. Is China winning the AI arms race?

A: I find that metaphor somewhat flawed. When we had a nuclear arms race it was with a monolithic technology. AI is not that. Nor is it a Pandora’s box. It’s a set of technologies we apply on a case-by-case basis, verifying empirically whether it’s effective or not.

Q: The U.S. military is using AI tech to assist Ukraine. How are you helping?

A: Our team is not involved with Ukraine other than to help build a database for how allies provide assistance. It’s called Skyblue. We’re just helping make sure that stays organized.

Q: There is much discussion about autonomous lethal weaponry – like attack drones. The consensus is humans will ultimately be reduced to a supervisory role — being able to abort missions but mostly not interfering. Sound right?

A: In the military we train with a technology until we develop a justified confidence. We understand the limits of a system, know when it works and when it might not. How does this map to autonomous systems? Take my car. I trust the adaptive cruise control on it. The technology that is supposed to keep it from changing lanes, on the other hand, is terrible. So I don’t have justified confidence in that system and don’t use it. Extrapolate that to the military.

Q: The Air Force’s “loyal wingman” program in development would have drones fly in tandem with fighter jets flown by humans. Is the computer vision good enough to distinguish friend from foe?

A: Computer vision has made amazing strides in the past 10 years. Whether it’s useful in a particular situation is an empirical question. We need to determine the precision we are willing to accept for the use case and build against that criteria – and test. So we can’t generalize. I would really like us to stop talking about the technology as a monolith and talk instead about the capabilities we want.

Big Pentagon internet mystery partially solved

By FRANK BAJAK

April 25, 2021

BOSTON (AP) — A very strange thing happened on the internet the day President Joe Biden was sworn in. A shadowy company residing at a shared workspace above a Florida bank announced to the world’s computer networks that it was now managing a colossal, previously idle chunk of the internet owned by the U.S. Department of Defense.

That real estate has since more than quadrupled to 175 million addresses — about 1/25th the size of the current internet.

“It is massive. That is the biggest thing in the history of the internet,” said Doug Madory, director of internet analysis at Kentik, a network operating company. It’s also more than twice the size of the internet space actually used by the Pentagon.

After weeks of wonder by the networking community, the Pentagon has now provided a very terse explanation for what it’s doing. But it has not answered many basic questions, beginning with why it chose to entrust management of the address space to a company that seems not to have existed until September.
