Future of Warfare

Published — December 21, 2018

The Pentagon tries to win hearts and minds in Silicon Valley

Airman 1st Class Sandra, 7th Intelligence Squadron, explains how Project Maven works at a booth during the 70th Intelligence, Surveillance and Reconnaissance Wing's 2018 Innovation Summit on July 24, 2018, at College Park, Maryland. (DVIDS)

An early stumble has prompted the Defense Department to change its approach to the tech community but hasn't lessened its commitment to putting artificial intelligence into weaponry.

This story is published in partnership with WIRED. 

Introduction

The American military is desperately trying to get a leg up in the field of artificial intelligence, which top officials are convinced will deliver victory in future warfare. But internal Pentagon documents and interviews with senior officials make clear that the Defense Department is reeling from being spurned by a tech giant and struggling to develop a plan that might work in a new sort of battle — for hearts and minds in Silicon Valley.

The battle began with an unexpected loss. In June, Google announced it was pulling out of a Pentagon program—the much-discussed Project Maven—that used the tech giant’s artificial intelligence software. Thousands of the company’s employees had signed a petition two months earlier calling for an end to its work on the project, an effort to create algorithms that could help intelligence analysts pick out military targets from video footage.

Inside the Pentagon, Google’s withdrawal brought a combination of frustration and distress — even anger — that has percolated ever since, according to five sources familiar with internal discussions on Maven, the military’s first big effort to utilize AI in warfare.

“We have stumbled unprepared into a contest over the strategic narrative,” said an internal Pentagon memo circulated to roughly 50 defense officials on June 28. The memo depicted a department caught flat-footed and newly at risk of alienating experts critical to the military’s artificial intelligence development plans.

“We will not compete effectively against our adversaries if we do not win the ‘hearts and minds’ of the key supporters,” it warned.

Maven was actually far from complete and cost only about $70 million in 2017, a molecule of water in the Pentagon’s oceanic $600 billion budget that year. But Google’s announcement exemplified a larger public relations and scientific challenge the department is still wrestling with. It has responded so far by trying to create a new public image for its AI work and by seeking a review of the department’s AI policy by an advisory board of top executives from tech companies.

The reason for the Pentagon’s anxiety is clear: It wants a smooth path to use artificial intelligence in weaponry of the future, a desire already backed by the promise of several billion dollars to try to ensure such systems are trusted and accepted by military commanders, plus billions more in expenditures on the technologies themselves.

The exact role that AI will wind up playing in warfare remains unclear. Many weapons with AI will not involve decision-making by machine algorithms, but the potential for them to do that will exist: “Technologies underpinning unmanned systems would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force,” as a Pentagon strategy document said in August.

Developing artificial intelligence, officials say, is unlike creating other military technologies. While the military can easily turn to big defense contractors for cutting-edge work on fighter jets and bombs, the heart of innovation in AI and machine learning resides among the non-defense tech giants of Silicon Valley. Without their help, officials worry, they could lose an escalating global arms race in which AI will play an increasingly important role, something top officials say they are unwilling to accept.

“If you decide not to work on Maven, you’re not actually having a discussion on if artificial intelligence or machine learning are going to be used for military operations,” Chris Lynch, a former tech entrepreneur who now runs the Pentagon’s Defense Digital Service, said in an interview. AI is coming to warfare, he says, and the question is which American technologists are going to engineer it.

Lynch, who recruits technical experts to spend several years working on Pentagon problems before returning to the private sector, said that AI technology is too important, and that the agency will proceed even if it has to rely on lesser experts. But without the help of the industry’s best minds, Lynch added, “we’re going to pay somebody who is far less capable to go build a far less capable product that may put young men and women in dangerous positions, and there may be mistakes because of it.”

Google isn’t likely to shift gears soon. Less than a week after its June announcement that it would not seek to renew the Maven contract, Google released a set of AI principles that specified the company would not use AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Some defense officials have complained since then that Google was being “unpatriotic,” noting that the company was still pursuing work with the Chinese government, the top U.S. competitor in artificial intelligence technology.

“I have a hard time with companies that are working very hard to engage in the market inside of China, and engaging in projects where intellectual property is shared with the Chinese, which is synonymous with sharing it with the Chinese military, and then don’t want to work for the US military,” General Joe Dunford, the chairman of the Joint Chiefs of Staff, commented while speaking at a conference in November.

In December testimony before Congress, Google CEO Sundar Pichai acknowledged that Google had experimented with a program involving China, Project Dragonfly, aimed at developing a model of what government-censored search results would look like in China. Pichai testified, however, that Google currently “has no plans to launch in China.”

Project Maven’s aim was to simplify work for intelligence analysts by tagging object types in video footage from drones and other platforms, helping analysts gather information and narrow their focus on potential targets, according to sources familiar with the partly classified program. But the algorithms did not select the targets or order strikes, a longtime fear of those worried about the intersection of advanced computing and new forms of lethal violence.
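
Maven's internals are partly classified, so the following can only gesture at the general technique the article describes: automated tagging of object types in video frames, with a human analyst still making every targeting decision. This is a minimal, illustrative sketch in Python, assuming an off-the-shelf pretrained detector (torchvision's Faster R-CNN trained on everyday COCO categories) and a hypothetical `footage.mp4` input file; it shows the generic computer-vision task, not Maven's actual software.

```python
# Illustrative only: a generic object-tagging loop over video frames using an
# off-the-shelf pretrained detector. This is NOT Maven's code; it sketches the
# kind of "tag objects in footage for a human analyst" task the article describes.
import cv2  # pip install opencv-python
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
COCO_NAMES = weights.meta["categories"]  # human-readable class names
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

def tag_frame(frame_bgr, score_threshold=0.8):
    """Return (label, score, box) tuples for objects detected in one frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    # Convert HxWxC uint8 image to the CxHxW float tensor the model expects.
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    return [
        (COCO_NAMES[int(label)], float(score), box.tolist())
        for label, score, box in zip(out["labels"], out["scores"], out["boxes"])
        if score >= score_threshold
    ]

cap = cv2.VideoCapture("footage.mp4")  # hypothetical input file
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly one frame per second at 30 fps
        for label, score, box in tag_frame(frame):
            print(f"frame {frame_idx}: {label} ({score:.2f}) at {box}")
    frame_idx += 1
cap.release()
```

In a real analyst workflow the tags would feed a review queue rather than a console printout; the point is only that "tagging object types in video footage" is, at its core, a frame-by-frame detection loop, with target selection left to people.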

Many at Google nonetheless saw the program in alarming terms.

“They immediately heard drones and then they thought machine learning and automatic target recognition, and I think it escalated for them pretty quickly about enabling targeted killing, enabling targeted warfare,” said a former Google employee familiar with the internal discussions.

Google is just one of the tech giants that the Pentagon has sought to enlist in its effort to inject AI into modern warfare technology. Among the others: Microsoft and Amazon. After Google’s announcement in June, more than a dozen large defense firms approached defense officials offering to take over the work, according to current and former Pentagon officials.

But Silicon Valley activists also say the industry cannot easily ignore the ethical qualms of tech workers. “There’s a division between those who answer to shareholders, who want to get access to Defense Department contracts worth multimillions of dollars, and the rank and file who have to build the things and who feel morally complicit for things they don’t agree with,” the former Google employee said.

In an effort to bridge this gulf and dampen hard-edged opposition from AI engineers, the Defense Department has so far undertaken two initiatives.

The first, formally begun in late June, was to create a Joint Artificial Intelligence Center meant to oversee and manage all of the military’s AI efforts, with an initial focus on PR-friendly humanitarian missions. It’s set to be run by Lt. Gen. Jack Shanahan, whose last major assignment was running Project Maven. Its first major initiative, a politically shrewd choice, is to figure out how to use AI to help organize the military’s search-and-rescue response to natural disasters.

“Our goal is to save lives,” Brendan McCord, one of the chief architects of the Pentagon’s AI strategy, said while speaking at a technical conference in October. “Our military’s fundamental role, its mission, is to keep the peace. It is to deter war and protect our country. It is to improve global stability and it’s to ultimately protect the set of values that came out of the Enlightenment.”

The second initiative is to order a new review of AI ethics by an advisory panel of tech experts, the Defense Innovation Board, which includes former Google CEO Eric Schmidt and LinkedIn co-founder Reid Hoffman.

That review, designed to develop “principles” for the use of AI by the military, is being managed by Joshua Marcuse, a former adviser to the Secretary of Defense on innovation issues who is now executive director of the board. The review is expected to take about nine months, during which the advisory panel will hold public meetings with AI experts while an internal Pentagon group weighs the same questions. The board will then forward recommendations to Secretary of Defense James Mattis about the ways AI should or should not be injected into weapons programs.

“This has got to be about actually looking in the mirror and being willing to impose some constraints on what we will do, on what we won’t do, knowing what the boundaries are,” Marcuse said in an interview.

To make sure the debate is robust, Marcuse said that the board is seeking out critics of the military’s role in AI.

“They have a set of concerns, I think really valid and legitimate concerns, about how the Department of Defense is going to apply these technologies because we have legal authority to invade people’s privacy in certain circumstances, we have legal authority to commit violence, we have legal authority to wage war,” he said.

Resolving those concerns is critical, officials say, because of the difference in how Washington and Beijing manage AI talent. China can conscript experts to work on military problems, whereas the United States has to find a way to interest and attract outside experts.

“They have to choose to work with us, so we need to offer them a meaningful, verifiable commitment that there are real opportunities to work with us where they can feel confident that they’re the good guys,” Marcuse said.

Despite his willingness to discuss potential future constraints on AI usage, Marcuse said he didn’t think the board would try to change the Pentagon’s existing policy on autonomous weapons that depend on AI, which was put in place by the Obama administration in 2012.

That policy, which underwent a minor, technical revision by the Trump administration in May 2017, doesn’t prevent the military from using artificial intelligence in any of its weapons systems. It mandates that commanders have “appropriate levels of human judgment” over any AI-infused weapons systems, although the phrase isn’t further defined and remains a source of confusion within the Pentagon, according to multiple officials there.

It does, however, require that any weapons system containing a computer programmed to initiate deadly action undergo a special review by three senior Pentagon officials before it is purchased. To date, no such review has been undertaken.

In late 2016, during the waning days of the Obama administration, the Pentagon took a new look at the 2012 policy and decided in a classified report that no major change was needed, according to a former defense official familiar with the details. “There was nothing that was held up, there was no one who thought ‘oh, we have to update the directives,’” the former official said.

The Trump administration has nonetheless discussed internally making it clearer to weapons engineers within the military, who it fears have been reluctant to inject AI into their designs, that the policy doesn’t ban the use of autonomy in weapons systems. The contretemps in Silicon Valley over Project Maven at least temporarily halted that discussion, prompting the department’s leaders to try first to win the support of the Defense Innovation Board.

But one way or another, the Pentagon intends to integrate more AI into its weaponry. “We’re not going to sit on the sidelines as a new technology revolutionizes the battlefield,” Marcuse said. “It’s not fair to the American people, it’s not fair to our service members who we send into harm’s way, and it’s not fair to our allies who depend on us.”
