As AI adoption continues to widen, the Department of Defense is developing AI ethics guidelines for tech contractors

Ricky S
4 min read · Nov 21, 2021


When Google employees learned of their company’s involvement in Project Maven, a contentious US military effort to use artificial intelligence to analyze surveillance video, they were furious. Thousands of them signed a letter of protest to Google’s leadership stating, “We believe that Google should not be in the business of war.” About a dozen employees resigned, and Google did not renew the contract when it expired in 2019.

Project Maven is still active, and other tech giants, including Amazon and Microsoft, have since taken Google’s place. But the US Department of Defense knows it has a trust problem, and that is something it must tackle if it wants to keep pace with cutting-edge technologies, particularly AI, which will require working with Big Tech and other non-military organizations.

The Defense Innovation Unit (DIU), which awards DoD contracts to companies, has released “ethical artificial intelligence” guidelines that third-party developers will be required to follow when building AI for the military, whether the system is for human resources or for target recognition.

The guidelines walk companies through a step-by-step process to follow during planning, development, and deployment. They include procedures for determining who might use the technology, who might be affected by it, what problems might arise, and how those problems can be avoided, both before and after a system is built.
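The guidelines themselves are worksheets, not software, but the workflow they describe maps naturally onto a simple checklist structure. Below is a minimal, hypothetical sketch in Python of how a contractor might track such questions across the three phases; the phase names, prompts, and field names are illustrative assumptions, not taken from the DIU documents.

```python
from __future__ import annotations

from dataclasses import dataclass, field

# Hypothetical sketch only: the DIU guidelines are worksheets, not code.
# Phase names and prompts below are illustrative, not quoted from the guidelines.

@dataclass
class ReviewItem:
    question: str           # e.g. "Who might be affected by this system?"
    answer: str = ""        # filled in by the development team
    resolved: bool = False  # True once the concern is addressed or explicitly accepted

@dataclass
class PhaseReview:
    phase: str                                   # "planning", "development", or "deployment"
    items: list[ReviewItem] = field(default_factory=list)

    def open_concerns(self) -> list[ReviewItem]:
        """Items that still block sign-off for this phase."""
        return [item for item in self.items if not item.resolved]

def build_review() -> list[PhaseReview]:
    """One review per phase, mirroring the planning/development/deployment split."""
    prompts = {
        "planning": [
            "Who might use the technology?",
            "Who might be affected by it?",
        ],
        "development": [
            "What problems might arise, and for whom?",
            "How will unintended harms be detected?",
        ],
        "deployment": [
            "How are mitigations monitored after release?",
        ],
    }
    return [
        PhaseReview(phase, [ReviewItem(q) for q in questions])
        for phase, questions in prompts.items()
    ]

if __name__ == "__main__":
    for review in build_review():
        print(f"{review.phase}: {len(review.open_concerns())} open concern(s)")
```

The design point is modest: each phase’s concerns are recorded explicitly and must be resolved, or consciously accepted, before sign-off, which is the spirit of the step-by-step process described above.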

If they are adopted or adapted by other agencies, the guidelines could change how AI is developed across the US government. Bryce Goodman of the Defense Innovation Unit, who coauthored the guidelines, says he and his colleagues have delivered them to NOAA and the Department of Transportation and are in talks with ethics groups at the Department of Justice, the General Services Administration, and the Internal Revenue Service.

The goal, Goodman says, is to ensure that tech contractors adhere to the DoD’s existing ethical principles for AI. The department announced those principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of prominent technology experts and businesspeople set up in 2016 to bring some of Silicon Valley’s spark to the US military. Former Google CEO Eric Schmidt chaired the board until September 2020, and its current members include Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory.

Some detractors, though, are skeptical that the work will result in substantial change.

During the study, the board consulted a range of experts, including vocal opponents of the military’s use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the Project Maven protests.

Whittaker, now faculty director of the AI Now Institute at New York University, was not available for comment. But according to Courtney Holsworth, a spokesperson for the institute, she attended one meeting, where she clashed with senior members of the board, including Schmidt, about the direction it was taking. “She was never properly consulted,” says Holsworth. “Claiming that she was could be read as a kind of ethics-washing, in which the presence of dissenting voices during a small part of a long process is used to pretend that a given outcome had broad support from key stakeholders.”

Can the DoD’s guidelines still help build trust without that broad support? “There will be some people who will never be satisfied by any set of ethics guidelines the Department of Defense produces, because they find the idea absurd,” says Goodman. “It’s important to be honest about what guidelines can and cannot accomplish.”

The guidelines, for example, say nothing about the use of lethal autonomous weapons, a technology that some campaigners believe should be banned. But as Goodman points out, regulations governing such technology are decided at a higher level. The guidelines’ aim is to make it easier to build AI that complies with those regulations, and part of that process is making explicit any concerns that third-party developers raise. “Deciding not to pursue a particular system is a valid use of these guidelines,” says Jared Dunnmon of the DIU, who coauthored them. “You can determine whether it’s a good idea or not.”

According to one Defense Department advisor, “constructive engagement” will be more effective than opting out.

Margaret Mitchell, an AI researcher at Hugging Face who co-led Google’s Ethical AI team with Timnit Gebru before both were fired, agrees that ethics guidelines could, at least in principle, help make a project more transparent for the people working on it. Mitchell had a front-row seat to the Maven protests. “People ended up leaving specifically because of the lack of any sort of clear guidelines or transparency,” she says.

For Mitchell, the issues aren’t black and white. “I think some people at Google felt that all work with the military was bad,” she says. “I’m not one of those people.” She has been talking with the DoD about how it can partner with companies in ways that uphold its ethical principles.

She believes the Department of Defense still has a long way to go before it earns the trust it needs. One problem is that some of the guidelines’ wording is open to interpretation. For example, they state: “The department will take deliberate steps to minimize unintended bias in AI capabilities.” What about intended bias? That may sound like nitpicking, but differences in interpretation hinge on this kind of detail.

Monitoring the use of military technology is hard because it typically requires security clearance. To address this, Mitchell would like to see DoD contracts provide for independent auditors with the necessary clearance, who can reassure companies that the guidelines really are being followed. “Employees need some assurance that guidelines are being interpreted correctly,” she says.
