16 April 2019

Economics Sure, but Don’t Forget Ethics with Artificial Intelligence

Richard Kuzma and Tom Wester

The widening rift between the Pentagon and Silicon Valley endangers national security in an era when global powers are embracing strategic military-technical competition. As countries race to harness artificial intelligence, the next potentially offsetting technology, ceding a competitive edge could drastically change the landscape of the next conflict. The Pentagon has struggled, and continues to struggle, to make a solid business case for technology vendors to sell their products to the Defense Department. That struggle is made especially urgent by Russia and China’s growing artificial intelligence capabilities and newly created national strategies. While making the economic case to Silicon Valley is critical, building a lasting relationship will require embracing the ethical questions surrounding the development and employment of artificial intelligence on the battlefield. Ultimately, this requires more soul-searching on the part of both military leaders and technologists.

It is hard to overstate the importance of ethics in discussions of artificial intelligence cooperation between the military and the private sector. Google chose not to renew its contract for Project Maven, a Defense Department project using artificial intelligence for drone targeting, after thousands of employees protested the company’s involvement. Thousands of artificial intelligence researchers followed that petition by signing a pledge not to build autonomous weapons. Computer science students from Stanford and other top-tier universities, a primary talent pipeline for tech companies, wrote a letter to Google’s chief executive officer saying they would not interview with Google if it did not drop its Maven contract.



Google Employees Petition Against Project Maven (sUAS News)

Google is not the only company where working with the military has prompted debate and changed corporate behavior. Microsoft employees wrote an open letter demanding the company not bid on the Pentagon’s new cloud contract or sell augmented reality headsets to the Army, and Amazon employees wrote a letter to Jeff Bezos demanding he not sell facial recognition technology to the U.S. government, saying, “We demand a choice in what we build, and a say in how it is used.” Even at Clarifai, a company heavily involved in Project Maven, the project was said to have “damaged morale, complicated recruitment, and undermined trust within the company.”

The Defense Department is beginning to recognize the importance of artificial intelligence and machine learning, and it realizes the engine of artificial intelligence innovation lies almost exclusively in the private sector. As such, the Pentagon needs help from commercial leaders and innovators to stay competitive. This requires courting tech chief executive officers charged principally with increasing shareholder value. More importantly, it necessitates working closely with the engineers who develop the technology. As in the military, people are a technology company’s most valuable resource, so tech executives should pay as close attention to their engineers as to their bottom line. Additionally, the Pentagon must not only demonstrate it is an attractive potential customer, but also show how the handiwork of engineers will help, rather than harm, humanity. In this effort, the Defense Department is making significant progress in marketing itself as a customer and in creating mechanisms for companies to work effectively with the military; however, it still needs to do more to address the ethical challenges if it wants to build a lasting relationship.

MAKING THE BUSINESS CASE

The Pentagon has made great strides in positioning itself as a more viable tech customer. For example, the Defense Innovation Unit has successfully executed contracts with dozens of technology companies, not only in Silicon Valley but across the country. It has lowered barriers to entry by creating the Commercial Solutions Opening, which de-jargons military requirements by publicizing plain-language problem statements on easily accessible websites rather than the requirements documents, sometimes hundreds of pages long, found on Fed Biz Opps, a labyrinth only defense contractors can navigate. Additionally, Other Transaction Authorities, the contracting mechanism used in the Commercial Solutions Opening, are more flexible and operate orders of magnitude faster than traditional defense contracting. These authorities do more to protect vendors’ intellectual property rights and provide non-dilutive capital to companies, making them a great resource for start-ups and amplifying any external (non-Department of Defense) investments.

But the Defense Innovation Unit is not the only pathway. Technology companies not yet mature enough to sell a commercial product to the military on a short timescale still have options: the military offers early-stage research and development funding through Small Business Innovation Research grants and partners with technology accelerators, like Techstars.

Then-Secretary of Defense Ash Carter at the Texas Advanced Computing Center and Visualization Lab, March 31, 2016. (SFC Clydell Kinchen/DOD Photo)

Ultimately, the Defense Department is recognizing the danger of its sluggish acquisition system and taking concrete steps to make doing business with the military easier. Cultural change must follow these efforts, but it is promising that the heads of Army, Navy, and Air Force acquisition all have backgrounds in rapid acquisition. Yet, while the Pentagon has made strides in building a better economic case for company executives, it has shown less concrete progress in addressing the ethical concerns of the researchers and engineers pioneering artificial intelligence.

ALL ABOUT TALENT

As one observer has put it, “The real ‘arms race’ in artificial intelligence is not military competition, but the battle for talent.” The Department of Defense Chief Information Officer recently testified before the House Armed Services Committee’s Emerging Threats and Capabilities Subcommittee, saying the Pentagon’s new Joint Artificial Intelligence Center is “all about talent.” The Department of Defense’s Chief Data Officer said, “Finding, borrowing, begging and creating [artificial intelligence] talent is a really big challenge.” The best artificial intelligence talent is in the private sector, so the Department of Defense must go outside the Pentagon to tap into this talent pool.

There are only a few thousand artificial intelligence researchers worldwide capable of leading new artificial intelligence projects. Known as 10x programmers, these individuals are capable of doing the work of 10 people. Tech giants like Google, Amazon, and Microsoft spend hundreds of millions of dollars to acquire and retain this talent; after all, it is the lifeblood of their organizations. Without it, their products suffer, profits shrink, and business models collapse. In her War on the Rocks article, Rachel Olney notes, “When Google chose not to pursue follow-on work from the Maven contract, it was driven by an employee petition. For a company as large as Google, Maven was a fairly small contract and the public relations fiasco was hardly worth the revenue it would bring.” At the heart of that economic decision, however, sits an ethical one: should artificial intelligence experts build products for war? If the Pentagon cannot convince engineers to do so, its efforts to integrate and capitalize on the novel commercial products they create will prove futile.

NEITHER THE COMMERCIAL SECTOR NOR THE MILITARY IS INFALLIBLE

Some fear Russia and China will militarize artificial intelligence without the ethical constraints to which the United States military is subject. The Chinese government has already rolled out ethically murky projects such as its new social credit system, built on a panopticon of financial surveillance and facial recognition. But the U.S. military operates on a code of ethics, not a code of economics, and recent studies have found Americans hold the most institutional trust in the military; Amazon and Google rank second and third. That said, technologists will not, and should not, take at face value the Pentagon’s assurances that the military will use their creations for moral purposes. History is filled with technologists who regretted the end uses of their creations: Alfred Nobel is said to have established the Nobel Peace Prize out of regret over creating dynamite, and Kamran Loghman, who developed weapons-grade pepper spray, lamented its use by police against protesters. The story of the world’s most destructive weapon, the atomic bomb, cannot be told without the regret of its contributors. Robert Oppenheimer, the father of the atomic bomb, later reflected, “We [at the Manhattan Project] thought of the legend of Prometheus, of that deep sense of guilt in man’s new powers that reflects his recognition of evil, and his long knowledge of it.” Albert Einstein, whose letter to President Roosevelt helped set the bomb’s development in motion, later said, “Had I known that the Germans would not succeed in developing an atomic bomb, I would have done nothing.”

This is not to say the commercial sector is infallible. After all, market forces alone have not consistently delivered alignment with societal values, especially over time. Facebook, for instance, did not do enough to prevent ethnic cleansing in Myanmar and paid teenagers to install spyware on their devices. Google has been accused of violating European privacy laws in seven countries and planned to launch censored search engines in China and Russia. By cooperating with tech companies on ethics, the Pentagon does more than educate itself on emerging ethical dilemmas; it also has the chance to positively influence the future of artificial intelligence.

WHAT THE PENTAGON SHOULD FOCUS ON

The Google Maven letter, signed by over 3,100 employees at the company, says, “We cannot outsource the moral responsibility of our technologies to third parties.” This is something both the Pentagon and Silicon Valley can agree on, which is why technologists need to be intimately involved in how their work is deployed and battlefield commanders must understand the risks of using artificial intelligence when making life-and-death decisions. Both algorithms and the data that feed them are shaped by people, who carry inherent biases, so small flaws can have large, unforeseen impacts. Codified discrimination is a key unintended consequence of artificial intelligence: Google’s photo-labeling algorithm misidentified photos of African-American individuals as gorillas, a Flickr algorithm labeled Native American dancers as “costume,” and Nikon cameras asked “Did someone blink?” after photographing an Asian-American person’s face. In military systems, bias can carry a far higher cost; misidentifying a military-aged male could threaten his life.
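
One concrete way engineers surface this kind of systematic bias is to compare a model’s error rates across demographic groups. The Python sketch below is purely illustrative, with hypothetical group names, labels, and data rather than anything drawn from a fielded system; it shows the shape of such an audit, not a complete fairness methodology.

    # A minimal, hypothetical bias audit: compare misclassification rates by group.
    from collections import defaultdict

    def error_rates_by_group(records):
        """records: iterable of (group, true_label, predicted_label) tuples."""
        counts = defaultdict(lambda: {"errors": 0, "total": 0})
        for group, truth, prediction in records:
            counts[group]["total"] += 1
            if prediction != truth:
                counts[group]["errors"] += 1
        return {group: c["errors"] / c["total"] for group, c in counts.items()}

    # Hypothetical evaluation records from a labeled test set.
    records = [
        ("group_a", "person", "person"),
        ("group_a", "person", "person"),
        ("group_b", "person", "not_person"),  # the kind of error described above
        ("group_b", "person", "person"),
    ]

    for group, rate in sorted(error_rates_by_group(records).items()):
        print(f"{group}: error rate {rate:.0%}")
    # A persistent gap between groups is one measurable signal of codified
    # discrimination, and a reason evaluation data must represent everyone.

Real audits use far larger test sets and multiple metrics, since false positives and false negatives carry very different costs in a targeting context, but the principle is the same: bias must be measured before it can be addressed.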

Newsweek covers from September 12, 1983 (left), on the Soviet downing of Korean Air Lines Flight 007, and July 18, 1988 (right), on the USS Vincennes incident. (Newsweek/Wikimedia)

Yet, just as the Pentagon seeks the best technologists, it must seek out brilliant ethicists to address technology deployment problems. Department of Defense Directive 3000.09 on autonomy in weapon systems is a start, but only that. Some suggest keeping humans in the loop will suffice. However, even with human involvement, automated decision-making can result in tragedy, as when the USS Vincennes shot down a civilian passenger jet, Iran Air Flight 655, in 1988. Deeper conversations are needed on how to find, address, and combat bias in algorithms, ensure proper disclosure of algorithm faults and failures, and consider how artificial intelligence interacts with human rights. All these efforts must be underpinned by public, accountable conversations about how artificial intelligence should be used. Even if technologists refuse to develop or sell certain technologies to the military, their hesitations should be held in high esteem, because they can help the military mitigate worst-case scenarios. To use the phrasing of computer security expert Bruce Schneier, the Pentagon must ensure it is “doing the right things [morally]” rather than just “doing things right [by the letter of the law].”

THE DEPARTMENT OF DEFENSE MUST ACT

The Pentagon is moving in the right direction in pursuing safe and ethical artificial intelligence principles, but it has failed to clearly articulate how it will take action. The Joint Artificial Intelligence Center was established with a “strong commitment to military ethics and [artificial intelligence] safety,” and the Defense Innovation Board is creating ethics guidelines and recommendations for artificial intelligence development and implementation, currently holding public listening sessions to hear the concerns of industry, academia, and the public. Furthermore, the recently released unclassified summary of the 2018 Department of Defense Artificial Intelligence Strategy includes a section specifically outlining the Department’s desire to serve as a leader “in military ethics and [artificial intelligence] safety.” These commitments and listening sessions are critical to starting an open, direct, and ongoing dialogue between technologists and the Pentagon.

But dialogue means little without concrete action behind it. What remains to be seen is how deeply the Defense Department is willing to integrate with the engineers, designers, and ethicists developing artificial intelligence products for the military. To build upon its artificial intelligence ethics efforts and concretely demonstrate its commitment to responsible and ethical artificial intelligence use, the Pentagon should:

Solidify an ongoing dialogue in which trusted advisors can be briefed on and give input to ongoing military projects. These experts should have a role similar to that of Defense Innovation Board experts, but with more working-level knowledge of artificial intelligence research and experience deploying artificial intelligence systems at scale. Follow Facebook’s lead: after its privacy issues, it hired three of its biggest critics.

Integrate wherever possible. Bring in artificial intelligence talent to help lead projects such as reforming the military’s talent management system or streamlining health care. Allow technologists to gain exposure to the military, and the military to capitalize on the talent of leading-edge technologists. Start with non-controversial projects, such as enhancing individualized job detailing and ongoing efforts in predictive maintenance and humanitarian response. Once successful, the Pentagon should slowly transition toward efforts with a more apparent military focus, such as imagery analysis and target classification.

Enable military members to capitalize on experiences working in the private sector, specifically with regard to artificial intelligence. Expand existing programs such as the U.S. Navy’s Tours with Industry and the U.S. Air Force’s Education with Industry, and enable highly talented individuals with technical experience to pursue extended artificial intelligence-related residencies.

Create virtuous circles. The technology community will not trust the ethics advisor program from the outset. The Department of Defense must empower the ethics advisors to apply the brakes, at first on small-scale, non-controversial projects like back-end business practices (e.g., logistics, maintenance, talent management). Learning to identify, acknowledge, address, and mitigate bias in non-controversial projects will provide lessons for doing so in higher-stakes projects. Small successes will build trust and confidence in the process.

CONCLUSION

Artificial intelligence will likely continue to proliferate at an increasing pace, driven largely by the commercial sector. While the economics of contracting will play a key role in bringing companies to the table to work with the U.S. Department of Defense, ethics will be critical to ensuring they remain for the long haul. The ethical dialogue is an important one and cannot simply be brushed off as “catchy headlines or impassioned debate.” The United States has a free and open society, where technologists with ethical qualms are not forced to cooperate with the government; Russia and China do not face this constraint. If the U.S. military is serious about maintaining its competitive edge in artificial intelligence, ethics must play a central role in its strategy for working with the private sector.

Richard Kuzma is a Navy surface warfare officer passionate about how the Defense Department adapts to emerging technologies, particularly artificial intelligence. He is an alum of the Defense Innovation Unit and the Harvard Kennedy School, and he is an affiliate at Harvard’s Technology and Public Purpose Project. 

Tom Wester is a Navy surface warfare officer. He received a bachelor’s degree in mathematics from the United States Naval Academy and a master’s degree in management science and engineering from Stanford University, where he focused on national security and technology policy. Tom is a plank owner of the Defense Innovation Unit and served as a project manager for its space portfolio.

The views expressed here are the authors’ and do not reflect those of the Defense Innovation Unit, the Department of the Navy, the Department of Defense, or the U.S. Government.
