23 September 2019

Intel, Ethics, and Emerging Tech: Q&A with Cortney Weinbaum


Cortney Weinbaum studies intelligence and cyber policy as a senior management scientist at RAND. Her research has helped the intelligence community improve its data collection and analysis and identify emerging technologies and their impact on operations. She began her career as an intelligence officer, designing advanced sensors for intelligence gathering. She is the recipient of a General Electric Fellowship and Grant for Women in Physics & Computer Science and a Defense Intelligence Agency Humanitarian Award. Weinbaum serves on the board of directors of Carrie Simon House, a charity that provides transitional housing, life skills, and support and mentoring to young, homeless mothers in the Washington, D.C., area.

What got you interested in intelligence and national security?

I was studying physics in college, with absolutely no idea what I wanted to do for a living. And then, in the beginning of my junior year, 9/11 happened. I reached out to my university's alumni network in Washington and asked, 'What can I do?' My mentors said, 'You have a physics degree? Send in a resume.' I ended up taking a summer internship with the Defense Intelligence Agency that led to a job after college.


Your research has examined challenges facing the intelligence community. What worries you about the future of intelligence?

You can be successful at collecting all of the data in the world, but then be unsuccessful at making sense of it fast enough to be able to respond. We used to be so limited in what we could collect. The first satellites in the Sputnik era could see only a small part of Earth at any given time, but that was enough, because we really only had to look at the places where we knew the Soviets had their nuclear weapons. Well, now we have commercial satellites imaging the entire world every day. We've been so successful at collecting so much more data that now we need to figure out how to make sense of it all.

You've looked at artificial intelligence as one solution. What are some of the risks?

I wrote an article in 2016 where I started to question what some of those risks could be. One example I gave was in indications and warning—the field of intelligence that gives you early warning that a country is about to attack. This is the area of intelligence that is ripe for AI and machine learning because time really matters. Sometimes you only get seconds or minutes of warning. You can't wait for an analyst to wake up in the morning, show up at work, get a cup of coffee, and read the latest reporting. The algorithms never sleep.

But if we don't really understand what's going on inside that black box, you might end up with a situation where your algorithm tells you that a country is about to attack, but a human analyst looks at the same data and says, 'I don't see it.' Next thing, there'll be a U.S. president at 2 a.m. having to decide, 'Do I respond or not?' If we don't understand how our algorithms are making their decisions, then we can't judge whether we agree with them.
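
Here is a minimal sketch in Python of that explainability point (purely illustrative: the indicator names, data, and model are invented for this example, not drawn from the interview). An interpretable model exposes the weights behind its warning, so an analyst can check whether they agree with its reasoning; a black box offers no such handle.

    # A toy illustration (hypothetical): an interpretable warning model whose
    # decisions a human analyst can inspect and second-guess.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented "indicators" an indications-and-warning system might watch.
    indicators = ["troop_movement", "comms_spike", "logistics_activity"]
    X = rng.normal(size=(500, 3))
    # Synthetic ground truth: "attacks" correlate with the first two indicators.
    y = (1.5 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 1.0

    model = LogisticRegression().fit(X, y)

    # Unlike a black box, the fitted coefficients show why the model warns,
    # so an analyst can compare its reasoning against their own.
    for name, coef in zip(indicators, model.coef_[0]):
        print(f"{name}: weight {coef:+.2f}")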

Your most recent study looked at ethics in scientific research. What made you want to look at that question?

That 2016 article led to a really interesting conversation with the director of the Intelligence Advanced Research Projects Activity. IARPA is responsible for research and development for the intelligence community. They were concerned about the lack of clear and well-defined ethics in emerging disciplines. We talked about AI, synthetic biology, and neurotechnology. They asked us if we could study how research ethics are created across scientific disciplines.

We ended up finding ten ethical principles that are common across all scientific disciplines. So if you don't know where to begin, there's your starting point: ten principles that every scientific discipline agrees on.

What's next in your research?

AI will continue to be a hot topic. There's been a lot of discussion about how to create ethics in AI. I would argue we don't need to create them; we actually need to figure out how to apply existing ethics to AI. I'm also working with different agencies to help figure out what the future of their technologies will look like.

Anything else?

As a woman with a physics degree, I feel like I have an obligation to tell women and minorities that this is just an amazing discipline. And there are role models out there. You might not know any in your community, but there are people like you—whatever 'like you' means—who are doing really interesting science, and you should pursue it if you love it. I went to a university with more than 30,000 people, and I was one of three women graduating with a physics degree in my year. If you're trying to find someone you can reach out to, find a professional society, find someone on Twitter. I think people really welcome that engagement and are taking on the role of mentor these days, because we feel such a personal obligation to help the next generation.
