18 June 2018

How Using Algorithms Can Worsen Inequality


Algorithms, the step-by-step procedures computers follow to accomplish a task, are used in our daily digital lives to do everything from making airline reservations to searching the web. They are also increasingly being used in public services, such as systems that decide which homeless person gets housing. Virginia Eubanks, a political science professor at the State University of New York at Albany, thinks this kind of automation can inadvertently hurt the most vulnerable. She joined the Knowledge@Wharton show on SiriusXM channel 111 to explain this problem, which is the topic of her book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.

An edited transcript of the conversation follows.

Knowledge@Wharton: There’s a belief that algorithms are made without bias, but you are dispelling that myth. What do you want people to understand about automation?

Virginia Eubanks: We often believe that our digital decision-making tools, like algorithms or artificial intelligence or integrated databases, are more objective and more neutral than human beings. But one of the arguments I make in Automating Inequality is that we’re smuggling moral and political assumptions into them about who should share in American prosperity.

I write in the book about three different systems. One is an attempt to automate all of the eligibility processes for the state of Indiana in 2006. The second is an electronic registry of the unhoused in Los Angeles in 2013. And the third is a predictive model that’s supposed to be able to forecast which children will be victims of abuse or neglect in the future in Allegheny County, Pa., which is where Pittsburgh is located.

Knowledge@Wharton: Let’s talk first about the Los Angeles example. What I found incredible is the fact that if you were in jail or in prison, that’s considered to be housing, so your score can actually be lowered.

Eubanks: The story I tell in Los Angeles is about the Coordinated Entry System. It was billed as the Match.com of homeless services, and there are some really great intentions behind it and some really smart people working on it. The intention is to match the most vulnerable unhoused people with the available housing resources. There are 58,000 unhoused people in Los Angeles, the second-highest rate in the country, and 75% of those people are completely unsheltered. This is a huge humanitarian crisis, so it makes sense that people are working hard to figure out how to get help to the people most in need as fast as they can.

But one of the things that’s really troubling about this system is that it asks these incredibly invasive and even incriminating questions about people’s behavior to put them on this moral spectrum from most vulnerable to least vulnerable. It asks questions like, ‘Are you having unprotected sex? Are you running drugs for money? Is there an open warrant on you? Have you thought about harming yourself or others?’ The responses to that survey are available to 168 different agencies across Los Angeles County. Some of the information is even available to the Los Angeles Police Department based on an oral request, so no warrant process, no written requests.

“We’re smuggling moral and political assumptions into [algorithms] about who should share in American prosperity.”

The folks who are unhoused and taking this survey, folks like Gary Boatwright, whom I write about in the book, really feel like they’ve been asked to criminalize themselves in exchange for a slightly better lottery number for housing. When they don’t get resources, it leaves them exposed, because some of those day-to-day activities of being unhoused, like having a place to put your stuff, having a place to go to the bathroom and drinking a beer, are illegal when you do them in public. Just staying out on the streets longer puts you at really high risk for ending up in prison, and then the algorithm scores you lower because prison counts as housing.

Knowledge@Wharton: Tell us about the example of Allegheny County.

Eubanks: I looked at a system that’s called the Allegheny Family Screening Tool, or the AFST. That tool is basically a statistical regression that was run against a big data warehouse that the county has kept since 1999. A team of economists and social scientists built this model, based on the information in that data warehouse, that is supposed to be able to rate the children who are reported to the abuse and neglect hotline in terms of the danger of their being abused or neglected in the future. This tool is supposed to help case workers make better decisions about which calls and which reports they should screen in for investigation.

But there are some things about this tool that are a little troubling. I’ll say first that I think this tool often confuses parenting-while-poor with poor parenting because it only has access to data about families that have received public resources. The kinds of harms that might be happening to middle-class children aren’t represented in the database. And being rated as a risk pulls you into a feedback cycle of more surveillance, so more data, so a higher score. That can really make families feel like they’re on a knife’s edge, like reaching out for any kind of public resource might mean that they’re rated more highly for potential abuse, or even that their children will be rated more highly for potential abuse when they become parents themselves.

Knowledge@Wharton: In the examples that you give, you’re talking about a sector of people who are going to be affected by a lot of these issues more than the general public. Does this kind of automation compound their problems?

Eubanks: Many of the families that I spoke to certainly feel that way. From the point of view of the families who are the targets of these systems, it certainly feels like a piling on. It feels like they’re under the microscope. It feels to them like they have been singled out for extra scrutiny and surveillance that leaves them in really dangerous situations or diverts them from getting the resources they need to keep their families safe.

One of the origin points for this book is a conversation I had in 2000 with a young mom who was on public assistance and who said to me, and I think very generously, “You all.” She meant professional, middle-class people. “You all should pay attention to what’s happening to us because they’re coming for you next.”

One of the lessons that folks who are not currently poor and working-class people should take from this book is that after 50 years of digital experiments in public services, in places where there’s a low expectation that your rights will be respected, the architects of these systems are now really aiming for middle-class entitlement programs: Social Security, disability, unemployment, Medicare.

If you look at the Trump administration’s budget proposal, they suggest that they can save $189 billion in these programs over the next 10 years by reducing improper payments and improving program integrity. What that means is collecting and sharing more data, including buying data from commercial data brokers and spreading these kinds of systems way beyond just the public service systems that they’re in now.


“The architects of these systems are now really aiming for middle-class entitlement programs.”

Knowledge@Wharton: Are a lot of these agencies relying too much on algorithms?

Eubanks: We might be seeing the beginning of the end of what I think of as the Wild West period of big data. I think most folks who were designing these systems have felt like they can collect anything they want, use it however they want and keep it for as long as they want.

What I’m hearing in communities, particularly marginalized communities that are deeply impacted by these systems, is that people are not going to stand for that anymore. People have real concerns around informed consent. People have real concerns about how their data is being shared, whether it’s legal and whether it’s morally right.

And they’re developing self-defense mechanisms and strategies around these systems. When I talk to data scientists, I often say it’s time to adjust. We’ve had this remarkably open field for a long time, but I do believe that moment is ending.

Knowledge@Wharton: You’re talking about a digital bias against the poor. But that bias has been a historical norm.

Eubanks: Yes, that’s one of the major points of the book. I talk about this stuff as building a digital poorhouse. I use that metaphor specifically because I think that these systems, although we talk about them often as disruptors, are really more intensifiers and amplifiers of processes that have been with us for a long time, at least since the 1800s.

But the thing that is hopeful, the thing that’s optimistic about the book and about the situation we find ourselves in, is that these tools make these problems so concrete and so visible that they really call us to a moral reckoning to do better.

Knowledge@Wharton: Is there a light at the end of the tunnel? Automation is expanding, so it doesn’t seem like those problems will decrease.

Eubanks: I talk in the book about the deep cultural and political changes we need to think through in order to get to better systems. But after working on this stuff for close to 10 years, I do have some big-picture things to consider that might fall in the category of solutions.

I really believe we need to stop using these systems to avoid some of the most pressing moral and political dilemmas of our time: racism and poverty. My great fear with these systems is that we’re actually using them as a kind of empathy override, meaning that we are struggling with decisions that are almost impossible for human beings to make.

In Los Angeles, there are 58,000 unhoused people and just a handful of resources. I don’t want to be the case worker making that decision. But I also don’t want us to outsource that decision to a machine so we can avoid its human costs. That’s one of my great fears about these systems.

Knowledge@Wharton: What’s a solution to that specific example? Most cities don’t have the personnel to handle such decisions.

Eubanks: I can talk specifically about housing, but one of the things I make really clear in the book is that all of the creators of these systems talked to me about them as a kind of triage. They say, “We have this incredible overwhelming need. We don’t have enough resources, so we have to use these systems to make these incredibly difficult decisions.”

“My great fear with these systems is we’re actually using them as a kind of empathy override.”

But that idea that we have to triage, that there’s not enough for everyone and we have to ration resources, that’s a political decision. That’s what I mean when I say we’re using these technologies to avoid important political decisions. In other places in the world, the conditions that unhoused folks in Los Angeles live in are considered a human rights violation. Here, we’re talking about it as if it’s a systems engineering problem.

Knowledge@Wharton: Tell us about your example in Indiana.

Eubanks: In Indiana in 2006, the governor signed a $1.4 billion contract with a collection of high-tech companies, including IBM and Affiliated Computer Services (ACS), to automate all of the state’s eligibility processes for its welfare programs. How they did that was by replacing local case workers with online forms and regional call centers. That meant folks no longer had a relationship with a local case worker who was responsible for their case or responsible for their family.

When they were thrown back on trying to fill out these really complicated and very long forms by themselves with very little help, most of them were denied benefits for failing to cooperate in establishing eligibility. That meant they had made a mistake somewhere on the form, or the state had made a mistake, or the private contractor had made a mistake. But they didn’t have any help to figure out what that mistake was.

So, millions of people suffered unnecessarily. The project was such a catastrophe that the governor canceled the contract three years into its 10-year term and was then sued by IBM for breach of contract. In the first round, IBM won and was awarded an additional $50 million on top of the $437 million they had already collected. I’m not sure we’ll ever know the full cost of that experiment. Obviously, the heaviest burden of that cost fell on poor and working families who weren’t able to get the resources they needed. And people died. It was really horrific. But there was also a larger cost to the taxpayers of the state.

It’s really important to understand that most people, when they are told they are ineligible for these programs, will not try again. Lindsey Kidwell is one of the folks I spoke to for the book. She got kicked off Medicaid during the automation and very courageously stood up for herself, managed to get her benefits back, went through a better period of time and got off benefits entirely.

When I was talking to her just before the book came out, she said she had been through a divorce recently and was probably eligible for services again, but there was no way she was going to apply. The whole process was so horrible that she will do anything in the world except apply again. So, we’re really blocking people from resources they are entitled to by law and that they need to keep themselves and their families safe and healthy. And that is not who we are as a country.

Knowledge@Wharton: You focus on the disenfranchised, but you said earlier that this will affect the middle class more as we go forward.

Eubanks: Absolutely. I think these tools get tested in places where there’s already an expectation that folks will be forced to trade one of their human rights, like their information or their privacy, for another human right, like food or housing. They’re tested in low-rights environments. But once they’re tested there, they’ll come for everyone.
