8 August 2018

The Challenge of Bias in AI

By Matthew Bey

One of the most prominent topics at South by Southwest (SXSW) Interactive this year was artificial intelligence (AI), which has seen an explosion of interest over the last five years. A good AI application requires sifting through copious amounts of data so that the platform can train itself and learn to recognize patterns. The challenge, and one that several panels at SXSW focused on, is bias in those data sets. When data sets are assembled by humans, AI will mirror the biases of its creators. In 2015, for example, Google Photos auto-tagged several black people as gorillas because it lacked a large enough image database for proper tagging. Other examples illuminate gender biases in machine learning.
Legal Implications of Discriminatory Algorithms

Bias in AI is increasingly a legal matter. AI software is being used to develop credit scores, process loan applications and provide other similar services. A model trained on an unstructured data set can use a person's ethnicity or gender, either provided directly or inferred from other variables, to decide whether or not to approve a loan or another financial instrument, which is illegal in many jurisdictions. One AI algorithm built to assess whether someone was at risk of committing a second crime found that a key variable was the person's ethnicity, and its scores were then used to bolster arguments for detaining certain people, specifically black people, ahead of trial. The demographic makeup of the tech industry, which is primarily white and Asian men, will also continue to play a role, as these groups design and develop AI technology.
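
To make the proxy problem concrete, here is a minimal sketch with synthetic data and hypothetical variable names (it is not any lender's actual system): even when ethnicity is never supplied as an input, a correlated stand-in such as a neighborhood code can let a model reproduce a discriminatory pattern baked into its training labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (e.g., a demographic group); it is never
# handed to the model as a feature.
group = rng.integers(0, 2, size=n)

# A "neutral" feature that happens to correlate with group membership,
# e.g. a neighborhood code shaped by historical segregation.
neighborhood = group + rng.normal(0, 0.3, size=n)

# A genuinely relevant feature, e.g. income.
income = rng.normal(50, 10, size=n)

# Historical approval decisions that were partly biased against one group.
logit = 0.05 * (income - 50) - 1.0 * group
approved = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train only on the "neutral" features.
X = np.column_stack([neighborhood, income])
model = LogisticRegression().fit(X, approved)

# The model reproduces the historical gap without ever seeing `group`.
pred = model.predict(X)
print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())
```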

Because AI algorithms are increasingly ubiquitous, their spread may further entrench and institutionalize many of the gender, ethnic and political biases already pervasive in modern society, making those biases harder to eradicate in the future. For all of these reasons, cultural bias in AI outcomes is a problem worth the time, money and effort of addressing.

But it will not be easy.

AI algorithms are designed to predict things based on data; that is, after all, the point of using them in the first place. As analysts here, we strive for accuracy often to a fault; some of our work can even become difficult for non-experts to understand, a tendency sometimes called the 'accuracy fetish'. In some cases there is a reason why AI algorithms spit out the results that they do: if accuracy is the ultimate goal, then sometimes leaning on cultural biases allows an AI to better predict an outcome.
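
To see why accuracy and fairness can pull in opposite directions, consider a deliberately oversimplified sketch with made-up numbers: if historical repayment rates differ between two groups, the most accurate blanket decision for each group, judged purely against the historical data, can end up denying one group entirely.

```python
# Hypothetical repayment rates observed in past lending data (made up).
repay_rate = {"group_a": 0.80, "group_b": 0.45}

for group, p in repay_rate.items():
    acc_if_approve_all = p        # correct whenever the applicant repays
    acc_if_deny_all = 1 - p       # correct whenever the applicant defaults
    decision = "approve" if acc_if_approve_all >= acc_if_deny_all else "deny"
    print(f"{group}: accuracy-maximizing blanket decision = {decision} "
          f"(accuracy {max(acc_if_approve_all, acc_if_deny_all):.2f})")

# group_a -> approve (0.80); group_b -> deny (0.55). Chasing accuracy alone
# pushes the system toward denying everyone in the group with the lower
# historical repayment rate.
```
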
Correcting for Bias in AI

The technology community is now (finally) exploring ways to address this predicament and adjust for biases so that they remain limited rather than egregious. Google's GlassBox program tries to manually constrain certain aspects of the machine learning behind an AI algorithm without sacrificing accuracy. DARPA is even helping to fund explainable AI, an initiative trying to figure out how algorithms can explain their own outcomes. And there is a growing body of academic research trying to address these challenges from a modeling perspective.
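
The intuition behind those efforts can be shown with a very crude stand-in for explainability tooling; this is neither GlassBox nor DARPA's program, just a toy on synthetic data: fit a transparent model and inspect which inputs carry the weight in its decisions, so that a suspect dependency can be spotted and then constrained.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
income = rng.normal(50, 10, size=n)
group = rng.integers(0, 2, size=n)  # protected attribute, included deliberately

# Synthetic historical labels that partly depend on the protected attribute.
p = 1 / (1 + np.exp(-(0.05 * (income - 50) - group)))
label = (rng.random(n) < p).astype(int)

model = LogisticRegression().fit(np.column_stack([income, group]), label)
for name, coef in zip(["income", "group"], model.coef_[0]):
    print(f"{name:>6}: weight {coef:+.2f}")

# A large negative weight on `group` is the kind of red flag a reviewer wants
# surfaced: the model is leaning on the protected attribute to make its call.
```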

All of this is good, but the fact remains that any introduction of human influence introduces bias of its own. It's a catch-22: some have argued that when a company chooses to shape its results in a certain way, it may be inserting even more bias. For example, when Google addressed the problem of tagging black people as gorillas, it simply removed all of the gorillas from its image-labeling data set. Three years later, Google Photos cannot tag pictures of gorillas (or chimpanzees and certain other primates), losing some accuracy.
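
The gorilla fix illustrates a general pattern that is easy to reproduce: delete a class from the training data and the model can never predict it again, so every example of that class is guaranteed to be mislabeled. Here is a toy sketch using a standard demo data set; it has nothing to do with Google's actual pipeline.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Pretend class 2 is the label we no longer want the model to emit,
# and simply drop it from the training data.
keep = y != 2
model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

pred = model.predict(X)
print("accuracy on the removed class:", (pred[y == 2] == 2).mean())  # always 0.0
print("overall accuracy:", (pred == y).mean())  # capped well below a full model
```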

And that's what's at stake in finding ways to remove certain biases while keeping accuracy high. There's always value in being right more often than being wrong. But on broader societal questions, a better balance must be struck between the need to be right and the need to be fair.
