Ethics & AI: Can AI be racist?

Artificial Intelligence has enormous potential to transform humanity’s technological capabilities. But, as the familiar saying goes, with great power comes great responsibility. As AI amasses more power, we must take more responsibility for the potential dangers of this technology. Much attention is already being paid to many of these dangers, which range from simply rude behavior to Skynet-esque fears of an all-powerful AI.

We will be examining many of these dangers in a series of blog posts on Ethics and AI. If you’d like to be notified about our next post, sign up for our newsletter here →

In this post, we’d like to examine a danger of AI that has gone largely overlooked: the tendency for AI to adopt harmful human biases, specifically racist biases.

SUBJECTIVE BIASES IN AI

There is a common misconception that AI, and machines in general, tend to improve on subjective human bias. The thought is that what machines lack in empathy and humanness, they make up for in precise calculation and computing power. This might lead many to assume that AI would be free of the subjective biases that plague actual human behavior.

This, however, is not the case. Many AI systems attempt to emulate human behavior and are thus prone to adopting the subjective biases of their creators, largely because they are trained on data about human interaction, data which reflects those very biases. Here at the Peace Innovation Lab we are deeply concerned with biases and prejudices of many stripes, so bias in AI is of great concern to us.

We can see this in a variety of recent AI experiments. Google’s image recognition service, Cloud Vision, identified a picture of a white-colored hand holding a thermometer gun as containing a “hand” and a “monocular.” The program identified the same picture with a black-colored hand as containing a “hand” and a “gun.” Similarly, Amazon’s facial recognition software, Rekognition, has been shown to be significantly better at identifying white faces than the faces of people of color, among other racial biases. Similar biases have been found in broader surveys as well: a study by USC and UCLA researchers found that two enormous AI language systems, OpenAI’s GPT-2 model and Google’s recurrent neural network, both contained biases across many demographic lines, including race.

Racism leaking into Google’s image recognition AI, Cloud Vision


Examples like these are deeply troubling, because even though the biases might seem subtle, they can have drastic effects. In the case of Amazon’s Rekognition, software used by many government and police bodies, if the system identifies people of color less accurately than white people, then people of color are more likely to be falsely matched and falsely accused. In other words, the negative effects of Rekognition’s imperfections fall disproportionately on people of color.
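To make this concrete, here is a minimal sketch of how one might audit a face-matching system for exactly this kind of disparity. Everything in it is hypothetical: the data is synthetic and the error rates are invented for illustration. The point is simply that a single overall accuracy number can hide a large gap between groups, and only a disaggregated evaluation reveals it.

```python
import numpy as np

# Hypothetical audit: synthetic match decisions for 1,000 probe images,
# none of which should match anyone in the database (y_true is all False),
# so every reported match is a false match. The rates below are invented.
rng = np.random.default_rng(0)
groups = np.array(["white"] * 500 + ["poc"] * 500)
y_true = np.zeros(1000, dtype=bool)
y_pred = np.concatenate([
    rng.random(500) < 0.01,   # ~1% false-match rate on white faces
    rng.random(500) < 0.05,   # ~5% false-match rate on faces of color
])

overall = np.mean(y_pred & ~y_true)
print(f"overall false-match rate: {overall:.3f}")

# Disaggregating by group reveals the disparity the overall number hides.
for g in np.unique(groups):
    mask = groups == g
    rate = np.mean(y_pred[mask] & ~y_true[mask])
    print(f"false-match rate ({g}): {rate:.3f}")
```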

THIS BIAS IS FAMILIAR

We can only begin to tackle these racist biases by clearly understanding how AI, and humans, come to them in the first place. As previously stated, AI learns by extracting patterns from training data. An AI attempting to model some aspect of human behavior will therefore absorb aspects of our social codes, and the implicit biases that accompany them.
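To see how this happens without anyone programming the bias in, consider a toy model trained on synthetic, historically biased data. The “zip_code” proxy feature and all the numbers below are invented for illustration; the point is that a model can reproduce a bias it was never explicitly given, simply by latching onto features that correlate with a protected attribute.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely synthetic data. "zip_code" is a hypothetical proxy feature
# that correlates strongly with a protected group; "qualified" is the
# genuinely relevant signal. The model is never shown "group" at all.
rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                                # protected attribute
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)   # 90% proxy for group
qualified = rng.random(n) < 0.5

# Biased historical labels: qualified applicants from group 1 were
# approved only 60% as often as those from group 0.
approved = qualified & ((group == 0) | (rng.random(n) < 0.6))

X = np.column_stack([qualified, zip_code]).astype(float)
model = LogisticRegression().fit(X, approved)

# The trained model disadvantages group 1 through the proxy alone.
for z in (0, 1):
    p = model.predict_proba([[1.0, float(z)]])[0, 1]
    print(f"P(approved | qualified, zip_code={z}) = {p:.2f}")
```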

Over time, humans have developed social codes in a variety of ways, one of the oldest being religious code. Although any given religious code was originally written one way, different interpretations created diverging religious branches. The second-order effect of these deviations was the set of behaviors people built on top of each interpretation. Just as our institutions have woven themselves out of religion into the complex foundations of our societies, so do humans mirror the second-order effects of those institutions into new social codes that reflect our systemic, subjective biases. Consequently, the data AIs read to learn these social codes contains the very same (un)intentional racial biases, and the resulting algorithms inform further machine learning: biased foundations upon biased foundations.

OVERCOMING MACHINE BIAS

Overall, we urge programmers to recognize that subjective biases leak into AI training data. These biases are rooted in our social environments, in how we have been conditioned to act and think in our societies. With the heightened power of AI comes a heightened responsibility, because programmers are, in effect, writing the next social code. To address both sides of the problem, programmers must act as mediators between the technology we are building and its broader effects on society.

One organization taking leaps in this field is the Algorithmic Justice League (AJL), a group raising awareness of racial injustice in AI. AJL works to improve diversity in AI training data and to counter the negative consequences of biased data sets and unregulated AI power, particularly for marginalized communities. To create just outcomes with AI, AJL argues, inclusive code depends on diversity among programmers, skepticism about the fairness of programmers’ code, and an appreciation for why coding matters: why social change and equality should be priorities in coding.

Racial bias in AI is also, in itself, a complex engineering problem, and the hope is that we can reconstruct data sets and AIs to account for it. One possible approach is to build a Generative Adversarial Network (GAN) that identifies accidental and false correlations (bias) by comparing the real training data with a purely random “fake” data set. Although this method is not perfect, it does offer a promising jumping-off point for dealing with racial biases in AI.
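As a rough illustration of that idea (simplified from a full GAN down to a single discriminator, with entirely synthetic data), one might train a classifier to distinguish real (features, label) pairs from “fake” pairs whose labels have been shuffled at random. Shuffling destroys every feature–label correlation, so whatever the discriminator latches onto is a correlation in the real data, genuine or spurious, that merits human review:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely synthetic "real" data: one genuine signal (shape) and one
# spurious correlate (skin_tone) both influence the label.
rng = np.random.default_rng(2)
n = 5000
skin_tone = rng.integers(0, 2, n).astype(float)
shape = rng.random(n)
label = ((shape + 0.4 * skin_tone + rng.normal(0, 0.2, n)) > 0.9).astype(float)
feats = np.column_stack([skin_tone, shape])

def encode(features, labels):
    # Feature-label products let a linear discriminator see the
    # correlation between each feature and the label, not just marginals.
    return np.column_stack([features, labels, features * labels[:, None]])

# "Fake" data: identical marginals, but shuffled labels break every
# feature-label correlation.
fake_labels = rng.permutation(label)

X = np.vstack([encode(feats, label), encode(feats, fake_labels)])
y = np.concatenate([np.ones(n), np.zeros(n)])    # 1 = real, 0 = fake

disc = LogisticRegression(max_iter=1000).fit(X, y)
print("real-vs-fake accuracy:", round(disc.score(X, y), 3))

# Large weights on the product terms flag features that are correlated
# with the label in the real data; spurious ones (skin_tone) merit review.
print("product-term weights [skin_tone*label, shape*label]:", disc.coef_[0, 3:])
```

In a full GAN, a generator network would produce the fake samples and be trained jointly against the discriminator; the label-shuffled data set here stands in for that generator to keep the sketch short.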

We simply cannot avoid a discussion of subjective bias in AI. As AI gains the power to make more and more impactful decisions, we must understand and account for the prejudiced, and specifically racist, factors in those decisions. We can do this in broad strokes, in how we use technology in judicial and law enforcement settings. But we can also do it in smaller strokes, in how we interact with increasingly intelligent technology and question whether what seems objective and calculated is in fact subjective, and potentially racist. The rest of this series will continue to examine ethically relevant questions surrounding AI.

Subscribe to our newsletter to stay attuned to these questions as we examine how we, as humans, can use AI technology ethically and peacefully.


PJ Frantz