Is Artificial Intelligence Racist?

Racial and Gender Bias in AI

Maurizio Santamicone
6 min read · Apr 2, 2019

At a Martin Luther King Jr. Day event in New York City, outspoken US Congresswoman Alexandria Ocasio-Cortez recently said that AI can be biased. She was, of course, derided the following day by conservative commentators.

But she is right.

Joy Buolamwini, an MIT scientist and founder of the Algorithmic Justice League, published research that uncovered large gender and racial biases in AI systems sold by tech giants like IBM, Microsoft, and Amazon. When tasked with guessing the gender of a face, all of the companies’ systems performed substantially better on male faces than on female faces. Error rates were no more than 1% for lighter-skinned men, whilst for darker-skinned women they soared to 35%. When asked to classify the faces of Oprah Winfrey, Michelle Obama, and Serena Williams, the systems failed.

License: Joy Buolamwini — MIT Media Lab Press Kit

These technologies are now penetrating every layer of society. Decisions like determining who is hired, fired, or granted a loan, or how long an individual spends in prison, have traditionally been made by humans. Today, they are increasingly made by algorithms, or at least algorithms are part of the decision-making process for such tasks.

Machine learning algorithms sift through millions of pieces of data and make correlations and predictions about the world. Their appeal is huge: machines can use hard data to make decisions that are sometimes more accurate than a human’s.

License: Joy Buolamwini — MIT Media Lab Press Kit

Computer vision is used in policing, e.g. to identify suspects in a crowd. Palantir, a company founded by tech billionaire and Trump donor Peter Thiel, has been running its predictive policing technology in New Orleans for the last six years: the program was so secretive that even council members knew nothing about it. Amazon has been selling police departments a real-time face recognition system that falsely matched 28 members of Congress with mugshots. The problem with these systems is that although they are effective on general tasks, they usually return a high number of false positives (in statistics, a false positive is a Type I error, like an HIV test coming back positive for an HIV-negative patient). Combine that with racial bias, and we get a racist policing system, regardless of who is using it.
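To make the scale of the false-positive problem concrete, here is a minimal sketch with made-up numbers (not taken from any real evaluation): even a modest false positive rate, applied to a large crowd, flags thousands of innocent people.

```python
# Minimal sketch with hypothetical numbers: even a "small" false positive
# rate produces many wrongful matches once a system scans a large crowd.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """Type I error rate: the share of non-matches wrongly flagged as matches."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical watchlist test: 990 innocent people scanned, 50 wrongly matched.
fpr = false_positive_rate(false_positives=50, true_negatives=940)
print(f"False positive rate: {fpr:.1%}")                      # ~5.1%

# At that rate, scanning a crowd of 100,000 innocent people would be expected
# to flag thousands of them.
print(f"Expected wrongful matches: {int(fpr * 100_000):,}")   # ~5,050
```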

The influence of bias is present in plenty of other types of data as well. For instance, a straightforward application of machine learning where computers outperform humans is loan approvals. Financial institutions leverage historical data, training their algorithms on millions of records so that they capture the patterns that best characterize mortgage applicants.

The problem is that algorithms can learn too much. Suppose Thokozane (TK) is a brilliant graduate student from a poor Soweto neighborhood who has been working regularly for the past few years, has a clean credit record, and has finally decided to apply for a loan to buy his first property. According to all the criteria traditionally used by banks to evaluate someone’s creditworthiness, his application should be approved. The algorithm, however, has another plan. It has learnt in the meantime that applicants from TK’s neighborhood have, in the last few years, had most of their applications rejected because of poor credit records, insufficient disposable income and so on. So TK’s application is surprisingly rejected. In other words, the algorithm has learnt an implicit bias.
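A minimal sketch of how this can happen, using synthetic data and made-up feature names (nothing here comes from a real lender): the model never sees race, yet a neighborhood flag acts as a proxy for it because historical rejections cluster by area.

```python
# Sketch with synthetic data: a credit model that never sees race can still
# penalize applicants from a stigmatized neighborhood, because the neighborhood
# flag is correlated with past rejections in the historical training data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5_000

# Made-up applicant features: income, clean credit record, neighborhood flag.
income = rng.normal(30_000, 8_000, n)
clean_record = rng.integers(0, 2, n)
from_poor_area = rng.integers(0, 2, n)

# Historical labels encode past bias: applications from the poor area were
# rejected far more often, regardless of individual merit.
p_approve = 0.8 * clean_record * (income > 25_000) * (1 - 0.7 * from_poor_area)
approved = rng.random(n) < p_approve

X = np.column_stack([income, clean_record, from_poor_area])
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, approved)

# TK: solid income, clean credit record, but from the stigmatized neighborhood.
tk = np.array([[32_000, 1, 1]])
print("TK approved?", bool(model.predict(tk)[0]))
```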

TK is of course reasonably upset about this, and decides to ask the bank for an explanation, as he alleges that the algorithm has racially discriminated against him. The bank of course replies that this is not possible, as the algorithms have been intentionally blinded to the race of the applicants (an optimistic assumption). The problem, however, is that while some algorithms like decision trees may enable an auditor to discover whether address information was used to penalize applicants who were born or previously resided in predominantly poverty-stricken areas, others, like deep learning models, are much more sophisticated and tend to be a “black box” to external inspection, and it may prove almost impossible to understand why, or even how, a certain decision has been taken.
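Continuing the sketch above (still synthetic data), a decision tree at least exposes what it learned: an auditor can read off how heavily the neighborhood flag drives its decisions. A deep neural network offers no comparably direct readout.

```python
# Continuing the sketch above: the tree's learned rules are inspectable, so an
# auditor can see how much weight the neighborhood flag carries.
from sklearn.tree import export_text

feature_names = ["income", "clean_record", "from_poor_area"]
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name:>15}: {importance:.2f}")

# The full decision logic can even be printed as nested if/else rules,
# something a deep neural network does not offer.
print(export_text(model, feature_names=feature_names))
```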

In 2017 a paper published in Science found that as a computer teaches itself English, it becomes prejudiced against black Americans and women. Using data scraped from the web known as the “Common Crawl”, a corpus containing approximately 840 billion words, it shows that machines can learn word associations from written texts and that these associations mirror those learned by humans, such as pleasantness and flowers or unpleasantness and insects. So far so good. But it also shows that machine learning absorbs stereotyped biases as easily as any other association: for example, between female names and family, or male names and career. Worse still, the researchers found that names associated with being European American were significantly more easily associated with pleasant than unpleasant terms, compared to some African American names.

A computer builds its vocabulary using frequency data, or how often terms appear together. So it found that on the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. Is that because African Americans are unpleasant? Of course not. It’s because people on the internet write and say horrible things.
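The effect is easy to reproduce with off-the-shelf word vectors. The sketch below is a simplified stand-in for the paper’s association test, not its exact WEAT statistic: it uses a smaller pretrained GloVe model (the paper used vectors trained on the full Common Crawl), abbreviated word lists, and a handful of names for illustration, scoring each name by its average similarity to pleasant versus unpleasant words.

```python
# Sketch: score names by how close their word vectors sit to pleasant vs.
# unpleasant words. Simplified stand-in for the WEAT test in the Science paper;
# the model and word lists here are abbreviated and illustrative.
import gensim.downloader as api
import numpy as np

vectors = api.load("glove-wiki-gigaword-100")  # small pretrained GloVe model

pleasant = ["love", "peace", "wonderful", "laughter", "happy"]
unpleasant = ["abuse", "crash", "filth", "murder", "sickness"]

def association(word: str) -> float:
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = np.mean([vectors.similarity(word, w) for w in pleasant])
    neg = np.mean([vectors.similarity(word, w) for w in unpleasant])
    return pos - neg

# A few first names of the kind used as stimuli in the original study.
for name in ["emily", "greg", "lakisha", "jamal"]:
    if name in vectors:  # skip names missing from this smaller vocabulary
        print(f"{name:>8}: {association(name):+.3f}")
```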

The bias this time is not so implicit, is it?

So what should be done?

Rachel Thomas, co-founder of fast.ai, a deep learning lab based in San Francisco whose motto is “making neural nets uncool”, advocates for a more open and inclusive AI that embraces people from different backgrounds and communities, people who don’t “fit in” the current system, because they have a better understanding of how tech can be weaponized against the most vulnerable: “We can’t afford to have a tech that is run by an exclusive and homogenous group creating technology that impacts us all. We need more experts about people, like human psychology, behavior and history. AI needs more unlikely people.”

Unsurprisingly, Joy Buolamwini has a similar opinion: “Data centric technologies are vulnerable to bias and abuse. As a result, we must demand more transparency and accountability. We have entered the age of automation overconfident yet underprepared. If we fail to make an ethical and inclusive AI, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.”

So what is “mainstream AI” doing about it?

Let’s take a peek at Silicon Valley.

Stanford University recently launched, amid great fanfare, the Institute for Human-Centered Artificial Intelligence, or HAI, led by Fei-Fei Li, one of the most prolific researchers in the field and former Chief Scientist of AI/ML at Google Cloud. HAI will work on topics such as how to ensure algorithms make fair decisions in government or finance, and what new regulations may be required for AI applications. “AI started as a computer science discipline, but now we are in a new chapter. This technology has the potential to do so many good things, but there are also risks and pitfalls. We have to act and make sure it is human benevolent.”

HAI has 121 faculty members listed. Not a single one of them is black.

AI desperately needs diversity. Diversity in AI will help reduce biases, and government agencies, corporations and institutions that implement data-centric applications need to integrate de-biasing efforts into their data pipelines; they mustn’t just hire lawyers to make their systems compliant, they need to include many different thinkers from psychology, the social sciences and philosophy if they want to achieve significantly better results.
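What might a de-biasing step in a data pipeline look like? One common starting point, sketched below with hypothetical column names and the widely used four-fifths threshold (an illustration, not a complete fairness audit), is simply checking whether a model’s approval rates differ too much across groups before it ships.

```python
# Sketch of a minimal fairness gate in a model pipeline: compare approval
# rates across groups and flag the model if the disparity is too large.
# Column names ("group", "approved") and the 0.8 threshold (the common
# "four-fifths rule") are illustrative choices, not a full audit.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group approval rate to the highest (1.0 = parity)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical model outputs for two demographic groups.
predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact(predictions, "group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Fails the four-fifths rule: review the training data and model")
```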

