Artificial intelligence will dominate more and more of society. Will it be racist and sexist?

Brian Resnick explains research that our systems of artificial intelligence have learned to perpetuate racism and sexism just like people do.

“Many people think machines are not biased,” Princeton computer scientist Aylin Caliskan says. “But machines are trained on human data. And humans are biased.”

Computers learn how to be racist, sexist, and prejudiced in much the same way a child does, Caliskan explains: from their creators…

Nearly all new consumer technologies use machine learning in some way. Like Google Translate: No person instructed the software to learn how to translate Greek to French and then to English. It combed through countless reams of text and learned on its own…

And it’s increasingly clear these programs can develop biases and stereotypes without us noticing.

Last May, ProPublica published an investigation into a machine learning program that courts use to predict who is likely to commit another crime… The reporters found that the software rated black people at a higher risk than whites.

“Scores like this — known as risk assessments — are increasingly common in courtrooms across the nation,” ProPublica explained. “They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts … to even more fundamental decisions about defendants’ freedom.”

The program learned about who is most likely to end up in jail from real-world incarceration data. And historically, the real-world criminal justice system has been unfair to black Americans.

This story reveals a deep irony about machine learning. The appeal of these systems is they can make impartial decisions, free of human bias… But what happened was that machine learning programs perpetuated our biases on a large scale. So instead of a judge being prejudiced against African Americans, it was a robot…

Caliskan has seen bias creep into machine learning in often subtle ways — for instance, in Google Translate.

Turkish, one of her native languages, has no gender pronouns. But when she uses Google Translate on Turkish phrases, it “always ends up as ‘he’s a doctor’ in a gendered language.” The Turkish sentence didn’t say whether the doctor was male or female. The computer just assumed if you’re talking about a doctor, it’s a man…

Presumably, if you were talking about a nurse, the computer would assume the nurse is a woman. Both of these stereotypes are harmful when they constrain people’s thinking and perpetuate inequities. Resnick continues:

Recently, Caliskan and colleagues published a paper in Science finding that as a computer teaches itself English, it becomes prejudiced against black Americans and women.

Basically, they used a common machine learning program to crawl through the internet, look at 840 billion words, and teach itself the definitions of those words. The program accomplishes this by looking for how often certain words appear in the same sentence. Take the word “bottle.” The computer begins to understand what the word means by noticing it occurs more frequently alongside the word “container,” and also near words that connote liquids like “water” or “milk.”…

How frequently two words appear together is the first clue we get to deciphering their meaning.
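To make that idea concrete, here is a minimal Python sketch of co-occurrence counting. This is only a toy illustration with made-up sentences and a made-up window size, not the pipeline the researchers actually used; it just shows the raw signal, how often words appear near each other, that word-embedding methods compress into vectors.

```python
from collections import Counter, defaultdict

def cooccurrence_counts(sentences, window=4):
    """Count how often each pair of words appears within `window` words of each other."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    counts[word][tokens[j]] += 1
    return counts

# A tiny invented corpus, purely for demonstration.
corpus = [
    "a bottle is a container",
    "she drank water from the bottle",
    "pour the milk into a bottle",
]

counts = cooccurrence_counts(corpus)
print(counts["bottle"])  # "container", "water", and "milk" all show up as neighbors of "bottle"
```

Real systems do this over billions of sentences and then boil the counts down into a compact vector for each word, so that words used in similar contexts end up with similar vectors.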

Once the computer amassed its vocabulary, Caliskan ran it through a version of the implicit association test.

In humans, the IAT is meant to uncover subtle biases in the brain by seeing how long it takes people to associate words. A person might quickly connect the words “male” and “engineer.” But if a person lags on associating “woman” and “engineer,” it’s a demonstration that those two terms are not closely associated in the mind, implying bias. (There are some reliability issues with the IAT in humans, which you can read about here.)

Here, instead of looking at lag time, Caliskan looked at how closely the computer thought two terms were related. She found that African-American names in the program were less associated with the word “pleasant” than white names. And female names were more associated with words relating to family than male names.
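Here is a minimal sketch of how “how closely the computer thought two terms were related” can be measured: cosine similarity between word vectors, which is the core idea behind the association test in the paper. The two-dimensional vectors, placeholder names, and attribute words below are invented for illustration; the real test uses the high-dimensional embeddings learned from web text and lists of actual names and attribute words.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point in the same direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, pleasant, unpleasant, vectors):
    """Mean similarity to the 'pleasant' words minus mean similarity to the 'unpleasant' ones."""
    v = vectors[word]
    return (np.mean([cosine(v, vectors[p]) for p in pleasant])
            - np.mean([cosine(v, vectors[u]) for u in unpleasant]))

# Hypothetical 2-d vectors, purely for demonstration; "name_a" and "name_b"
# stand in for names drawn from the two groups compared in the study.
vectors = {
    "name_a": np.array([0.9, 0.2]),
    "name_b": np.array([0.2, 0.9]),
    "joy":    np.array([1.0, 0.1]),
    "friend": np.array([0.8, 0.3]),
    "agony":  np.array([0.1, 1.0]),
    "filth":  np.array([0.2, 0.8]),
}

for name in ("name_a", "name_b"):
    score = association(name, ["joy", "friend"], ["agony", "filth"], vectors)
    print(name, round(score, 2))  # a higher score means the name sits closer to the pleasant words
```

A systematic gap in these scores between the two groups of names is exactly the kind of bias the study measured in embeddings trained on real web text.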

…Like a child, a computer builds its vocabulary through how often terms appear together. On the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. That’s not because African Americans are unpleasant. It’s because people on the internet say awful things. And it leaves an impression on our young AI…

Increasingly, Caliskan says, job recruiters are relying on machine learning programs to take a first pass at résumés. And if left unchecked, the programs can learn and act upon gender stereotypes in their decision-making.

“Let’s say a man is applying for a nurse position; he might be found less fit for that position if the machine is just making its own decisions,” she says. “And this might be the same for a woman applying for a software developer or programmer position. … Almost all of these programs are not open source, and we’re not able to see what’s exactly going on. So we have a big responsibility about trying to uncover if they are being unfair or biased.”

…Already AI is making its way into the health care system, helping doctors find the right course of treatment for their patients. …But health data, too, is filled with historical bias. It’s long been known that women get surgery at lower rates than men. (One reason is that women, as primary caregivers, have fewer people to take care of them post-surgery.)

Might AI then recommend surgery at a lower rate for women?

…the people who use these programs should be aware of these problems, and not take for granted that a computer can produce a less biased result than a human.

…AI learns about how the world has been. …It doesn’t know how the world ought to be. That’s up to humans to decide.

Similarly, when Microsoft unleashed an AI chatbot onto Twitter, within 24 hours it had absorbed much of the ugly misogynistic, racist side of Twitter, and Microsoft had to manually delete a raft of offensive tweets to limit the embarrassment. Facebook, which runs over 100,000 AI bots, has had problems with its sales bots learning to lie in order to close sales. In most respects the bots were successful: most of the humans interacting with them thought they were talking to real people.
