
Brian Resnick explains research showing that our artificial intelligence systems have learned to perpetuate racism and sexism just as people do.
“Many people think machines are not biased,” Princeton computer scientist Aylin Caliskan says. “But machines are trained on human data. And humans are biased.”
Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does, Caliskan explains: from their creators…
Nearly all new consumer technologies use machine learning in some way. Like Google Translate: No person instructed the software to learn how to translate Greek to French and then to English. It combed through countless reams of text and learned on its own…
And it’s increasingly clear these programs can develop biases and stereotypes without us noticing.
Last May, ProPublica published an investigation on a machine learning program that courts use to predict who is likely to commit another crime… The reporters found that the software rated black people at a higher risk than whites.
“Scores like this — known as risk assessments — are increasingly common in courtrooms across the nation,” ProPublica explained. “They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts … to even more fundamental decisions about defendants’ freedom.”
The program learned about who is most likely to end up in jail from real-world incarceration data. And historically, the real-world criminal justice system has been unfair to black Americans.
This story reveals a deep irony about machine learning. The appeal of these systems is they can make impartial decisions, free of human bias… But what happened was that machine learning programs perpetuated our biases on a large scale. So instead of a judge being prejudiced against African Americans, it was a robot…
Caliskan has seen bias creep into machine learning in often subtle ways — for instance, in Google Translate.
Turkish, one of her native languages, has no gender pronouns. But when she uses Google Translate on Turkish phrases, it “always ends up as ‘he’s a doctor’ in a gendered language.” The Turkish sentence didn’t say whether the doctor was male or female. The computer just assumed if you’re talking about a doctor, it’s a man…
Computers learn gender biases from the big data that humans create. So if you talk to a computer about a nurse, it will assume the nurse is a woman. Unfortunately, these are the kinds of stereotypes that perpetuate inequality by constraining people’s thinking. Resnick continues:
Recently, Caliskan and colleagues published a paper in Science finding that as a computer teaches itself English, it becomes prejudiced against black Americans and women.
Basically, they used a common machine learning program to crawl through the internet, look at 840 billion words, and teach itself the definitions of those words. The program accomplishes this by looking for how often certain words appear in the same sentence. Take the word “bottle.” The computer begins to understand what the word means by noticing it occurs more frequently alongside the word “container,” and also near words that connote liquids like “water” or “milk.”…
How frequently two words appear together is the first clue we get to deciphering their meaning.
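To make the co-occurrence idea concrete, here is a toy Python sketch. It is not the program the study used (that was a large pretrained word-embedding model trained on web-scale text); it only shows the core mechanism: count which words share a sentence, then compare those count vectors. The four-sentence corpus and the word choices are invented purely for illustration.

```python
# Toy illustration of learning word meaning from co-occurrence counts.
# Words that appear in the same sentences end up with similar vectors.
from collections import Counter, defaultdict
from itertools import combinations
import math

corpus = [
    "the bottle is a container for water",
    "she filled the bottle with milk",
    "a bottle is a container with a narrow neck",
    "pour the water into the container",
]

cooc = defaultdict(Counter)  # cooc[w1][w2] = times w1 and w2 share a sentence
for sentence in corpus:
    for w1, w2 in combinations(sentence.split(), 2):
        cooc[w1][w2] += 1
        cooc[w2][w1] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# On this tiny corpus, "bottle" already sits closer to "container"
# than to a word it never shares a sentence with, such as "pour".
print(cosine(cooc["bottle"], cooc["container"]))  # higher
print(cosine(cooc["bottle"], cooc["pour"]))       # lower
```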
Once the computer amassed its vocabulary, Caliskan ran it through a version of the implicit association test.
In humans, the IAT is meant to uncover subtle biases in the brain by seeing how long it takes people to associate words. A person might quickly connect the words “male” and “engineer.” But if a person lags on associating “woman” and “engineer,” it’s a demonstration that those two terms are not closely associated in the mind, implying bias. (There are some reliability issues with the IAT in humans, which you can read about here.)
Here, instead of looking at the lag time, Caliskan looked at how closely the computer thought two terms were related. She found that African-American names in the program were less associated with the word “pleasant” than white names. And female names were more associated with words relating to family than male names.
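As a rough sketch of what that embedding-space version of the IAT computes (the published test is the Word Embedding Association Test, or WEAT): instead of reaction times, you take the difference in average cosine similarity between a target word and two sets of attribute words. The snippet below uses the paper’s innocuous flowers-versus-insects baseline and made-up three-dimensional vectors; a real run would load pretrained embeddings such as GloVe and the full published word lists.

```python
# Hedged sketch of a WEAT-style association score: mean cosine similarity
# to "pleasant" attribute words minus mean similarity to "unpleasant" ones.
# The embeddings below are invented 3-d stand-ins, not real GloVe vectors.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, pleasant, unpleasant, emb):
    """Positive score = the word leans 'pleasant'; negative = 'unpleasant'."""
    return (np.mean([cosine(emb[word], emb[a]) for a in pleasant]) -
            np.mean([cosine(emb[word], emb[b]) for b in unpleasant]))

emb = {  # hypothetical vectors chosen only to make the example readable
    "flower":   np.array([0.9, 0.1, 0.0]),
    "insect":   np.array([0.1, 0.9, 0.0]),
    "pleasant": np.array([0.8, 0.2, 0.1]),
    "awful":    np.array([0.2, 0.8, 0.1]),
}

print(association("flower", ["pleasant"], ["awful"], emb))  # positive
print(association("insect", ["pleasant"], ["awful"], emb))  # negative
```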
…Like a child, a computer builds its vocabulary through how often terms appear together. On the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. That’s not because African Americans are unpleasant. It’s because people on the internet say awful things. And it leaves an impression on our young AI…
Increasingly, Caliskan says, job recruiters are relying on machine learning programs to take a first pass at résumés. And if left unchecked, the programs can learn and act upon gender stereotypes in their decision-making.
“Let’s say a man is applying for a nurse position; he might be found less fit for that position if the machine is just making its own decisions,” she says. “And this might be the same for a woman applying for a software developer or programmer position. … Almost all of these programs are not open source, and we’re not able to see what’s exactly going on. So we have a big responsibility about trying to uncover if they are being unfair or biased.”
…Already AI is making its way into the health care system, helping doctors find the right course of treatment for their patients. …But health data, too, is filled with historical bias. It’s long been known that women get surgery at lower rates than men. (One reason is that women, as primary caregivers, have fewer people to take care of them post-surgery.)
Might AI then recommend surgery at a lower rate for women?
…the people who use these programs should be aware of these problems, and not take for granted that a computer can produce a less biased result than a human.
…AI learns about how the world has been. …It doesn’t know how the world ought to be. That’s up to humans to decide.
Amazon created a hiring algorithm which learned sexist bias.
The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars — much like shoppers rate products on Amazon
So the algorithm rated people just like consumer products. What could go wrong?
…machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.
They assumed a computer program would be fairer; after all, how could a computer be sexist?
That is because Amazon’s computer models were trained …by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry…
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges…
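As a minimal sketch of the failure mode Reuters describes: train an off-the-shelf text classifier on historical hiring outcomes that skew male, and tokens that mostly appear on women’s résumés pick up negative weight. The four résumés and outcomes below are invented for illustration; this is not Amazon’s system or its data.

```python
# Toy demonstration: a classifier trained on biased historical outcomes
# learns a negative weight for the token "women", encoding the bias
# of the past hiring process rather than candidate quality.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain, software engineering intern",           # hired
    "software engineering intern, hackathon winner",             # hired
    "women's chess club captain, software engineering intern",   # rejected
    "women's coding society lead, hackathon winner",             # rejected
]
hired = [1, 1, 0, 0]  # invented outcomes reflecting a biased process

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for the token "women" comes out negative.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights.get("women"))
```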
This issue is going to get more important over time:
55 percent of U.S. human resources managers said artificial intelligence, or AI, would be a regular part of their work within the next five years, according to a 2017 survey
Already, many big employers are quietly using proprietary algorithms to screen job applicants, and LinkedIn uses AI to screen resumes on behalf of employers.
Similarly, when Microsoft unleashed an AI chatbot onto Twitter, within 24 hours it had absorbed a lot of the ugly misogynistic, racist side of Twitter, and Microsoft had to manually delete a wave of offensive tweets to avoid a corporate scandal. Facebook has over 100,000 AI bots and has had problems with its bots learning to lie to make sales. Although the bots were unethical liars, they were successful at selling, and they fooled most of the people they interacted with into thinking they were talking to real humans.
Safiya Umoja Noble wrote a book about this called Algorithms of Oppression: How Search Engines Reinforce Racism. She discussed her research with Sean Illing:
I started the book several years ago by doing collective searches on keywords around different community identities. I did searches on “black girls,” “Asian girls,” and “Latina girls” online and found that pornography was the primary way they were represented on the first page of search results. That doesn’t seem to be a very fair or credible representation…
Now, fortunately, Google has responded to this. They suppressed a lot of porn, in part because we’ve been speaking out about this for six or seven years. But if you go to Google today and search for “Asian girls” or “Latina girls,” you’ll still find the hypersexualized content.
For a long time, if you did an image search on the word “beautiful,” you would get scantily clad images of almost exclusively white women in bikinis or lingerie. The representations were overwhelmingly white women.
Safiya Noble also discusses how Google has helped spread radical fringe ideologies, such as Islamist extremism and white supremacy, by creating the impression that their ideas are mainstream and true. For example, Dylann Roof was an ordinary kid who was radicalized by white supremacists online in a fairly short period: he searched for answers to questions about race, was served up exclusively racist websites, and then massacred nine African-American worshipers in cold blood at their church in Charleston, South Carolina, in an attempt to start a race war.
In his manifesto, Dylann Roof …says that the first event that truly awakened him was the Trayvon Martin story. He says he went to Google and did a search on “black-on-white crime.” Now, most of us know that black-on-white crime is [a much, much tinier problem than white-on-white crime or black-on-black crime] — that, in fact, most crime happens within a community….
So Roof goes to Google and puts in …“black-on-white crime.” …of course, it immediately takes him to white supremacist websites, which in turn take him down a racist rabbit hole of conspiracy and misinformation. …This is how Roof gets radicalized. He says he learns about the “true history of America,” and about the “race problem” and the “Jewish problem.” He learns that everything he’s ever been taught in school is a lie. …and this leads to his “racial awareness.”
Roof came to believe that there was already a secret race war of black-on-white crime because of Google’s misleading search algorithms, which assume that people who search for that term want white supremacist results rather than the much less exciting factual analysis that is the truth. People think Google is a neutral arbiter of truth, but, like Facebook and other social media, it actually serves up the content its algorithms have learned generates clicks, and that is radicalizing all sorts of fringe groups who are better at generating clicks than the boring, mainstream truth.
Similarly, The Guardian points out that a Google search for the term “Jew” has long shown images of an antisemitic hook-nosed caricature and brought up antisemitic websites like “Jew Watch.”
In 2009, searches for “Michelle Obama” began returning a picture of the first lady’s face retouched to have ape-like features… Searching “rapist” before the US election was likely to bring up at least five images of Bill Clinton in the top 10.
Google’s CEO was recently raked over the coals in Congress because an image search for the word “idiot” brings up a lot of photos of President Trump. In fact, right now three-quarters of the top images that appear for “idiot” are pictures of Donald Trump. Even worse, a Google image search for “Trump” brings up hundreds of pictures of idiots!
Men dominate Google image searches for most jobs — even for bartender, probation officer and medical scientist, roles in which women outnumber men. In 57 percent of occupations, image searches indicate the jobs are more male-dominated than they actually are.
Kevin Drum has screenshots of image searches and more links.