Artificial Intelligence Isn't Good for Women, But We Can Fix It

"It's actually showing the same biases that society already has, it's extremely problematic.”

Artificial intelligence isn't necessarily good for women, but we can make it better.

Because we build and train AI, it reflects our biases and assumptions, including our racism and sexism. That's a problem, because AI is used everywhere: it controls driverless cars and powers voice assistants such as Siri and Alexa, but it also helps HR departments sift through resumes, decides who gets parole, and examines medical images. As its uses become more widespread and more consequential, AI's missteps and abuses become more dangerous.

If we don't get it right, sexism, racism, and other biases will be literally encoded into our lives by systems trained on flawed data that continues to leave women and people of color out of decision making. "We're ending up coding into our society even more bias, and more misogyny and less opportunity for women," says Tabitha Goldstaub, cofounder of AI startup CognitionX. "We could get transported back to the dark ages, pre-women's lib, if we don't get this right."

What is AI?

AI is made up of a myriad of different but related technologies that let computers "think" and make decisions, helping us automate tasks. That includes ideas such as neural networks, a machine-learning technique that is trained on datasets before being set loose to apply what it has learned. Show it a bunch of pictures of dogs, and it learns what dogs look like. Well, sometimes: the machines often manage it, but other times they can't tell chihuahuas from muffins.

AI is meant to make our lives easier. It's good at filtering information and making quick decisions, but if we build it poorly and train it on biased or false data, it could hurt people.

"A lot of people assume that artificial intelligence… is just correct and it has no errors," says Tess Posner, co-founder of AI4All. "But we know that that’s not true, because there's been a lot of research lately on these examples of being incorrect and biased in ways that amplify or reflect our existing societal biases."

How AI can hurt

The impacts can be obvious, such as a resume-screening bot favoring male, "white"-sounding names, but they can also be subtle, says Professor Kathleen Richardson of the School of Computer Science and Informatics at De Montfort University. "It's not like we go out into the world and the bank machine doesn't work for us because we're female," she says. "Some things do work for us. It's more just about the priorities that we start to have as a society. Those priorities, for example, often become the priorities of a small elite."

For example, researchers from the University of Washington have shown how one image-recognition system had gender bias, associating kitchens and shopping with women and sports with men — a man standing at a stove was labeled as a woman. "The biases that are inherited in our own language and our own society are getting... reflected in these algorithms," Posner says.
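To make the mechanism concrete, here is a minimal, hypothetical sketch, not the Washington researchers' actual system, of how a model trained on skewed examples simply reproduces the skew. It assumes Python with scikit-learn; the captions and labels are invented for illustration.

```python
# Toy illustration: a classifier trained on skewed data learns the skew, not the truth.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training captions in which kitchen scenes are always labeled "woman".
captions = [
    "person at stove in kitchen", "person cooking in kitchen",
    "person shopping at store", "person at stove in kitchen",
    "person playing sports outside", "person coaching sports team",
]
labels = ["woman", "woman", "woman", "woman", "man", "man"]

# Turn the captions into word counts and fit a simple classifier.
vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(captions), labels)

# A new kitchen scene gets labeled "woman" regardless of who is actually pictured,
# because the model has only ever seen that correlation in its training data.
print(model.predict(vectorizer.transform(["person standing at a stove"])))
```

Nothing in the code is malicious; the bias comes entirely from what the system was shown.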

And those biased labels and data are used to make decisions that impact lives. Goldstaub points to research at Carnegie Mellon University that found Google's recommendation algorithm was more likely to recommend "high-prestige and high-paying jobs to men rather than to women," while separate research from Boston University showed CV-sifting AI put men at the top of the pile for jobs such as programming.

Another example is COMPAS, an AI-based risk assessment tool used across the U.S. to predict how likely a defendant is to reoffend. "This was shown to be biased against African Americans," Posner says. A ProPublica investigation found that COMPAS rated black defendants as higher risk of reoffending than their white counterparts, and that it was "remarkably unreliable" at its job of forecasting who would break the law again.

"It's actually showing the same biases that society already has, it's extremely problematic," says Posner. "It's affecting people's lives, whether they're getting parole and what decisions the court is making… it's going to further marginalize certain populations."

Consider health care, says Goldstaub. "Men and women have different symptoms when having a heart attack — imagine if you trained an AI to only recognize male symptoms," she says. "You'd have half the population dying from heart attacks unnecessarily." It's happened before: crash test dummies for cars were modeled on male bodies, and female drivers were 47 percent more likely to be seriously injured in accidents; regulators only began requiring car makers to test with dummies based on female bodies in 2011.

"It's a good example of what happens if we don't have diversity in our training sets," says Goldstaub. "When it comes to health care, it's life or death — not getting a job is awful, but health care is life or death."

Educating women

There's one obvious way to encourage better systems, says Richardson: "we need more women in robots and AI."

Right now, that's not happening. According to the AAUW, only 26 percent of computing professionals in the U.S. are women, and there used to be more: back in the 1990s, more than a third of those working in tech were female. According to Google's own 2017 diversity figures, 31 percent of its workforce is women, but women fill only 20 percent of its technical roles. And only 1 percent of its tech employees (of any gender) are black; 3 percent are Hispanic. For AI in particular, Goldstaub estimates that only about 13 percent of those working in the field are women.

"I believe as a feminist the more women we can get into roles, the more diverse the output will be — and fewer shockers will get through," Goldstaub says.

Thankfully, groups such as AI4ALL have sprung up to help women step into careers in AI by encouraging high school students to take science, technology, engineering, and math (STEM) subjects. "When we look at the research about underrepresented populations and why they don’t go into the field, a lot of the research shows this actually stems back in high school at around age 15, which is when folks get discouraged or lose interest in STEM fields," says Posner.

Why is that? Posner points to a lack of role models, little exposure to technical subjects or innovation, and a general lack of encouragement. To fight back, AI4ALL shows high school students the path to an AI career, focusing in particular on girls, students from low-income families, and those from underrepresented ethnic backgrounds, and offering educational camps and mentorships with industry leaders. "And then we're supporting you throughout your career path and into your career, if this is the path that you choose," she says.

To push selected students toward creating ethical AI, the camps have them work on projects under the AI for Good banner, designing systems specifically for humanitarian causes such as computer vision for hospitals or natural-language processing for disaster relief efforts. "We've seen that it's actually really effective to teach rigorous AI concepts in the context of societal impact," she says.

And some of the projects AI4ALL students have made have been incredible, Posner says. By excluding a diverse range of people from AI development, we not only risk bias but also miss out on better ideas. "When we give access to more people, incredible things happen, and things that we could have never imagined before," she says. "That's why it's especially critical [that we] not miss out on the potential inventions and talents of all these amazing underutilized groups."

Being heard

Companies need to remember that there's more to diversity than hiring a token lady for the team, and women shouldn't be made to feel they need to represent their entire gender or race. "A lot of women go into science and the last thing they talk about is sexism or gender or differences like that," Richardson says. "When they enter these fields, the last thing they want to do is make an issue out of being a woman, if you know what I mean."

That means it's not just women's responsibility to make their female colleagues feel comfortable speaking up. "What tends to happen when the most powerful groups let in other people with less power is the people with less power go along with the people with the most power," Richardson says. "I've done it myself."

Simply having women in the room isn't enough; they need to be heard — and often enough, that means we need to stand up and make people listen. "You have to be brave and courageous to come in and challenge people with authority and power," Richardson says.

Degendering AI

One way to make AI less problematic for women is to take gender out of the equation. Alexa and Siri have something in common: they're both presented as clearly female characters, with female voices. That's taken further with virtual girlfriends such as Gatebox in Japan, and that's before we start talking about sex robots. But Alexa and Siri are a good place to start.

"What they tend to do is keep reproducing this idea of women as sexual objects to be used, to be appropriated," Richardson says, explaining that giving objects female personas cements existing power dynamics. "Women are expected to give away power, to acknowledge and look after men, to laugh at their jokes, flatter their ego — these kinds of things. So when you've got men then creating models of relationships [with AI assistants], they're creating a model of relationship that is very egocentric, not very neutral… I think that's what's underlying a lot of robots and AI."

Because of that, such tools should be gender neutral, Goldstaub argues. "We should degender our AI, so it's like a washing machine rather than a Tamagotchi. Things that are meant to stay as tools should stay as tools."

She adds: "If I was in the room [when the decision that Alexa would be a woman was made], I would have suggested we try some other voices," says Goldstaub. "Clearly that didn't happen."

Algorithmic accountability

Even if we flood the labs and offices developing AI with women, and in particular with women of color — which we should do — there will still be abuses of this technology as well as unintended consequences. And we need to be able to spot both.

That's why some researchers are arguing for algorithmic accountability. As it stands, many machine-learning and AI-based systems are essentially black boxes to end users: put data in, magic happens, and we get an answer. That's problematic when the data being pulled in is demographic, and the output is whether or not to keep an individual in jail pending trial.

We need to see how algorithms work in order to make sure that they work fairly. That could mean companies that make AI systems opening them up to researchers and regulators, or developers being forced to publish their methods; others suggest ethics boards to oversee such projects.

It also means the rest of us need to understand how AI works — and not see it as dark magic. "It's not just developers that need to understand — it's also healthcare workers, law enforcement, criminal justice, policy makers. You wouldn't think that they would have to deal with the impacts of AI, but they absolutely will," Posner says. "So demystifying it so the average person knows this is just a math tool, a technology tool, is important."

No single answer

Such a complicated problem requires multiple solutions: we need to encourage more women into tech and AI development, and support them once they get there; companies need to stop conflating women and objects, and remove gender from AI; and we need transparency around the algorithms we use, and we must not let ourselves be intimidated or confused by them.

If we don't get this right, there's a risk beyond the immediate damage: we may refuse to use AI at all, missing out on its potential benefits. "The technology itself also has tremendous potential for good and for creating benefits to human society," Posner says. "But we have to make sure that the ability to create with it and shape it is in the hands of as many people as possible that represent the diverse general population."
