
Gender, power and tech

HCDE’s Os Keyes unpacks how and where bias occurs in artificial intelligence. Spoiler: it’s everywhere.

An artistic rendering of Os Keyes's face. Photo by Dennis Wise / University of Washington

Os Keyes has been shedding light on inequalities and artificial intelligence (AI) since starting Human Centered Design & Engineering (HCDE)’s doctoral program in 2017. Working at the intersections of ethics, power and AI, they are known for their research into bias and surveillance technologies. In 2019, Keyes testified before the Washington State House of Representatives as an expert witness in support of House Bill 1654, which would prohibit government from using facial recognition technology. Since then, their research has expanded to examine AI’s impact more broadly, focusing on gender, disability and race. Keyes is one of the first recipients of Microsoft Research’s Ada Lovelace Fellowship.

The College of Engineering’s Chelsea Yates recently spoke with Keyes about gender and diversity in tech; how, why and where bias occurs in AI (spoiler: it’s everywhere); and what can be done to address it.

Why is it important to examine technology from an identity perspective?

Identity is social — and technology is part of society! When we look at the shape of identity categories — how individuals define themselves based on groups and communities they feel aligned with, how people come to acquire these identities, and how others make judgments about what it means to possess a particular identity — we're looking partly at something that is driven by technology. Because what technology does, and has always done, is reshape social worlds.

This is a longstanding area of interest for many researchers. I would be remiss if I did not point to the fantastic work of Simone Browne and Toby Beauchamp. They, and many others, inspire me.

How is AI biased?

People think AI is about the future, but really it’s about what is already here. Any system you develop depends on the current efforts of human beings and on data they’ve collected for different purposes, with different intentions, and with different levels of care. The result is that today’s societal assumptions and expectations are baked into AI. Since race, gender and disability are things we form assumptions about, they’re going to be present.

What can be done to address this bias? Is bringing more diversity into tech enough?

Diversifying the range of people in the technology industry is necessary but insufficient because it ignores the broader questions of power that surround technology workers: Who’s designing? Who’s funding and why? Who’s using the tools and who’s not? Who’s buying? Who’s profiting?

When we talk about bias, we’re really talking about power. Biases exist within a power structure, meaning that whoever (or whatever) holds power benefits from it, and those without power do not. Biases and inequalities emerge from this dynamic. More representation won’t “fix” the issues of bias and discrimination that arise because of power, and biases in tech cannot be corrected solely by people who are assumed to experience those biases. Addressing the injustices that result from technology requires not just a broader set of engineers, but a reimagining of the premises under which software is typically developed, like maximizing profit and universal adoption. Doing this requires examining relationships of power and reconfiguring those relationships when they are premised on domination rather than collaboration.

Your research deals with gender bias in AI and facial recognition technology. Tell us about it.

HCDE Ph.D. candidate Os Keyes. Photo by Dorothy Edwards, May 2019; courtesy of Os Keyes

There’s a lot of potential for harm and discrimination with facial recognition systems. Facial recognition technology assumes that gender has just two categories, each with clear visual attributes, and that these attributes are consistent for people across the globe. Like, a person with short hair must be a man and one with long hair must be a woman. Something this trivial can have serious consequences for individuals who don’t neatly fit into one category or the other. By reinforcing limited views of gender based on conventional norms, this technology plays a role in shaping our views of gender.
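To make that assumption concrete, here is a minimal, hypothetical sketch of how a binary label set typically gets hard-coded into a classification pipeline. The function names and feature data are invented for illustration; this is not Keyes’s code or any specific vendor’s system.

# Minimal sketch: the binary-gender assumption lives in the label set itself.
# Hypothetical feature vectors; not any real facial-recognition system's code.
from sklearn.linear_model import LogisticRegression

LABELS = ["man", "woman"]  # exactly two categories; nothing else is representable

def train_gender_classifier(face_features, label_indices):
    # face_features: numeric feature vectors; label_indices: 0 or 1 only
    model = LogisticRegression()
    model.fit(face_features, label_indices)
    return model

def predict_gender(model, face_features):
    # Every face is forced into one of the two labels; the system has no way
    # to answer "neither", "both", or "uncertain" unless a designer adds one.
    return [LABELS[i] for i in model.predict(face_features)]

However such a model is trained, anyone it classifies is pushed into one of those two boxes, which is exactly the limitation Keyes describes.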

We know that facial recognition technology is more likely to misgender or flag trans people as suspect. I’m now looking at the intersections of gender and racial bias in surveillance technology. These systems are dangerous because they yield inaccurate information, and they can result in potentially fatal outcomes, depending on how they’re used.

In what other areas of tech do we find gender bias?

In short, everywhere. Virtual “assistants” like Alexa and Siri are gendered as women, while “smart” computers like Watson are gendered as men. We find bias in targeted advertising. Recently it was discovered that women were being offered lower Apple Card credit lines than men, despite having better credit histories, because the algorithm being used was biased. Technologies used in hiring and job placement have screened men and women differently, which can affect who gets an interview and who doesn’t.

And health care technologies are full of bias, as health data skews toward male bodies. For example, the bodies of people assigned female at birth experience heart attacks differently from the standard trope of “a pain in your left arm.” So it’s all too easy to imagine algorithms trained on what everyone assumes a heart attack looks like, leading to deep disparities in who gets diagnosed and treated.
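As a rough, hypothetical illustration of how that skew can play out, the synthetic example below trains a simple classifier mostly on patients who present with the “classic” symptom and then checks how often it detects heart attacks in a group whose presentation differs. All rates, group sizes and names are invented for illustration, not drawn from any real clinical data.

# Synthetic sketch of skewed training data producing unequal detection rates.
# Not a real clinical model; numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def heart_attack_patients(n, classic_symptom_rate):
    # One feature: whether the patient reports the "classic" left-arm pain.
    return (rng.random((n, 1)) < classic_symptom_rate).astype(float)

# Training data over-represents the group with the classic presentation.
X_pos = np.vstack([heart_attack_patients(900, 0.9),   # over-represented group
                   heart_attack_patients(100, 0.3)])  # under-represented group
X_neg = (rng.random((1000, 1)) < 0.1).astype(float)   # non-cardiac patients
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(1000), np.zeros(1000)])

model = LogisticRegression().fit(X, y)

# Every patient below is having a heart attack; only the presentation differs.
print("detected, classic presentation: ",
      model.predict(heart_attack_patients(500, 0.9)).mean())
print("detected, atypical presentation:",
      model.predict(heart_attack_patients(500, 0.3)).mean())

With that skew, the model learns to key on the classic symptom, so it catches most heart attacks in the over-represented group and misses most in the other group — the kind of diagnostic disparity described above.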

Is it even possible to design a tool like AI that’s bias-free?

Honestly, I think the answer is no. Much of the time, when we talk about bias, what we are really talking about is over-generalization. If you try to build a model that can do one thing for everyone, it will always be biased (because you can’t include data on everyone) or so general that it isn’t useful. Sometimes bias is bias, and sometimes it is what happens when you apply a system designed with specific assumptions to environments that don’t meet them.

Part of the solution is for developers to recognize those assumptions and understand AI’s complicated past in the context of power and discrimination. Why was it developed in the first place, what was the intention, who was involved and who was left out? How will a new or improved tool interact with pre-existing infrastructure? Who will have access to it? How might it be co-opted?

Another is to design AI for specific contexts and situations instead of attempting to design for everyone. There still may be bias, but there’s more potential to catch and correct it. You can more realistically collect representative data from, and tailor an algorithm for, a community of 500 individuals than for 5 billion. And there’s greater potential for designers and users to communicate and collaborate: if something’s not working, those 500 can take it up with you much more easily than 5 billion can. Addressing inaccuracies, biases and harms is part and parcel of the same work as addressing what relationships between developers and users look like.

Learn more

Keep up with Os Keyes online to find out more about their research, read recent publications and follow their blog.

Originally published February 22, 2021