Interview

Disability Justice and Tech: A Conversation with Lydia X. Z. Brown

Lydia Brown, Catalina Vallejo
November 2, 2022

As part of our “What Is Just Tech?” series, we invited several practitioners and researchers to respond to a simple yet fundamental question: “What is just technology?” This interview was conducted by Just Tech program director Catalina Vallejo, who spoke with Lydia X. Z. Brown about how algorithmic technologies affect disabled people’s lives.

Brown is a policy counsel with the Privacy and Consumer Data Project at the Center for Democracy and Technology, focused on disability rights and algorithmic justice.

In their conversation, Vallejo and Brown discuss the systematic disenfranchisement that is coded into technology, which is often used to further oppress people with disabilities.

Catalina Vallejo (CV): We are very excited to learn more about your trajectory. You are a lawyer, you work at a think tank, you are an educator, and you have also participated in advocacy work. We want to hear about your trajectory and how you got where you are now.

Lydia X. Z. Brown (LB): I consider myself, first and foremost, a community organizer and a community builder. My roots are in the disability justice and disability rights movements, which are distinct (and not enough people realize that those are not interchangeable with one another). One is focused on using law and policy to change societal conditions for people with disabilities; that’s disability rights. The other is focused on radically transforming our society and culture, to attend to the values that inform how we think about disability and, in turn, how we respond to disabled people’s existence in our society.

For almost 15 years, my work has focused on a variety of issues of discrimination, oppression, and violence that impact the lives of disabled people, particularly disabled people who live at the intersections of race, class, gender, sexuality, faith, language, and nation, which has come up in many different forms and in many different contexts.

In my current work at the Center for Democracy and Technology, I am leading work addressing algorithmic discrimination and injustice that harms disabled people. Our work has looked at several key areas in which algorithmic technologies affect disabled people’s lives: public benefits determinations, employment and hiring, health-related applications, and surveillance and criminalization. The work that I do centers on one key aspect of disability justice, which is informed and led by the experiences, wisdom, and knowledge of those who are most directly impacted by systems and structures of oppression and marginalization.

CV: Your work in the algorithms of discrimination space is fascinating and a complex area of tech justice work. I would love to discuss some examples of that work and how it impacts different marginalized groups.

LB: Primarily, we use algorithms for two purposes. One is to make assessments or evaluations to help understand or make sense of existing data or information. The other is to make predictions or to make suggestions. That is what we often refer to as predictive analytics or data analytics—an attempt to use an algorithmic model to predict what the best course of action should be or what the likely outcome of a particular or given situation might be—in other words, to attempt to make sense out of a very uncertain, confusing, and complicated world.

Today, algorithms are ubiquitous. They appear in every aspect of our lives. They dictate what advertisements you see while you are browsing social media. They dictate whether you’ll be instantly approved for a credit card or another type of personal loan. And they determine more pressing questions, such as whether you might be eligible to rent a particular apartment, whether you will be considered when applying for a job, or whether a judge will decide that you can be released on bail while criminal charges are pending against you.

We already know from the work of so many leaders in the field, people like Safiya Noble and Ruha Benjamin, that algorithmic models are often profoundly racist and misogynistic. Their modeling depends upon norms and data sets that center and normalize the experiences of white men, while simultaneously marginalizing and erasing the experiences of people of color, women, and particularly women of color. What many people don’t realize is that algorithmic models are often profoundly discriminatory to disabled people as well, often for some of the same reasons.

One of the major areas our work has looked at is the use of algorithmic models and programs in the hiring context. We already know that resume screening software, which performs automated keyword or linguistic analysis to look for certain terms or types of experiences in a person’s resume, can reproduce discriminatory hiring patterns against people who are racially marginalized or who are gendered in marginalized ways. For example, a resume screening software program may learn or be taught that a person who has an Ivy League education on their resume is a more desirable candidate than someone who does not. Or that a person who has certain keywords related to leadership or management on their resume is a more desirable candidate than someone who doesn’t.

Those screening software programs are going to screen out people of color and gender-marginalized people, who’ve been systematically disenfranchised, excluded, or denied access to those same opportunities. The same thing is true for disabled people. It’s not surprising that hiring algorithms that tend to screen out other marginalized communities also tend to screen out members of the disabled community too.
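
To make the mechanism concrete, the following is a minimal Python sketch of keyword-weighted resume scoring of the kind described above. Everything in it (the keywords, weights, gap penalty, and threshold) is hypothetical and not drawn from any real screening product; the point is how criteria that stand in for privilege and for an uninterrupted career path can reject an applicant before a human ever reads the resume.

```python
# A minimal, hypothetical sketch of keyword-weighted resume screening.
# The keywords, weights, penalty, and threshold below are invented for
# illustration and are not taken from any real product. The structural
# problem is what matters: the scoring criteria act as proxies for
# privilege and for an uninterrupted, "typical" career path.

RESUME_KEYWORDS = {
    "ivy league": 3.0,        # proxy for class and educational access
    "leadership": 2.0,
    "managed a team": 2.0,
    "varsity": 1.0,           # proxy that can exclude many disabled applicants
}

EMPLOYMENT_GAP_PENALTY = 2.0  # penalizes time away for illness, treatment, caregiving
INTERVIEW_THRESHOLD = 3.0


def score_resume(text: str, has_employment_gap: bool) -> float:
    """Sum the weights of every keyword found in the resume text."""
    text = text.lower()
    score = sum(weight for keyword, weight in RESUME_KEYWORDS.items() if keyword in text)
    if has_employment_gap:
        score -= EMPLOYMENT_GAP_PENALTY
    return score


def advances_to_interview(text: str, has_employment_gap: bool) -> bool:
    """A human only sees resumes that clear the threshold."""
    return score_resume(text, has_employment_gap) >= INTERVIEW_THRESHOLD


if __name__ == "__main__":
    candidate = "Self-taught developer; led accessibility work at a nonprofit."
    # Rejected despite relevant experience: the resume lacks the privileged
    # keywords, and a medical leave shows up only as a penalized gap.
    print(advances_to_interview(candidate, has_employment_gap=True))  # False
```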

We want to pull back a little bit and look at more complex tools. Tools that purport to perform sentiment analysis of people’s video recorded responses. Tools that attempt to measure or assess someone’s risk-taking behaviors, their impulsivity, or their creativity. We can recognize that those types of tools are inherently connected to surveillance. In all these types of algorithmic programs, the algorithm, by definition, is seeking to measure and assess a person’s performance against a perceived or presumed norm. And, by definition, disabled people fall outside the norm, meaning that we’re more likely to be labeled as unsuccessful, as having a bad or negative personality, as being a poor fit for a working environment, or as someone who is deserving of further scrutiny, of potential discipline, and perhaps even punishment.
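
The pattern described here, scoring people by their distance from a statistical norm, can be sketched in a few lines of Python. Again, the features and numbers are hypothetical; what matters is the structure: the norm is learned from past data, and anyone whose measurements deviate from it is flagged, whatever the reason for the deviation.

```python
# A hypothetical sketch of "distance from the norm" scoring, the pattern
# described above. Features and numbers are invented. The norm is learned
# from past hires, so whoever is overrepresented in that history defines
# "normal," and any deviation is flagged regardless of its cause (for
# example, atypical eye contact, speech rhythm, or response timing).

from statistics import mean, stdev


def build_norm(past_hire_features: list[list[float]]) -> list[tuple[float, float]]:
    """Per-feature mean and standard deviation from historical 'successful hires'."""
    columns = list(zip(*past_hire_features))
    return [(mean(column), stdev(column)) for column in columns]


def flag_candidate(features: list[float],
                   norm: list[tuple[float, float]],
                   z_cutoff: float = 2.0) -> bool:
    """Flag anyone whose measurements sit far from the learned norm."""
    for value, (mu, sigma) in zip(features, norm):
        if sigma > 0 and abs(value - mu) / sigma > z_cutoff:
            return True  # labeled a "poor fit," whatever the person's actual ability
    return False


if __name__ == "__main__":
    # Training data drawn mostly from one kind of candidate.
    past_hires = [[0.90, 0.80, 0.85], [0.88, 0.82, 0.90], [0.92, 0.79, 0.87]]
    norm = build_norm(past_hires)
    # A candidate who differs on a single (hypothetical) feature is flagged.
    print(flag_candidate([0.30, 0.80, 0.86], norm))  # True
```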

CV: Could you talk more about the advocacy and research that you’ve been doing, specifically in the area of disability justice at the intersection with technology?

LB: Most researchers and advocates talking about disability and technology approach it from an angle that is focused on accessibility or usability of different technological devices, programs, interfaces, software, or even mere access to technology. Unfortunately, that’s not as big of a conversation as it should be. The reality is that, even if we’re just going to think about accessibility, we have systematically fallen far short of where we need to be.

Audits against the Web Content Accessibility Guidelines (WCAG) 2.0, the international standard for accessibility in information and computing technologies maintained by the W3C, have estimated that over 98 percent of web content fails to comply with the guidelines. Disabled people, with a wide variety of disabilities, can tell you that most devices, interfaces, programs, and websites are not meaningfully accessible for people with a range of sensory, cognitive, and physical disabilities.
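
For readers unfamiliar with what automated accessibility audits actually test, here is a minimal Python sketch of one common check: images published without alt text, a frequent WCAG 2 failure. It is illustrative only, and checks like this catch just a subset of the barriers disabled users encounter.

```python
# A minimal sketch of one automated accessibility check: counting <img>
# elements that have no alt attribute, one of the most commonly reported
# WCAG 2 failures (Success Criterion 1.1.1, non-text content). This is an
# illustration, not a compliance tool; automated checks catch only a
# fraction of real barriers (they cannot tell whether alt text that is
# present is actually meaningful).

from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes.
        if tag == "img" and "alt" not in {name for name, _ in attrs}:
            self.images_missing_alt += 1


if __name__ == "__main__":
    sample_html = """
    <html><body>
      <img src="logo.png" alt="Company logo">
      <img src="chart.png">
    </body></html>
    """
    checker = MissingAltChecker()
    checker.feed(sample_html)
    print("images missing alt text:", checker.images_missing_alt)  # 1
```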

Yet at the same time, the conversation about accessibility or usability is missing several key questions: (1) Who is leading and designing the research? (2) Is this research accountable to and responsive to the actual stated needs of the most directly impacted communities? (3) What is the objective or the outcome of the research that we are doing? Is this research moving us in the direction of building a better world? These are not the questions that most researchers, whether academic or corporate, are incentivized to incorporate into their design work.

Instead, sometimes the wrong questions are asked. Is this technology—that is upholding the carceral state—usable for people with disabilities? Is this technological application fair for people with disabilities? I don’t want the carceral state to be “fair” to disabled people; I want to see the end of the carceral state. I am working toward an abolitionist, noncarceral future.

CV: From your perspective, what are the most significant opportunities to align tech with social justice values?

LB: It must come from the top. Those who hold power and resources must be willing to concede power, share resources, and redirect that bottom-line question before we can begin to think about what it means to integrate a disability justice perspective into technological development and deployment. Until the people who hold the purse strings, the people who hold the most power and the most privilege, are willing to grapple with what it means to have built whole empires from exploitation and injustice, we’re not going to get anywhere.

Are the people who hold power willing to give it up? And if not, how can we take back that power? How can our communities work without relying upon or waiting for those who hold the most power to cede it? How can we develop technologies that work for us, technologies that address questions of justice and questions of access, as we understand and experience them?

CV: We would love to hear about what the disability community is doing. Who are the people working in this space that others should look to?

LB: Upturn, for example, is injecting a deliberately justice-focused perspective into technology policy and advocacy. Among others, Mutale Nkonde at AI for the People, Aimi Hamraie’s Critical Design Lab at Vanderbilt, Crystal Lee at MIT, and Karen Nakamura’s Disability Lab at UC Berkeley are developing technological innovations informed by disabled people’s lived experiences, not as a means of trying to therapize or pathologize disabled people, but to apply disabled people’s knowledge.

There are some excellent folks to follow on Twitter, including Alice Wong, founder of the Disability Visibility Project; Tinu Abayomi-Paul, who does fascinating work around access in tech; and s.e. smith, who is a disabled disability journalist. Also, I learn an enormous amount from Talila Lewis (TL).

There are some great resources on the websites of organizations like Sins Invalid, Health Justice Commons, and HEARD, as well as the Autistic Women & Nonbinary Network, which is an organization I’ve also done quite a bit of work with.

Finally, I will always recommend Eli Clare’s books, Exile and Pride and Brilliant Imperfection, and Leah Lakshmi Piepzna-Samarasinha’s books, Care Work: Dreaming Disability Justice and The Future Is Disabled: Prophecies, Love Notes and Mourning Songs. Shayda Kafai has a fantastic book called Crip Kinship, which traces the history of Sins Invalid as an organization led by many of the co-creators of disability justice practice and principles. And my friend and colleague Emily Ladau also published a book called Demystifying Disability, which is a great introduction and primer to disability thinking. Of course, this is just a sampling.
