Disabling Intelligences: An Antidote to Eugenic AI
Are humans soon to be replaced by artificial intelligence (AI), making human labor obsolete? Will AI surpass the capabilities of human intelligence, rendering us inferior to machines? Such hyperbolic questions have been difficult to escape as Big Tech offers grandiose promises of automation, replacement, artificial greatness, and a democratic techno-utopia. A much-needed reality check, Disabling Intelligences: Legacies of Eugenics and How We Are Wrong about AI cuts through AI hype and doomerism by arguing that AI systems act as engines of discrimination and social stratification, devalue human labor, and distract from systemic injustices. Rua Williams, an assistant professor in the School of Applied and Creative Computing at Purdue University, unites critical disability studies and science and technology studies (STS), breaking down the eugenic logics behind current AI projects and showing how those logics lead to tangible harms, especially for those most marginalized in our society. Disabling Intelligences fills a crucial gap in critical AI scholarship by centering an understanding of eugenic influences in its critique of the AI industry.

Disabling Intelligences takes a strong and pragmatic stance against the uncritical development, implementation, and acceptance of AI. But don’t get it twisted; this book does not indiscriminately reject emerging technologies. Williams instead asserts that the book is a “conscientious objection to top-down dictatorial automation of human thought and skill as a capitalist ploy for the devaluation of labor and therefore the devaluation of laborers and human life in general.”[1] Disabling Intelligences argues that we are wrong about what AI is, does, and should do because there are uninterrogated eugenic logics shaping the technology we create and the problems we think we’re fixing. By eugenic logics, Williams is referring to values from “the project of eugenics,” which “created a separation between worthy and unworthy life on the basis of race, class, gender, sexuality, and disability,” and how “the project of AI is producing a separation between worthy and unworthy minds.”[2] To illustrate this point, Williams employs the lived experiences of disabled people to highlight how ableism is a key element of the eugenic approach to AI. In using “we” and “our” throughout the book, Williams is referring to “the sense of a broad public.”[3]
Over the course of 136 pages, Williams provides readers with the tools to critically evaluate the role they want AI to play in their own lives and in our shared future. Disabling Intelligences is a short yet densely packed essential read for scholars, technologists, activists, and anyone who interacts with AI in any capacity. Though the book covers topics such as AI’s relationship to discrimination, oppression, labor exploitation, and eugenics, it is nonetheless a pleasant reading experience thanks to the captivating and playful tone of Williams’s writing. Personal anecdotes and witty remarks are sprinkled throughout, such as the author’s own “villain origin story”[4] working at a tech startup. The soul and personality these add to the book are refreshing—a fitting and uniquely human charm in a book that critiques the eugenic logics behind the use of generative AI for writing as a force of conformity.
The book is structured in a digestible manner that is well suited for readers from various backgrounds, both in terms of field and level of prior knowledge on eugenics and the umbrella of systems often called AI. Disabling Intelligences is broken down into five chapters, each of which tackles a different popular belief about AI. Chapter 1 introduces eugenics, explaining the term, its history, and its pathway through statistics, AI, and the present day. It also establishes the book’s focus on the intersection of AI and disability to illustrate the human consequences of the eugenic logics embedded into AI.
In Chapter 2, Williams corrects inaccuracies in popular understandings of AI and unpacks the ableist tropes in contemporary media that shape them. Further, they emphasize the frustratingly unproductive nature of discussing a broad concept like AI without delineating specific technologies. To address this problem, Williams offers an informative taxonomy to understand and discuss the six types of systems commonly referred to as AI: Automators, Simulators, Discriminators, Predicators, Amalgamators, and Subjugators. These terms lift the curtain on how AI systems actually work, what they are and aren’t capable of, and what their common pitfalls are. This helps readers to critically understand these systems in order to meaningfully address their prospects, implications, and harms. Additionally, Williams emphasizes that by understanding AI’s capabilities, it becomes clear that the real concern is not merely the AI system itself, but the true purpose for which it’s built.
In Chapter 3, Disabling Intelligences challenges popular narratives about AI’s role in our daily lives, from dating apps, to finance, to medicine, and more. Special attention is given to AI’s many harmful impacts, intended and unintended, experienced predominantly by marginalized communities. Some examples discussed in the book include Black mortgage applicants’ experiences with algorithmic lending discrimination as well as how medical and insurance companies’ profit-motivated implementation of AI technologies harms disabled people. These examples illustrate that the problem is not merely the technology itself, but the larger human systems of oppression in which the technology is situated, and which subsequently shape the values and assumptions encoded into its design and implementation.
Following this line of thought, in Chapter 4, Williams explores the “underlying conditions of dystopia necessary for the kinds of autonomous conveniences we have come to imagine AI will bring us.”[5] One such example is our desire for automation. A common line of thinking is that it is desirable to automate tasks to free up more time and relieve our overwhelming workload. Yet, in actuality, automation often creates new tasks to manage the automation, shifting the labor rather than eliminating it. This phenomenon is referred to in the book as labor displacement. And when automation does free up time, a task void is created, which we simply fill with more tasks. Additionally, Williams takes a closer look at what kinds of tasks we seek to automate. They highlight how a dislike for domestic labor and desire to automate it are directly linked to past and current antagonism toward the people who have historically performed this feminized and racialized labor. Further, the automation of certain types of labor contributes to the devaluation of that labor and, in turn, the laborers.
Another example of the dystopic conditions Williams highlights in Chapter 4 is a rhetorical strategy that they refer to as “The Disability Diversion.”[6] The Disability Diversion happens when the designers and developers of a technology claim their technology will benefit disabled people in order to justify its creation, even though the design is not actually centered on disabled people’s needs, desires, or input. Instead, the Disability Diversion disguises corporate intentions as charitable, reinforces ableist ideologies of cure, and preys on disabled people for data collection. This demonstrates a key argument made throughout Disabling Intelligences: Our captivation with the promise of using AI to solve complex social problems is not only rooted in a false belief of what AI can do, but it’s also a distraction from the underlying systemic conditions creating the problems that need solving in the first place. Furthermore, Williams explains that AI hype feeds into a false equivalence between scale and progress. When we believe that the ideal of what we want for AI is application at scale, it becomes easy to overlook the ethical consequences of the decisions made in service of expansion.

At the core of Williams’s call to action is a vital reminder that we are not powerless to make change. Further, it is essential not to get caught up in the idea that these harmful systems are inevitable, mistaking fatalism for pragmatism. In Chapter 5, the book explores methods to identify and disrupt harmful sociotechnical systems. To effectively make change, Williams posits, it is first essential to understand our individual roles within AI systems and how our roles may change depending on the context. Here, Williams moves beyond the general “we” used throughout the prior chapters and defines the following roles within AI systems: users, clients, builders, executives, researchers, reporters, and governors. They posit that “[e]ven if you are not consciously reflecting on the possible consequences of your actions, everyday decisions have material influence on the world.”[7] Recognizing our own roles is crucial, because we all participate in these unjust systems in some way or another, and our choices, even the mundane, matter.
Insisting that a different path forward is possible, Williams first discusses a range of tactics and strategies from other critical technology scholars, such as Ruha Benjamin’s Abolitionist Toolkit and Anita Say Chan’s calls for data pluralism. Building on this work, Williams introduces what they call the “Just AI Toolkit”[8] to empower readers to disrupt systems that do not align with their values. This toolkit helps to evaluate, resist, and transform unjust sociotechnical systems based on one’s role in the system. The Just AI Toolkit is composed of four stages: specify, observe, assess, and rewrite. These stages are designed to aid readers in evaluating sociotechnical systems, developing strategies for resistance and refusal, and “recogniz[ing] themselves as belonging in the fight for Just Technology.”[9]
Among the book’s many incisive critiques of the often-overlooked dimensions of AI-related harms, Williams’s explanation of metaeugenics, eugenics’ covert successor and a “pernicious logic of self-loathing,”[10] stands out as the most profound. Williams defines metaeugenics as “the undercurrent of cultural norms, ideals, values, and demands that warp and twist deviant bodies into conformity via a desperate drive for survival and future.”[11] Metaeugenics lives even in our relationships to ourselves and what we think makes us worthy. Williams states that “metaeugenic thought is something you can do to yourself—when the project of eugenics has become so embedded and entangled within the concepts of rationality and reason that you can fervently believe in the rightness of your own destruction.”[12] The focus on the individual mindset here stems from Williams’s belief in “how changing our relationship with ourselves can change the whole world.”[13] Reading Williams’s commentary on metaeugenics feels like being handed a mirror and asked to confront the internalized values lurking deep in one’s consciousness, as well as their connection to the hidden underbelly of today’s sociotechnical systems.
Disabling Intelligences is a fast-paced and illuminating read that takes the reader on a journey to understand the eugenic logic that fuels the current AI moment. It clarifies how AI systems work, examines the web of systemic problems behind AI “solutions,” and entrusts us to envision and create change for a better future. It also gives us hope for the transformative power of our actions, reminding us that “[t]hrough our own despair, we let injustice live.”[14] Deconstructing society’s pervasive ableism, and its eugenic roots, is essential for understanding the current cultural moment and why we’re so obsessed with the false promise of AI “greatness.” After all, as the book states, it’s not AI’s eugenics problem, it’s our eugenics problem. In Williams’s words, “[t]he structures of oppression are human.”[15] Therefore, to effectively resist, we must make changes within ourselves.
Footnotes
1. Rua M. Williams, Disabling Intelligences: Legacies of Eugenics and How We Are Wrong about AI (Cham: Palgrave Macmillan, 2025), 3.
2. Williams, Disabling Intelligences, 35.
3. Williams, Disabling Intelligences, 107.
4. Williams, Disabling Intelligences, 2.
5. Williams, Disabling Intelligences, 76.
6. Williams, Disabling Intelligences, 87.
7. Williams, Disabling Intelligences, 123.
8. Williams, Disabling Intelligences, 107.
9. Williams, Disabling Intelligences, 110.
10. Williams, Disabling Intelligences, 11.
11. Williams, Disabling Intelligences, 10.
12. Williams, Disabling Intelligences, 10.
13. Williams, Disabling Intelligences, 110.
14. Williams, Disabling Intelligences, 108.
15. Williams, Disabling Intelligences, 127.