Discriminating Systems: Gender, Race, and Power in AI
There is a diversity crisis in the AI sector across gender and race. Recent studies found that only
18% of authors at leading AI conferences are women,i and more than 80% of AI professors are men.ii This disparity is extreme in the AI industry:iii women comprise only 15% of AI research
staff at Facebook and 10% at Google. There is no public data on trans workers or other gender
minorities. For black workers, the picture is even worse. For example, only 2.5% of Google’s
workforce is black, while Facebook and Microsoft are each at 4%. Given decades of concern and
investment to redress this imbalance, the current state of the field is alarming.
The AI sector needs a profound shift in how it addresses its diversity crisis. The industry
must acknowledge the gravity of the problem, and admit that existing methods have failed to
contend with the uneven distribution of power and with the means by which AI can reinforce
such inequality. Further, many researchers have shown that bias in AI systems reflects historical
patterns of discrimination. The workforce diversity crisis and bias in AI systems are two
manifestations of the same problem, and they must be addressed together.
The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women
over others. We need to acknowledge how the intersections of race, gender, and other identities
and attributes shape people’s experiences with AI. The vast majority of AI studies assume gender
is binary, and commonly assign people as ‘male’ or ‘female’ based on physical appearance and
stereotypical assumptions, erasing all other forms of gender identity.
Fixing the ‘pipeline’ won’t fix AI’s diversity problems. Despite many decades of ‘pipeline studies’
that assess the flow of diverse job candidates from school to industry, there has been no
substantial progress in diversity in the AI industry. The focus on the pipeline has not addressed
deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring
practices, unfair compensation, and tokenization that are causing people to leave or avoid
working in the AI sector altogether.
The use of AI systems for the classification, detection, and prediction of race and gender
is in urgent need of re-evaluation. The histories of ‘race science’ are a grim reminder that
race and gender classification based on appearance is scientifically flawed and easily abused.
Systems that use physical appearance as a proxy for character or interior states are deeply
suspect, including AI tools that claim to detect sexuality from headshots,iv predict ‘criminality’
based on facial features,v or assess worker competence via ‘micro-expressions.’vi Such systems
replicate patterns of racial and gender bias in ways that can deepen and justify historical
inequality. The commercial deployment of these tools is cause for deep concern.