Systemic Racism and Inherent Biases in AI, Machine Learning, and Beyond
From policing and sentencing to incarceration and parole, every step of the criminal legal process can now be outsourced to algorithmic decision-making systems. Social media monitoring tools, risk assessment instruments, facial recognition software, and data-driven policing technologies are being designed and deployed at a rapid pace, with little to no interrogation of the ways such technologies can reproduce social hierarchies, amplify discriminatory outcomes, and legitimize violence against marginalized groups that are already disproportionately overpoliced.
This webinar, held on April 1, 2021, featured Rashida Richardson, Visiting Scholar at Rutgers Law School and the Rutgers Institute for Information Policy and Law; Cathy O'Neil, author, mathematician, and founder of ORCAA, an algorithmic auditing company; and Cierra Robson, a doctoral student in the Sociology and Social Policy program at Harvard University and the Inaugural Associate Director of the Ida B. Wells JUST Data Lab at Princeton University.