Oh No, Not Another Trolley! On the Need for a Co-Liberative Consciousness in CS Pedagogy
Due to growing concerns about the disproportionate dangers that artificial intelligence (AI) advances pose to marginalized groups, proposals for procedural solutions to ethics in AI abound. It is time to consider that some systems may be inherently unethical, even violent, whether or not they are fair. In this article, we deploy a feminist critical discourse analysis of long-format responses to ethical scenarios from computing science undergraduate students. We find that even among students who had a strong understanding of social justice and of the power of AI to exacerbate existing inequities, most contextualize these problems as the product of biased datasets and human mis/trust factors, rather than as problems of design and purpose. Further, while many students recognized racism and classism at play in the potential negative impacts of AI systems, most failed to recognize ableism as a driving social force for inequity. As computing science faculty, we must recognize that our students graduate to become the researchers and developers of future technosocial systems. Pedagogically, we need more than procedural fixes to systemic inequities. We are not going to program our way into justice. We must learn to say no to building violent things.