Social impacts of algorithmic decision-making: A research agenda for the social sciences | Big Data & Society
Academic and public debates are increasingly concerned with whether and how algorithmic decision-making (ADM) may reinforce social inequality. Most previous research on this topic originates from computer science. The social sciences, however, have great potential to contribute to research on the social consequences of ADM. Based on a process model of ADM systems, we demonstrate how the social sciences may advance the literature on the impacts of ADM on social inequality by uncovering and mitigating biases in training data, by understanding data processing and analysis, and by studying the social contexts of algorithms in practice. Furthermore, we show that fairness notions need to be evaluated with respect to specific outcomes of ADM systems and with respect to concrete social contexts. The social sciences may evaluate how individuals handle algorithmic decisions in practice and how single decisions aggregate into macro-level social outcomes. In this overview, we highlight how the social sciences can apply their knowledge of social stratification and of the substantive domains of ADM applications to advance the understanding of the social impacts of ADM. Source: Social impacts of algorithmic decision-making: A research agenda for the social sciences – Frederic Gerdon, Ruben L Bach, Christoph Kern, Frauke Kreuter, 2022
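To make concrete why fairness notions must be tied to specific outcomes, here is a minimal sketch (not from the paper; the data, group labels, and helper functions are all hypothetical) showing how the same set of algorithmic decisions can satisfy one common fairness criterion, demographic parity, while violating another, equal opportunity:

```python
# Hypothetical illustration: the same predictions can pass one fairness
# criterion and fail another. All data below is made up for the example.

def demographic_parity_gap(y_pred, group):
    """Difference in positive-decision rates between groups A and B."""
    rate = lambda g: sum(p for p, grp in zip(y_pred, group) if grp == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between groups A and B."""
    def tpr(g):
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(p for _, p in pairs) / len(pairs)
    return abs(tpr("A") - tpr("B"))

# Toy example: a screening model applied to two groups of four people each.
group  = ["A"] * 4 + ["B"] * 4
y_true = [1, 1, 0, 0,   1, 0, 0, 0]   # actual outcomes
y_pred = [1, 0, 1, 0,   1, 1, 0, 0]   # model decisions

print(demographic_parity_gap(y_pred, group))          # 0.0 -> parity satisfied
print(equal_opportunity_gap(y_true, y_pred, group))   # 0.5 -> opportunity violated
```

Which gap matters depends on what the decision does in the world; when a positive decision confers a benefit, the true-positive-rate gap is often the more relevant notion, which illustrates the paper's point that fairness must be evaluated against concrete outcomes and contexts.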
FTC to Prioritize Cybersecurity and Data Minimization Enforcement Under COPPA to Bolster Student Privacy | Center for Democracy and Technology
The Center for Democracy & Technology (CDT) welcomes the unanimous approval by the Federal Trade Commission (FTC) of a policy statement that underscores education technology vendors’ responsibilities under the Children’s Online Privacy Protection Act (COPPA). The statement acknowledges the importance of technology in students’ lives and makes clear that the FTC intends to increase its enforcement of COPPA’s existing requirements related to data security and data minimization. This development is an important step toward improving privacy for students and children and securing their data, as student and children’s privacy laws have long been criticized for their lack of enforcement. “The FTC’s policy statement underscores the importance of thoughtful data practices in protecting students’ privacy,” said CDT President & CEO Alexandra Givens. “Limitations on data collection, use, and retention are essential to protect individuals from privacy harms and cybersecurity risks. We applaud the FTC for its work to strengthen enforcement of children’s privacy requirements in the context of education technology, and particularly thank the Commissioners who championed data minimization as a vital component of this work. While this policy statement represents an important step forward, we also join the call for the FTC to complete its long-awaited review of the regulations that govern children’s privacy, and to align those reforms with the wider movement to protect everyone’s privacy at the federal level.” [...] Source: FTC to Prioritize Cybersecurity and Data Minimization Enforcement Under COPPA to Bolster Student Privacy – Center for Democracy and Technology
Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled people – Center for Democracy and Technology
Introduction

Algorithmic technologies are everywhere. At this very moment, you can be sure students around the world are complaining about homework, sharing gossip, and talking about politics — all while computer programs observe every web search they make and every social media post they create, sending information about their activities to school officials who might punish them for what they look at. Other things happening right now likely include:

- Delivery workers are trawling up and down streets near you while computer programs monitor their location and speed to optimize schedules and routes and to evaluate their performance;
- People working from home are looking at their computers while their computers stare back at them, timing their bathroom breaks, recording their screens, and potentially listening to them through their microphones;
- Your neighbors – in your community or the next one over – are being tracked and flagged by algorithms that target police attention and resources to some neighborhoods but not others;
- Your own phone may be tracking data about your heart rate, blood oxygen level, steps walked, menstrual cycle, and diet, and that information might be going to for-profit companies or your employer. Your social media content might even be mined and used to diagnose a mental health disability.

Algorithmic technologies have pervaded every aspect of modern life, and the algorithms are improving. But while algorithmic technologies may become better at predicting which restaurants someone might like or which music a person might enjoy listening to, not all of their possible applications are benign, helpful, or just. [...] Source: Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled people – Center for Democracy and Technology
The Pandemic’s Innovation Lessons for the Climate Crisis | Project Syndicate
As with the development of COVID-19 vaccines, confronting today’s mounting climate challenges requires close cooperation between the public and private sectors, as well as between countries. Shaping markets and industries via the right policy mix of targeted government financing and procurement can accelerate the green transition. LONDON – While the COVID-19 crisis brought much suffering and many socioeconomic burdens, it has also shown how targeted cooperation between the state and business can speed innovation. Addressing the climate crisis calls for similarly creative collaboration. In both cases, accelerating innovation and experimenting with local solutions are necessary but not sufficient. Essential technologies – whether vaccines or renewables – must be diffused globally. The first COVID-19 vaccines were granted emergency-use authorizations in the United States and the European Union less than a year after the pandemic began. Established innovation systems and adequate manufacturing capacity were important factors. Without longstanding cooperation between private and public institutions, and government promotion and funding of research, rapid COVID-19 vaccine development would not have been possible. [...] Source: The Pandemic’s Innovation Lessons for the Climate Crisis by Antonio Andreoni & Rainer Quitzow – Project Syndicate
If Tech Fails to Design for the Most Vulnerable, It Fails Us All | WIRED
What do Russian protesters have in common with Twitter users freaked out about Elon Musk reading their DMs and people worried about the criminalization of abortion? It would serve them all to be protected by a more robust set of design practices from companies developing technologies. Let’s back up. Last month, Russian police coerced protesters into unlocking their phones to search for evidence of dissent, leading to arrests and fines. What’s worse is that Telegram, one of the main chat-based apps used in Russia, is vulnerable to these searches. Even just having the Telegram app on a personal device might imply that its owner doesn’t support the Kremlin’s war. But the builders of Telegram have failed to design the app with considerations for personal safety in high-risk environments, and not just in the Russian context. Telegram can thus be weaponized against its users. Likewise, amid the back and forth about Elon Musk’s plan to buy Twitter, many people who use the platform have expressed concerns over his bid to forefront algorithmic content moderation and other design changes on the whim of his $44 billion fancy. Bringing in recommendations from someone with no framework of risk and harms to highly marginalized people leads to proclamations of “authenticating all humans.” This seems to be a push to remove online anonymity, something I’ve written about very personally. It is ill-thought-through, harmful to those most at risk, and backed by no actual methodology or evidence. Beyond his unclear outbursts for changes, Musk’s previous actions combined with the existing harms from Twitter’s current structures have made it clear that we’re heading toward further impacts on marginalized groups, such as Black and POC Twitter users and trans folks. Meanwhile, lack of safety infrastructure is hitting home hard in the US since the leak of the draft…
U.S. cities are backing off banning facial recognition as crime rises | Reuters
Facial recognition is making a comeback in the United States as bans to thwart the technology and curb racial bias in policing come under threat amid a surge in crime and increased lobbying from developers. Virginia in July will eliminate its prohibition on local police use of facial recognition a year after approving it, and California and the city of New Orleans as soon as this month could be next to hit the undo button. Homicide reports in New Orleans rose 67% over the last two years compared with the two years prior, and police say they need every possible tool. "Technology is needed to solve these crimes and to hold individuals accountable," police Superintendent Shaun Ferguson told reporters as he called on the city council to repeal a ban that went into effect last year. Efforts to get bans in place are meeting resistance in jurisdictions big and small, from New York and Colorado to West Lafayette, Indiana. Even Vermont, the last state left with a near-100% ban against police facial-recognition use, chipped away at its law last year to allow for investigating child sex crimes. [...] Source: U.S. cities are backing off banning facial recognition as crime rises | Reuters
Uber and Lyft are taking on healthcare, and drivers are just along for the ride | The Verge
Within the first week that Austin Correll was driving for Lyft in the fall of 2021, he was sent to pick up passengers at an address that turned out to be a hospital. When he pulled up to the curb, he found an elderly woman in a wheelchair and another with a walker, waiting for him — flanked by four or five nurses. He got out and talked to the nurses, who told him that the woman in the wheelchair had just had heart surgery and needed to go to assisted living. The woman with the walker was her daughter, and she also appeared to have some health problems, Correll says. Correll, who said he started driving for Lyft for a few months while he waited for the results of his bar exam, doesn’t have any medical training. He told The Verge he immediately felt unprepared for the responsibility of transporting these two women, who were supposed to go to a motel around two hours away. When the nurses then told him that, on arrival at the motel, he should call an ambulance to help move the passengers into their room, he grew even more uneasy. “The biggest thing I was worried about was, what if there was a medical emergency? This isn’t somebody who got their arm broken, got a cast, and needed to get home,” Correll says. “These are two people with severe medical issues.” Source: Uber and Lyft are taking on healthcare, and drivers are just along for the ride | The Verge
The Datafication of #MeToo: Whiteness, Racial Capitalism, and Anti-Violence Technologies | Big Data & Society
This article illustrates how racial capitalism can enhance understandings of data, capital, and inequality through an in-depth study of digital platforms used for intervening in gender-based violence. Specifically, we examine an emergent sociotechnical strategy that uses software platforms and artificial intelligence (AI) chatbots to offer users emergency assistance, education, and a means to report and build evidence against perpetrators. Our analysis details how two reporting apps construct data to support institutionally legible narratives of violence, highlighting overlooked racialised dimensions of the data capital generated through their use. We draw attention to how these apps reinforce property relations built on extraction and ownership, how capital accumulation compounds the benefits derived through such data property relations, and how they commodify diversity and inclusion. Recognising that these patterns are not unique to anti-violence apps, we reflect on how this example aids in understanding how racial capitalism becomes a constitutive element of digital platforms, which more generally extract information from users, rely on complex financial partnerships, and often sustain problematic relationships with the criminal legal system. We conclude with a discussion of how racial capitalism can advance scholarship at the intersections of data and power. Source: The Datafication of #MeToo: Whiteness, Racial Capitalism, and Anti-Violence Technologies | Big Data & Society
What Happens When an AI Knows How You Feel? | WIRED
In May 2021, Twitter, a platform notorious for abuse and hot-headedness, rolled out a “prompts” feature that suggests users think twice before sending a tweet. The following month, Facebook announced AI “conflict alerts” for groups, so that admins can take action where there may be “contentious or unhealthy conversations taking place.” Email and messaging smart-replies finish billions of sentences for us every day. Amazon’s Halo, launched in 2020, is a fitness band that monitors the tone of your voice. Wellness is no longer just the tracking of a heartbeat or the counting of steps, but the way we come across to those around us. Algorithmic therapeutic tools are being developed to predict and prevent negative behavior. Source: What Happens When an AI Knows How You Feel? | WIRED
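As an illustration of the general pattern behind features like Twitter's pre-send prompts, here is a minimal, hypothetical sketch (not any platform's actual implementation; the scorer, word list, and threshold are invented stand-ins) of a message pipeline that asks the user to reconsider before sending content flagged as potentially hostile:

```python
# Hypothetical sketch of a "think twice" pre-send prompt. A real system would
# use a trained toxicity classifier; here a crude keyword heuristic stands in.

def score_toxicity(text: str) -> float:
    """Return a 0-1 hostility score: fraction of words on a (made-up) hostile list."""
    hostile_words = {"idiot", "hate", "stupid"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in hostile_words for w in words) / max(len(words), 1)

def send_with_prompt(text: str, threshold: float = 0.2) -> bool:
    """Send the message, but ask for confirmation first if it scores as hostile."""
    if score_toxicity(text) >= threshold:
        answer = input("This may come across as hostile. Send anyway? [y/N] ")
        if answer.strip().lower() != "y":
            print("Draft kept, not sent.")
            return False
    print(f"Sent: {text}")
    return True

send_with_prompt("You are an idiot!")  # scores 0.25, so the confirmation prompt fires
```

A production system would replace the keyword heuristic with a trained classifier and tune the threshold against real engagement and harm data; the design question the article raises is what such scoring gets wrong about how people actually feel and speak.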