Throughout my two decades in Silicon Valley, I have seen effective altruism (EA) - a movement consisting of an overwhelmingly white male group based largely out of Oxford University and Silicon Valley - gain alarming levels of influence. EA is currently being scrutinized due to its association with Sam Bankman-Fried's crypto scandal, but less has been written about how the ideology is now driving the research agenda in the field of artificial intelligence (AI), creating a race to proliferate harmful systems, ironically in the name of "AI safety." [...] Source: Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’ | WIRED
Will ChatGPT Kill the Student Essay? | The Atlantic
Suppose you are a professor of pedagogy, and you assign an essay on learning styles. A student hands in an essay with the following opening paragraph: The construct of “learning styles” is problematic because it fails to account for the processes through which learning styles are shaped. Some students might develop a particular learning style because they have had particular experiences. Others might develop a particular learning style by trying to accommodate to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience. [...] Source: Will ChatGPT Kill the Student Essay? | The Atlantic
Why automating trucking is harder than you think | The Verge
As a bevy of classic Hollywood movies has shown, truck driving is an occupation intertwined with American ideals of freedom and machismo. But technology is threatening the trucker’s traditional independence. Seeking to reduce crashes, the federal government issued a mandate in 2017 that truckers use Electronic Logging Devices (ELDs) to record their driving hours, replacing pencil-and-paper logs that were easily fudged. Are ELDs unwarranted surveillance, or are they a vital safety technology? The answer depends on whom you ask, but pretty much everyone agrees that their adoption constrains the role of human drivers in an industry employing over 3 million people in the US. [...] Source: Why automating trucking is harder than you think – The Verge
‘We all we got’: How Black people online steered the spotlight to Shanquella Robinson’s death | The 19th
Shanquella Robinson’s death could have easily fallen through the cracks. In the first two weeks after the 25-year-old from North Carolina was pronounced dead during a group vacation to Cabo San Lucas, Mexico, her story was limited to a few local news reports. It appeared that her death would be treated like those of many other Black women and girls – with cursory, if any, attention from the news media. But then, video of a woman being beaten emerged, and the news of her death went viral. One tweet by North Carolina blogger Mina Lo with the words, “Rest in Power Shanquella Robinson” has garnered more than 50,000 likes and nearly 17,000 retweets. National news organizations, including CNN and the New York Times, have since picked up Robinson’s story, highlighting the power and potential of Black media platforms. From the killing of Lauren Smith-Fields last year to Robinson last month, Black people online have been a driving force behind elevating stories about missing and murdered Black women and girls in the absence of mainstream media. [...] Source: Shanquella Robinson’s death was falling through the cracks. Then came Black Twitter. – The 19th
Targeted Ads: The Infrastructure for Algorithmic Discrimination | Center for Critical Race + Digital Studies
This report summarizes the findings of a one-year study of online targeted advertisements. It highlights important questions and considerations regarding how online targeted ads uphold, produce, and recreate racially discriminatory infrastructures within everyday life. First, we propose a novel framework–algorithmic discrimination–which positions targeted ads as discriminatory infrastructures by design: namely, this conceptual and analytic tool situates the potential harms and risks of targeted ads in relation to a longer history of predatory processes, tactics, and classification schemas, especially within and against marginalized communities of color. Next, we discuss how this framework relates to our novel methodology for algorithmic discrimination audits, in light of ongoing discussions of algorithmic accountability and corporations’ seeming attempts to forestall such efforts. Focusing on third-party search data pertaining to queries for educational opportunities, employment, and housing, we use zip codes as a proxy for racial and sociodemographic data, to audit and assess trends in online ad targeting. We compare differences across and within neighborhoods in online targeting patterns; we also compare individual ad messaging content. In contrast to conventional audits, we argue that a sociohistorical and infrastructural approach elucidates the community-based harms and risks of targeted ad systems as well as the digital infrastructures targeted ads undergird and fuel. As such, this approach more aptly shows the longer-term impacts of targeted ads and how they re-instantiate–and amplify–legacies of racial inequality. We close with key questions and future directions for this exploratory framework and methodology, particularly considering ongoing concerns about tech regulation and policy, and the protection of vulnerable communities from further tech-driven exploitation and extraction.
Source: Targeted Ads: The Infrastructure for Algorithmic Discrimination — Center for Critical Race + Digital Studies
A People’s Guide to Finding Algorithmic Bias | Center for Critical Race + Digital Studies
A People’s Guide to Finding Algorithmic Bias: by all people, for all people, regardless of technical background. Source: A People’s Guide to Finding Algorithmic Bias — Center for Critical Race + Digital Studies
TikTok Content Moderators Pushed to the Brink in Colombia | Time
Content warning: This story contains description of extreme and disturbing violence, suicide, child abuse and cruelty to animals. Luis, a 28-year-old student from Colombia, works through the night moderating videos for TikTok. During the day, he tries to get some sleep, but sometimes the videos haunt his dreams. He remembers one video taken at a party, with two people holding what initially looked to him like pieces of meat. When they turned around, it appeared they were holding skin and gristle which had been flayed off human faces. “The worst thing was that the friends were playing games and started using the human faces as masks,” he says. [...] Source: TikTok Content Moderators Pushed to the Brink in Colombia | Time
Voting tech for people with disabilities has expanded — but more is still needed | Marketplace
There are an estimated 38 million disabled eligible voters in the U.S., but many of them face unique obstacles when trying to cast their ballots. Federal and state laws require that polling stations provide in-person accommodations, like machines with larger screen displays or text-to-speech interfaces inside voter booths. But individual polling places don’t always make it easy, says Mark Lindeman, policy and strategy director with the nonpartisan organization Verified Voting. “Sometimes the voting machines aren’t even set up,” says Lindeman, “or the poll workers lack basic training on how to support them.” That is why providing different, early voting options has helped boost turnout among disabled voters in recent elections. A study from Rutgers University and the Election Assistance Commission found that nearly 17.7 million disabled people voted in the 2020 election, compared to 16 million in the previous presidential election. Options like vote-by-mail and curbside drop-offs helped increase turnout. [...] Source: Voting tech for people with disabilities has expanded — but more is still needed – Marketplace
How Political Campaigns Use Your Phone’s Location to Target You | The Markup
Before you got in line on Election Day, the emerging and largely unregulated political tracking industry was able to trace your movements. As another election season draws to a close, political campaigns have learned a lot about many voters already, including when they previously voted, how much they make, what issues matter to them, and where they get their news. But campaigns this year also know where voters have been. The location of our phones is a powerful tool that campaigns are using to laser-target our attention. Dozens of companies stand ready to provide that data, offering new services specifically designed for political campaigns. They home in on your location using a variety of techniques, often starting with information publicly available from state voter files and then cross-referencing it with locations shared by your phone to indiscreet apps or snatched from ad networks as you surf the web. [...] Source: How Political Campaigns Use Your Phone’s Location to Target You – The Markup
Not My A.I.: Towards Critical Feminist Frameworks to Resist Oppressive A.I. Systems | The Carr Center for Human Rights
Amid the hype around A.I., we are observing a world where states are increasingly adopting algorithmic decision-making systems, together with narratives that portray them as a magic wand to “solve” social, economic, environmental, and political problems. But in practice, instead of delivering on that promise, the so-called Digital Welfare States are likely to be deploying oppressive algorithms that expand practices of surveillance of the poor and vulnerable; automate inequalities; are racist and patriarchal by design; further practices of digital colonialism, where data and mineral extractivism feed Big Tech businesses from the Global North; and reinforce neoliberal practices that progressively erode social security protections. While much has been discussed about “ethical,” “fair,” or “human-centered” A.I., particularly focused on transparency, accountability, and data protection, these approaches fail to address the overall picture. Source: Not My A.I.: Towards Critical Feminist Frameworks to Resist Oppressive A.I. Systems | The Carr Center for Human Rights – Harvard Kennedy School