Abstract, futuristic or science-fiction-inspired images of AI hinder the understanding of the technology’s already significant societal and environmental impacts. Images relating machine intelligence to human intelligence set unrealistic expectations and misstate the capabilities of AI. Images representing AI as sentient robots mask the accountability of the humans actually developing the technology, and can suggest the presence of robots where there are none. Such images potentially sow fear, and research shows they can be laden with historical assumptions about gender, ethnicity and religion. However, finding alternatives can be difficult! That’s why we, a non-profit collaboration, are researching, creating, curating and providing Better Images of AI. [...] Source: Better Images of AI
Regulatory capture’s third face of power | Socio-Economic Review
The term ‘regulatory capture’ is frequently invoked to describe dysfunctional government institutions. In its casual use, it refers to a phenomenon in which regulations benefit regulated industries rather than public interests. As an analytical concept, however, capture is hard to pin down: social scientists have struggled to empirically identify and define the processes by which it emerges and is sustained. In this article, I outline a cultural framework for regulatory capture by linking cultural sociology and the faces of power to existing capture theory. Through an ethnographic case study of digital trade provisions in international trade agreements, I show how capture occurs through the construction and manipulation of ‘public interests’. I trace how capture (a) emerges when industry lobbyists extend existing schemas of a policy network into new frames and (b) is institutionalized into regulatory agencies when policymakers adopt and enact these frames into knowledge production and law. Thus, capture appears through a veneer of consensus, which suppresses alternative interests and policy outcomes. [...] Source: Regulatory capture’s third face of power
Developers Created AI to Generate Police Sketches. Experts Are Horrified | Vice
Two developers have used OpenAI’s DALL-E 2 image generation model to create a forensic sketch program that can generate “hyper-realistic” police sketches of a suspect based on user inputs. The program, called Forensic Sketch AI-rtist, was created by developers Artur Fortunato and Filipe Reynaud as part of a hackathon in December 2022. The developers wrote that the program’s purpose is to cut down the time it usually takes to draw a suspect of a crime, which is “around two to three hours,” according to a presentation uploaded to the internet. “We haven’t released the product yet, so we don’t have any active users at the moment,” Fortunato and Reynaud told Motherboard in a joint email. “At this stage, we are still trying to validate if this project would be viable to use in a real world scenario or not. For this, we’re planning on reaching out to police departments in order to have input data that we can test this on.” AI ethicists and researchers told Motherboard that the use of generative AI in police forensics is incredibly dangerous, with the potential to worsen existing racial and gender biases that appear in initial witness descriptions. [...] Source: Developers Created AI to Generate Police Sketches. Experts Are Horrified
OpenAI’s Whisper is another case study in Colonisation | Papa Reo
On 21 September OpenAI dropped Whisper, a speech recognition model trained on 680,000 hours of audio taken from the web. The highlight: it enables transcription in multiple languages, as well as translation from those languages into English. As OpenAI put it: “We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing.” The ability for a single model to transcribe in multiple languages is ground-breaking for natural language processing (NLP) technologies. With such bold statements, why didn’t we hear more about Whisper from news outlets or social media? Twitter was instead dominated by critiques of Stable Diffusion and other generative art models for infringing on copyright and appropriating the work of artists. Even big-name late-night hosts covered these models: John Oliver’s foray into generative AI ultimately led him to marry a cabbage, while Trevor Noah’s more serious interview with OpenAI’s CTO Mira Murati presented mostly positive outcomes from the technology, with no critical discussion of the potential harm it could cause, and with statements that images created with DALL-E were indeed art, created not by a brush but by a digital tool. [...] Source: OpenAI’s Whisper is another case study in Colonisation
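For readers unfamiliar with what the “models and inference code” in the excerpt look like in practice, here is a minimal sketch using the open-source whisper Python package; the checkpoint name and the audio file path are illustrative assumptions, not details from the article.

```python
# Minimal sketch using the open-source `whisper` package (pip install openai-whisper).
# The "base" checkpoint and "audio.mp3" path are placeholders, not from the article.
import whisper

model = whisper.load_model("base")  # smaller checkpoint; larger multilingual ones exist

# Transcribe the audio in whatever language Whisper detects.
transcription = model.transcribe("audio.mp3")
print(transcription["language"], transcription["text"])

# Translate the same audio into English, the capability the Papa Reo piece critiques.
translation = model.transcribe("audio.mp3", task="translate")
print(translation["text"])
```

The one-line jump from speech in any supported language to English text is precisely the design choice the article goes on to interrogate.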
Restraining protest surveillance: When should surveillance of protesters become unlawful? | Privacy International
For years, PI has been fighting against police use of intrusive and disproportionate surveillance technologies at protests around the world. [...] Source: Restraining protest surveillance: When should surveillance of protesters become unlawful?
Don’t You Be My Neighbour | Rights Back At You
Rowa Mohamed showed up to support her neighbours at an encampment eviction and was injured by police during the protest. Her experience of violence is not unusual: Black Muslim women are often treated with suspicion, like they don’t belong. What happens when people “fight crime” with home surveillance technology and treat their own neighbours as suspects? [...] Source: 3. Don’t You Be My Neighbour | Rights Back At You
Eric Schmidt Is Building the Perfect AI War-Fighting Machine | WIRED
Schmidt became CEO of Google in 2001, when the search engine had a few hundred employees and was barely making money. He stepped away from Alphabet in 2017 after building a sprawling, highly profitable company with a stacked portfolio of projects, including cutting-edge artificial intelligence, self-driving cars, and quantum computers. Schmidt now sees another opportunity for technological reinvention to lead to domination, this time for the US government in competition with other world powers. He may be uniquely well positioned to understand what the Pentagon needs to reach its technological goals and to help the agency obtain it. But his ties to industry raise questions about how the US should aim to align the government and the private sector. And while US military power has long depended on advances in technology, some fear that military AI can create new risks. [...] Source: Eric Schmidt Is Building the Perfect AI War-Fighting Machine
Chinese Censorship and Surveillance in a Moment of Unrest | Tech Policy Press
Last week, the Chinese government under President Xi Jinping took steps to finally move away from its zero-COVID policy, following two weeks of protests in multiple cities. The unrest and anti-government sentiment were perhaps the most pronounced since the 1989 Tiananmen Square crackdown. And while these events gave Western observers an opportunity to grapple with the complexity of Chinese politics, generational and regional differences in the views of the population, and ultimately how the authoritarian government responds to public pressure, they also gave us a chance to see how the Chinese censorship and surveillance apparatus operates. [...] Source: Chinese Censorship and Surveillance in a Moment of Unrest
A Hacked Newsroom Brings a Spyware Maker to U.S. Court | The New Yorker
NSO Group’s business is founded on secrecy; it has refused to publicly identify its clients. In a statement, the company said it sells its software only to “legitimate government agencies” for use in state intelligence and law-enforcement efforts, and maintained that its tools “have proven to save thousands of lives around the world.” It claimed that the firm “cannot know who the targets of its customers are.” Yet it cites its own “rigorous and unique compliance policies” and says it has “terminated contracts when misuse was found.” Many of the Salvadoran journalists who were hacked told me that they believe that whoever deployed Pegasus against them is connected to the Bukele regime. Citizen Lab said that its findings point to the existence of an NSO client operating Pegasus in El Salvador, and reporters were often hacked as they worked on stories of importance to the Bukele regime. “We analyzed the exact time line,” Herrero, the Access Now investigator, recalled. “If somebody was reporting on corruption, then, boom, they got hacked seven days a week.” Carlos Martínez, an El Faro reporter and the brother of Óscar Martínez, the executive editor, told me, “It’s very clear for us that the Bukele government is trying to stop us, to stop our job and to destroy us as individuals and as an organization.” [...] Source: A Hacked Newsroom Brings a Spyware Maker to U.S. Court | The New Yorker
“Out Of Control”: Dozens of Telehealth Startups Sent Sensitive Health Information to Big Tech Companies | The Markup
Open the website of WorkIt Health, and the path to treatment starts with a simple intake form: Are you in danger of harming yourself or others? If not, what’s your current opioid and alcohol use? How much methadone do you use? Within minutes, patients looking for online treatment for opioid use and other addictions can complete the assessment and book a video visit with a provider licensed to prescribe Suboxone and other drugs. But what patients probably don’t know is that WorkIt was sending their delicate, even intimate, answers about drug use and self-harm to Facebook. A joint investigation by STAT and The Markup of 50 direct-to-consumer telehealth companies like WorkIt found that quick, online access to medications often comes with a hidden cost for patients: Virtual care websites were leaking sensitive medical information they collect to the world’s largest advertising platforms. [...] Source: “Out Of Control”: Dozens of Telehealth Startups Sent Sensitive Health Information to Big Tech Companies – The Markup