A Primer on AI in/from the Majority World is a curated collection of over 160 thematic works that serve as pathways to explore the presence of artificial intelligence and technology in the geographic regions that are home to the majority of the human population. Instead of assuming that knowledge and innovations move out of the so-called centers of Europe and the United States to the rest of the world, thinking from the “majority world” (a term coined by Bangladeshi photographer Shahidul Alam) means tracing emerging forms of knowledge, innovation, and labor in formerly and still-colonized spaces. “Majority world” defines a community in terms of what it has, rather than what it lacks. Source: A Primer on AI in/from the Majority World | Data & Society
Digital twins improve real-life manufacturing | MIT Technology Review
by MIT Technology Review Insights
Real-world data paired with digital simulations of products—digital twins—are providing valuable insights that are helping companies identify and resolve problems before prototypes go into production and manage products in the field, says Alberto Ferrari, senior director of the Model-Based Digital Thread Process Capability Center at Raytheon. “As they say, ‘All the models are wrong, but some of them are useful,’” Ferrari says. “Digital twins, supported with data—as real facts—are a way to identify models that are really useful for decision-making.” Source: Digital twins improve real-life manufacturing | MIT Technology Review
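To ground Ferrari's point, here is a minimal sketch of the digital-twin loop he describes: a simple model runs in step with streamed sensor readings, and sustained disagreement between prediction and measurement flags a problem worth investigating. Everything below (the BearingTwin model, its coefficients, and the readings) is invented for illustration and is not drawn from Raytheon's tooling.

```python
# Minimal digital-twin sketch: a model runs alongside real sensor data,
# and large residuals between the two flag a potential problem.
# All names and numbers here are illustrative, not from any real system.

from dataclasses import dataclass


@dataclass
class BearingTwin:
    """Toy thermal model of a bearing: temperature rises linearly with load."""
    ambient_c: float = 20.0
    heating_per_load: float = 0.8  # assumed model coefficient

    def predict_temp(self, load: float) -> float:
        return self.ambient_c + self.heating_per_load * load


def monitor(twin: BearingTwin, readings, tolerance_c: float = 5.0):
    """Compare each (load, measured_temp) reading against the twin's
    prediction and yield an alert when the residual exceeds tolerance."""
    for load, measured in readings:
        predicted = twin.predict_temp(load)
        residual = measured - predicted
        if abs(residual) > tolerance_c:
            yield (f"ALERT: load={load:.0f}, measured={measured:.1f}C, "
                   f"predicted={predicted:.1f}C (residual {residual:+.1f}C)")


if __name__ == "__main__":
    twin = BearingTwin()
    # Simulated field data: the last reading runs hotter than the model
    # allows, the kind of early warning a twin surfaces before a prototype
    # ships or a fielded unit fails.
    field_data = [(10, 28.1), (20, 36.5), (30, 44.2), (40, 63.0)]
    for alert in monitor(twin, field_data):
        print(alert)
```

The sketch is Ferrari's quote in miniature: the linear model is wrong about any real bearing, but paired with live data it is still useful for deciding when to intervene.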
Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade | Pew Research Center
by Lee Rainie, Janna Anderson and Emily A. Vogels
Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk. Corporations and governments are charging ever more expansively into AI development. Increasingly, nonprogrammers can set up off-the-shelf, pre-built AI tools as they prefer. As this unfolds, a number of experts and advocates around the world have become worried about the long-term impact and implications of AI applications. They have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will. Dozens of convenings and study groups have issued papers proposing what the tenets of ethical AI design should be, and government working teams have tried to address these issues. In light of this, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good? Source: Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade | Pew Research Center
Resisting the Menace of Face Recognition | Electronic Frontier Foundation
by Adam Schwartz
Face recognition technology is a special menace to privacy, racial justice, free expression, and information security. Our faces are unique identifiers, and most of us expose them everywhere we go. And unlike our passwords and identification numbers, we can’t get a new face. So, governments and businesses, often working in partnership, are increasingly using our faces to track our whereabouts, activities, and associations. Fortunately, people around the world are fighting back. A growing number of communities have banned government use of face recognition. As to business use, many communities are looking to a watershed Illinois statute, which requires businesses to get opt-in consent before extracting a person’s faceprint. In the hands of government and business alike, face recognition technology is a growing menace to our digital rights. But the future is unwritten. EFF is proud of its contributions to the movement to resist abuse of these technologies. Please join us in demanding a ban on government use of face recognition, and laws like Illinois’ BIPA to limit private use. Together, we can end this threat. Source: Resisting the Menace of Face Recognition | Electronic Frontier Foundation
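To make the Illinois statute's structure concrete, here is a hedged sketch of the opt-in gate it requires: the biometric step is simply unreachable unless consent has been recorded first. The ConsentRegistry and extract_faceprint names are invented for this illustration; no real vendor API or EFF tool is being described.

```python
# Sketch of BIPA-style opt-in gating: faceprint extraction refuses to run
# until explicit consent has been recorded for that person.
# The API here is invented for illustration; it mirrors the statute's
# consent-before-collection structure, not any real library.

from datetime import datetime, timezone


class ConsentRegistry:
    """Records who has opted in, and when."""

    def __init__(self):
        self._consents: dict[str, datetime] = {}

    def record_opt_in(self, person_id: str) -> None:
        self._consents[person_id] = datetime.now(timezone.utc)

    def has_consented(self, person_id: str) -> bool:
        return person_id in self._consents


def extract_faceprint(image_bytes: bytes, person_id: str,
                      registry: ConsentRegistry) -> bytes:
    """Refuse to compute a biometric identifier without prior opt-in."""
    if not registry.has_consented(person_id):
        raise PermissionError(
            f"No recorded opt-in for {person_id}; "
            "faceprint extraction blocked (BIPA-style gate)."
        )
    # Placeholder for an actual embedding model; a real system would
    # return a face embedding vector here.
    return b"<faceprint>"


if __name__ == "__main__":
    registry = ConsentRegistry()
    try:
        extract_faceprint(b"...", "alice", registry)
    except PermissionError as err:
        print(err)  # blocked: no consent on file
    registry.record_opt_in("alice")
    print(extract_faceprint(b"...", "alice", registry))  # allowed
```

The design point is that consent is checked at the line where the faceprint would be computed, not left to a policy document upstream.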
Governing Artificial Intelligence | Data & Society
by Mark Latonero
“A human rights-based frame could provide those developing AI with the aspirational, normative, and legal guidance to uphold human dignity and the inherent worth of every individual regardless of country or jurisdiction.” Latonero draws the connections between AI and human rights; reframes recent AI-related controversies through a human rights lens; and reviews current stakeholder efforts at the intersection of AI and human rights. This report is intended for stakeholders—such as technology companies, governments, intergovernmental organizations, civil society groups, academia, and the United Nations (UN) system—looking to incorporate human rights into social and organizational contexts related to the development and governance of AI. Source: Governing Artificial Intelligence | Data & Society
On the Clock and at Home: Post-COVID-19 Employee Monitoring in the Workplace | SHRM Executive Network
by Aiha Nguyen, Data & Society
The COVID-19 pandemic has put a spotlight on human resources professionals who have had to rapidly transition employees to working from home (WFH) while also protecting those still working in company facilities. These new conditions will likely continue even as businesses experiment with approaches to reopening. Employees and employers alike may feel the return to offices is too risky or may face external challenges such as finding suitable childcare. Many companies will let employees who’ve proven they can effectively work from home continue to do so. COVID-19 can render an entire workplace hazardous. This is reason enough for implementing a WFH policy when possible. But managing remote work cannot be left to supervisors’ understanding of productivity software. The decisions that company executives make now can have long-term consequences, build new practices and norms, change employee relationships with supervisors, impact workers’ sense of privacy and safety, and establish—for better or worse—the types of work that will be valued by the organization. Source: On the Clock and at Home: Post-COVID-19 Employee Monitoring in the Workplace | SHRM Executive Network
New Digital Infrastructures of Workplace Health and Safety | Centre for Media, Technology and Democracy
by Aiha Nguyen
In this essay series, Watching the Watchers: The New Frontier of Privacy and Surveillance under COVID-19, McGill’s Centre for Media, Technology and Democracy explores the policy, legal and ethical issues of (new) surveillance tactics in times of crisis. In the wake of the 2020 global pandemic, governments and corporations around the world are adopting unprecedented data-gathering practices to both stop the spread of COVID-19 and transition to safer and more economically stable futures. This essay series examines how public and private actors are using pandemic response technologies to capitalize on this extraordinary moment of upheaval. It convenes a diverse group of experts to examine the policy, legal, and ethical challenges posed by the use of tactics that surveil and control populations around the world. With a focus on wide-ranging topics such as cybersecurity, racial justice, and worker surveillance, among others, this series offers a roadmap as policymakers confront the privacy and human rights impacts of crises like the novel coronavirus in the years to come. Source: New Digital Infrastructures of Workplace Health and Safety | Centre for Media, Technology and Democracy
Technologies of Humility | The Carr Center for Human Rights – Harvard Kennedy School
by Justice Matters
How do science and technology affect rights, equity, and justice? When are techno-solutions inadequate in addressing societal problems? In this month's episode of Justice Matters, host Sushma Raman talks with Professor Sheila Jasanoff, a pioneer in the social sciences exploring the role of science and technology in the law, politics, and policy of modern democracies. Join them as they discuss "technologies of humility," and how we might build more participatory methods of public policy problem solving. Sheila Jasanoff is Pforzheimer Professor of Science and Technology Studies at the Harvard Kennedy School. Her books include The Fifth Branch, Science at the Bar, Designs on Nature, The Ethics of Invention, and Can Science Make Sense of Life? She founded and directs the STS Program at Harvard; previously, she was founding chair of the STS Department at Cornell. Source: Technologies of Humility | The Carr Center for Human Rights – Harvard Kennedy School
Social Media as Criminal Evidence: New Possibilities, Problems | American Sociological Association
by Jeffrey Lane and Fanny A. Ramirez
In recent years, police and prosecutors have used social media in a host of new ways to investigate and prosecute crimes. Social media, after all, contains a wealth of information—and misinformation—on individual users and their networks, and few laws restrict what law enforcement can do with social media data. As more social media evidence factors into criminal cases, new opportunities to solve crime and bring those responsible to justice emerge, along with questions about the fairness and reliability of such evidence. Social justice activists and victim advocates worry that social media content is being used against vulnerable groups, furthering the vilification and stigmatization of already marginalized individuals. In this piece, we discuss research on social media and the law in two types of criminal cases—gang cases and sexual assault cases—to highlight key issues at play in this digital turn in the criminal justice system. We also explore grievances within the legal field from public defenders concerned that social media companies have aligned with prosecutors and shut them out, thus placing them and their clients at a disadvantage in what is already an uneven playing field. These contexts point to the double-edged sword of social media use in criminal cases as it opens once-closed communication channels around criminal activity while functioning to support age-old stereotypes and disparities in court. Source: Social Media as Criminal Evidence: New Possibilities, Problems | American Sociological Association
Smart Cities, Bad Metaphors, and a Better Urban Future | WIRED
Maybe it’s a cliché—I think I’ve used it myself—to say that scientists’ and philosophers’ explanations for how the brain works tend to metaphorically track the most advanced technology of their time. Greek writers thought brains worked like hydraulic water clocks. European writers in the Middle Ages suggested that thoughts operated through gear-like mechanisms. In the 19th century the brain was like a telegraph; a few decades later, it was more like a telephone network. Shortly after that, no surprise, people thought the brain worked like a digital computer, and that maybe they could build computers that work like the brain, or talk to it. Not easy, since, metaphors aside, nobody really knows how the brain works. Science can be exciting like that. Source: Smart Cities, Bad Metaphors, and a Better Urban Future | WIRED