Digital phenotyping and the (data) shadow of Alzheimer's disease | Big Data and Society
In this paper, we examine the practice and promises of digital phenotyping. We build on work on the ‘data self’ to focus on a medical domain in which the value and nature of knowledge and relations with data have been played out with particular persistence: that of Alzheimer's disease research. Drawing on research with researchers and developers, we consider the intersection of hopes and concerns related to both digital tools and Alzheimer's disease through the metaphor of the ‘data shadow’. We suggest that, as a tool for engaging with the nature of the data self, the shadow usefully captures both the dynamic and distorted nature of data representations, and the unease and concern associated with encounters between individuals or groups and data about them. We then consider, first, what the data shadow ‘is’ in relation to ageing data subjects, and the nature of the representation of the individual's cognitive state and dementia risk produced by digital tools. Second, we consider what the data shadow ‘does’, through researchers’ and practitioners’ discussions of digital phenotyping practices in the dementia field as alternately empowering, enabling and threatening. Source: Digital phenotyping and the (data) shadow of Alzheimer’s disease | Big Data and Society
The Chilling Effects of Digital Dataveillance: A Theoretical Model and an Empirical Research Agenda | Big Data and Society
People's sense of being subject to digital dataveillance can cause them to restrict their digital communication behavior. Such a chilling effect is essentially a form of self-censorship in everyday digital media use with the attendant risks of undermining individual autonomy and well-being. This article combines the existing theoretical and limited empirical work on surveillance and chilling effects across fields with an analysis of novel data toward a research agenda. The institutional practice of dataveillance—the automated, continuous, and unspecific collection, retention, and analysis of digital traces—affects individual behavior. A mechanism-based causal model based on the theory of planned behavior is proposed for the micro level: An individual's increased sense of dataveillance causes their subjective probability assigned to negative outcomes of digital communication behavior to increase and attitudes toward this communication to become less favorable, ultimately decreasing the intention to engage in it. In aggregate and triggered through successive salience shocks such as data scandals, dataveillance is accordingly hypothesized to lower the baseline of free digital communication in a society through the chilling effects mechanism. From the developed theoretical model, a set of methodological consequences and questions for future studies are derived. Source: The Chilling Effects of Digital Dataveillance: A Theoretical Model and an Empirical Research Agenda | Big Data and Society
Algorithmic reparation | Big Data and Society
Machine learning algorithms pervade contemporary society. They are integral to social institutions, inform processes of governance, and animate the mundane technologies of daily life. Consistently, the outcomes of machine learning reflect, reproduce, and amplify structural inequalities. The field of fair machine learning has emerged in response, developing mathematical techniques that increase fairness based on anti-classification, classification parity, and calibration standards. In practice, these computational correctives invariably fall short, operating from an algorithmic idealism that does not, and cannot, address systemic, Intersectional stratifications. Taking present fair machine learning methods as our point of departure, we suggest instead the notion and practice of algorithmic reparation. Rooted in theories of Intersectionality, reparative algorithms name, unmask, and undo allocative and representational harms as they materialize in sociotechnical form. We propose algorithmic reparation as a foundation for building, evaluating, adjusting, and when necessary, omitting and eradicating machine learning systems. Source: Algorithmic reparation | Big Data and Society
Social Media Collective Internships | Microsoft Research
Microsoft Research New England and New York, part of the global network of Microsoft Research Labs, are looking for advanced PhD students to join the Social Media Collective (SMC) for its 12-week internship program. The Social Media Collective brings together empirical and critical perspectives to understand the political and cultural dynamics that underpin social media technologies. [...] This year we have several internships available. Please follow the correct link, read the descriptions carefully, include all the required documents with your application, and be sure to indicate in your letter of interest which internship you are applying for. (For 2022, these will be remote internships.)
Social Media Collective summer internship (two positions available) – primary mentors: Nancy Baym and Tarleton Gillespie, MSR New England. Application deadline: January 11
Race, Tech and the Future of Work – primary mentor: Nancy Baym, MSR New England. Application deadline: January 11 (NOTE: same link as SMC internship)
Sociotechnical Infrastructures – primary mentor: danah boyd, MSR New York. Application deadline: January 11
Self-expression, impression management and social capital in organizational contexts – primary mentors: Nancy Baym, MSR New England and the Yammer Research Team. Application deadline: January 11
Asynchronous collaboration – primary mentor: Nancy Baym, MSR New England. Application deadline: January 18
Source: Social Media Collective Internships – Microsoft Research
Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade | Pew Research Center
Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk. Corporations and governments are charging ever more expansively into AI development. Increasingly, nonprogrammers can set up off-the-shelf, pre-built AI tools as they prefer. As this unfolds, a number of experts and advocates around the world have become worried about the long-term impact and implications of AI applications. They have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will. Dozens of convenings and study groups have issued papers proposing what the tenets of ethical AI design should be, and government working teams have tried to address these issues. In light of this, Pew Research Center and Elon University’s Imagining the Internet Center asked experts where they thought efforts aimed at creating ethical artificial intelligence would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good? Source: Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade | Pew Research Center
Governing Artificial Intelligence | Data & Society
“A human rights-based frame could provide those developing AI with the aspirational, normative, and legal guidance to uphold human dignity and the inherent worth of every individual regardless of country or jurisdiction.” Latonero draws the connections between AI and human rights; reframes recent AI-related controversies through a human rights lens; and reviews current stakeholder efforts at the intersection of AI and human rights. This report is intended for stakeholders–such as technology companies, governments, intergovernmental organizations, civil society groups, academia, and the United Nations (UN) system–looking to incorporate human rights into social and organizational contexts related to the development and governance of AI. Source: Governing Artificial Intelligence
On the Clock and at Home: Post-COVID-19 Employee Monitoring in the Workplace | SHRM Executive Network
The COVID-19 pandemic has put a spotlight on human resources professionals who have had to rapidly transition employees to working from home (WFH) while also protecting those still working in company facilities. These new conditions will likely continue even as businesses experiment with approaches to reopening. Employees and employers alike may feel the return to offices is too risky or may face external challenges such as finding suitable childcare. Many companies will let employees who’ve proven they can effectively work from home continue to do so. COVID-19 can render an entire workplace hazardous. This is reason enough for implementing a WFH policy when possible. But managing remote work cannot be left to supervisors’ understanding of productivity software. The decisions that company executives make now can have long-term consequences, build new practices and norms, change employee relationships with supervisors, impact workers’ sense of privacy and safety, and establish—for better or worse—the types of work that will be valued by the organization. Source: On the Clock and at Home: Post-COVID-19 Employee Monitoring in the Workplace
New Digital Infrastructures of Workplace Health and Safety | Centre for Media, Technology and Democracy
In this essay series, Watching the Watchers: The New Frontier of Privacy and Surveillance under COVID-19, McGill’s Centre for Media, Technology and Democracy explores the policy, legal and ethical issues of (new) surveillance tactics in times of crisis. In the wake of the 2020 global pandemic, governments and corporations around the world are adopting unprecedented data-gathering practices to both stop the spread of COVID-19 and transition to safer and more economically stable futures. This essay series examines how public and private actors are using pandemic response technologies to capitalize on this extraordinary moment of upheaval. It convenes a diverse group of experts to examine the policy, legal, and ethical challenges posed by the use of tactics that surveil and control populations around the world. With a focus on wide-ranging topics such as cybersecurity, racial justice, and worker surveillance, among others, this series offers a roadmap as policymakers confront the privacy and human rights impacts of crises like the novel coronavirus in the years to come. Source: New Digital Infrastructures of Workplace Health and Safety — Centre for Media, Technology and Democracy
Technologies of Humility | The Carr Center for Human Rights – Harvard Kennedy School
How do science and technology affect rights, equity, and justice? When are techno-solutions inadequate in addressing societal problems? In this month's episode of Justice Matters, host Sushma Raman talks with Professor Sheila Jasanoff, a pioneer in the social sciences exploring the role of science and technology in the law, politics, and policy of modern democracies. Join them as they discuss "technologies of humility," and how we might build more participatory methods of public policy problem solving. Sheila Jasanoff is Pforzheimer Professor of Science and Technology Studies at the Harvard Kennedy School. Her books include The Fifth Branch, Science at the Bar, Designs on Nature, The Ethics of Invention, and Can Science Make Sense of Life? She founded and directs the STS Program at Harvard; previously, she was founding chair of the STS Department at Cornell. Source: Technologies of Humility | The Carr Center for Human Rights – Harvard Kennedy School
Social Media as Criminal Evidence: New Possibilities, Problems | American Sociological Association
In recent years, police and prosecutors have implemented social media in a host of new ways to investigate and prosecute crimes. Social media, after all, contains a wealth of information—and misinformation—on individual users and their networks, and few laws restrict what law enforcement can do with social media data. As more social media evidence factors into criminal cases, new opportunities to solve crime and bring those responsible to justice emerge, along with questions about the fairness and reliability of such evidence. Social justice activists and victim advocates worry that social media content is being used against vulnerable groups, furthering the vilification and stigmatization of already marginalized individuals. In this piece, we discuss research on social media and the law in two types of criminal cases—gang cases and sexual assault cases—to highlight key issues at play in this digital turn in the criminal justice system. We also explore grievances within the legal field from public defenders concerned that social media companies have aligned with prosecutors and shut them out, thus placing them and their clients at a disadvantage in what is already an unbalanced playing field. These contexts point to the double-edged sword of social media use in criminal cases, as it opens once-closed communication channels around criminal activity while functioning to support age-old stereotypes and disparities in court. Source: Social Media as Criminal Evidence: New Possibilities, Problems | American Sociological Association