The authors' results underscore the important role of the Protective Action Decision Model (PADM) in understanding Internet users' trust in and search for health-related online fake news (HOFN). When people trust HOFN, they may seek more information to implement further protective actions. Importantly, it appears that trust in HOFN varies with environmental cues (regional pandemic severity) and with individuals' perceived control, providing insight into developing coping strategies during a pandemic. Source: Health-related fake news during the COVID-19 pandemic: perceived trust and information search | Emerald Insight
Autumm Zellers-Leon
Impact of Information Communication Technology on labor productivity: A panel and cross-sectional analysis | ScienceDirect
This article examines the contribution of information and communications technologies (ICT) to labor productivity using a panel data approach. The study covers the period 2000–2015 for a complete dataset of 98 countries as well as for three selected groups: low-income, middle-income, and high-income countries. The findings imply that telephone subscriptions and broadband subscriptions have a significant impact on overall labor productivity as well as on labor productivity in the service sector. Because ICT affects labor productivity, investing in information and communication technology is necessary to increase it. Source: Impact of Information Communication Technology on labor productivity: A panel and cross-sectional analysis | ScienceDirect
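The study's panel-data logic can be sketched with a within (fixed-effects) estimator on synthetic country-year data. All variable names, coefficients, and noise levels below are invented for illustration; this is not the article's actual specification or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: 98 "countries" observed over 16 years (2000-2015),
# mirroring the study's setup. Assumed true model:
# productivity = 2.0*broadband + 0.5*telephone + country effect + noise.
n_countries, n_years = 98, 16
country_effect = rng.normal(0, 1, n_countries)
broadband = rng.normal(0, 1, (n_countries, n_years))
telephone = rng.normal(0, 1, (n_countries, n_years))
productivity = (2.0 * broadband + 0.5 * telephone
                + country_effect[:, None]
                + rng.normal(0, 0.1, (n_countries, n_years)))

# Within (fixed-effects) estimator: demeaning every variable by country
# sweeps out the time-invariant country effects; then run ordinary OLS.
def demean(x):
    return (x - x.mean(axis=1, keepdims=True)).ravel()

X = np.column_stack([demean(broadband), demean(telephone)])
y = demean(productivity)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta recovers the assumed coefficients, close to [2.0, 0.5]
```

The demeaning step is why panel data is useful here: unobserved, stable country characteristics (institutions, geography) cancel out, isolating the within-country association between ICT subscriptions and productivity.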
YouTube is major conduit of fake news, factcheckers say | The Guardian
YouTube is a major conduit of online disinformation and misinformation worldwide and is not doing enough to tackle the spread of falsehoods on its platform, according to a global coalition of factchecking organisations. A letter signed by more than 80 groups, including Full Fact in the UK and the Washington Post’s Fact Checker, says the video platform is hosting content by groups including Doctors for the Truth, which spread Covid misinformation, and videos supporting the “fraud” narrative during the US presidential election. “YouTube is allowing its platform to be weaponised by unscrupulous actors to manipulate and exploit others, and to organise and fundraise themselves. Current measures are proving insufficient,” states the letter to YouTube’s chief executive, Susan Wojcicki, which describes YouTube as a “major conduit” for falsehoods. Source: YouTube is major conduit of fake news, factcheckers say | The Guardian
Facebook Hosted Surge of Misinformation and Insurrection Threats in Months Leading Up to Jan. 6 Attack, Records Show | ProPublica
Facebook groups swelled with at least 650,000 posts attacking the legitimacy of Joe Biden’s victory between Election Day and the Jan. 6 siege of the U.S. Capitol, with many calling for executions or other political violence, an investigation by ProPublica and The Washington Post has found. The barrage — averaging at least 10,000 posts a day, a scale not reported previously — turned the groups into incubators for the baseless claims supporters of then-President Donald Trump voiced as they stormed the Capitol, demanding he get a second term. Many posts portrayed Biden’s election as the result of widespread fraud that required extraordinary action — including the use of force — to prevent the nation from falling into the hands of traitors. “LOOKS LIKE CIVIL WAR is BECOMING INEVITABLE !!!” read a post a month before the Capitol assault. “WE CANNOT ALLOW FRAUDULENT ELECTIONS TO STAND ! SILENT NO MORE MAJORITY MUST RISE UP NOW AND DEMAND BATTLEGROUND STATES NOT TO CERTIFY FRAUDULENT ELECTIONS NOW !” Source: Facebook Hosted Surge of Misinformation and Insurrection Threats in Months Leading Up to Jan. 6 Attack, Records Show | ProPublica
Chicago’s “Race-Neutral” Traffic Cameras Ticket Black and Latino Drivers the Most | ProPublica
When then-Mayor Richard M. Daley ushered in Chicago’s red-light cameras nearly two decades ago, he said they would help the city curb dangerous driving. “This is all about safety, safety of pedestrians, safety of other drivers, passengers, everyone,” he said. His successors echoed those sentiments as they expanded camera enforcement. “My goal is only one thing, the safety of our kids,” Rahm Emanuel said in 2011, as he lobbied for the introduction of speed cameras. And in 2020, Lori Lightfoot assured residents her expansion of the program was “about making sure that we keep communities safe.” But for all of their safety benefits, the hundreds of cameras that dot the city — and generate tens of millions of dollars a year for City Hall — have come at a steep cost for motorists from the city’s Black and Latino neighborhoods. A ProPublica analysis of millions of citations found that households in majority Black and Hispanic ZIP codes received tickets at around twice the rate of those in white areas between 2015 and 2019. Source: Chicago’s “Race-Neutral” Traffic Cameras Ticket Black and Latino Drivers the Most | ProPublica
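The per-household rate comparison ProPublica describes can be sketched as follows. Every number, ZIP code grouping, and label here is hypothetical, chosen only to show the shape of the calculation, not ProPublica's actual data.

```python
from collections import defaultdict

# Hypothetical ticket totals and household counts per ZIP, with a
# majority-race label per ZIP. All figures are illustrative.
tickets_by_zip = {"60619": 9000, "60629": 8000, "60614": 3000, "60657": 2500}
households = {"60619": 30000, "60629": 32000, "60614": 35000, "60657": 33000}
group = {"60619": "Black/Latino-majority", "60629": "Black/Latino-majority",
         "60614": "white-majority", "60657": "white-majority"}

# Aggregate tickets and households by group, then compare rates.
tickets = defaultdict(int)
homes = defaultdict(int)
for z, n in tickets_by_zip.items():
    tickets[group[z]] += n
for z, h in households.items():
    homes[group[z]] += h

rates = {g: tickets[g] / homes[g] for g in tickets}
ratio = rates["Black/Latino-majority"] / rates["white-majority"]
# ratio > 1 indicates a higher per-household ticketing rate in
# majority Black and Hispanic ZIP codes
```

The key design choice is normalizing by households rather than raw ticket counts, so the comparison reflects rates, not just population size.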
Digital phenotyping and the (data) shadow of Alzheimer’s disease | Big Data and Society
In this paper, we examine the practice and promises of digital phenotyping. We build on work on the ‘data self’ to focus on a medical domain in which the value and nature of knowledge and relations with data have been played out with particular persistence, that of Alzheimer's disease research. Drawing on qualitative research with researchers and developers, we consider the intersection of hopes and concerns related to both digital tools and Alzheimer's disease using the metaphor of the ‘data shadow’. We suggest that as a tool for engaging with the nature of the data self, the shadow is usefully able to capture both the dynamic and distorted nature of data representations, and the unease and concern associated with encounters between individuals or groups and data about them. We first consider what the data shadow ‘is’ in relation to ageing data subjects, and the nature of the representation of the individual's cognitive state and dementia risk that is produced by digital tools. Second, we consider what the data shadow ‘does’, through researchers and practitioners’ discussions of digital phenotyping practices in the dementia field as alternately empowering, enabling and threatening. Source: Digital phenotyping and the (data) shadow of Alzheimer’s disease | Big Data and Society
The Chilling Effects of Digital Dataveillance: A Theoretical Model and an Empirical Research Agenda | Big Data and Society
People's sense of being subject to digital dataveillance can cause them to restrict their digital communication behavior. Such a chilling effect is essentially a form of self-censorship in everyday digital media use with the attendant risks of undermining individual autonomy and well-being. This article combines the existing theoretical and limited empirical work on surveillance and chilling effects across fields with an analysis of novel data toward a research agenda. The institutional practice of dataveillance—the automated, continuous, and unspecific collection, retention, and analysis of digital traces—affects individual behavior. A mechanism-based causal model based on the theory of planned behavior is proposed for the micro level: An individual's increased sense of dataveillance causes their subjective probability assigned to negative outcomes of digital communication behavior to increase and attitudes toward this communication to become less favorable, ultimately decreasing the intention to engage in it. In aggregate and triggered through successive salience shocks such as data scandals, dataveillance is accordingly hypothesized to lower the baseline of free digital communication in a society through the chilling effects mechanism. From the developed theoretical model, a set of methodological consequences and questions for future studies are derived. Source: The Chilling Effects of Digital Dataveillance: A Theoretical Model and an Empirical Research Agenda | Big Data and Society
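The proposed micro-level mechanism is a causal chain: sense of dataveillance raises the subjective probability of negative outcomes, which worsens attitudes, which lowers the intention to communicate. A toy simulation of that chain is sketched below; all coefficients and variable names are illustrative assumptions, not estimates from the article.

```python
import numpy as np

# Hypothesized chilling-effects chain (illustrative coefficients):
# dataveillance sense -> perceived risk -> (less favorable) attitude
# -> intention to engage in digital communication.
rng = np.random.default_rng(42)
n = 10_000
dataveillance = rng.normal(0, 1, n)
perceived_risk = 0.6 * dataveillance + rng.normal(0, 1, n)
attitude = -0.5 * perceived_risk + rng.normal(0, 1, n)
intention = 0.7 * attitude + rng.normal(0, 1, n)

# The model predicts a negative total effect of dataveillance on
# intention, transmitted entirely through the two mediators.
total_effect = np.corrcoef(dataveillance, intention)[0, 1]
```

The point of the sketch is structural: the negative correlation between dataveillance and intention arises only through the mediating variables, which is what a mechanism-based model (as opposed to a direct-effect model) commits to.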
How a racialized disinformation campaign ties itself to The 1619 Project | Columbia Journalism Review
Footage of the January 6 Capitol insurrection revealed hundreds of references to 1776—in signs and in speeches, on t-shirts and hats and stickers. “1776” was chanted in the Capitol halls by leading figures within the so-called alt-right, including some who had also participated in the racist riot in Charlottesville, Virginia, and by those who believed themselves participants in the dawn of the next American revolution. The Proud Boys, too, cite this date; they sell their merch through a store called 1776. We are researchers of media manipulation and disinformation at the Harvard Kennedy School’s Shorenstein Center, and we wanted to know more about how “1776” became the battle cry of the insurrection. Our research reveals that the popularity of “1776” owes in part to keyword squatting—a tactic by which right-wing media have dominated the keywords “1619” and “critical race theory” and enabled a racialized disinformation campaign, waged by Trump and his acolytes, against Black civil rights gains. Source: How a racialized disinformation campaign ties itself to The 1619 Project | Columbia Journalism Review
Algorithmic reparation | Big Data and Society
Machine learning algorithms pervade contemporary society. They are integral to social institutions, inform processes of governance, and animate the mundane technologies of daily life. Consistently, the outcomes of machine learning reflect, reproduce, and amplify structural inequalities. The field of fair machine learning has emerged in response, developing mathematical techniques that increase fairness based on anti-classification, classification parity, and calibration standards. In practice, these computational correctives invariably fall short, operating from an algorithmic idealism that does not, and cannot, address systemic, intersectional stratifications. Taking present fair machine learning methods as our point of departure, we suggest instead the notion and practice of algorithmic reparation. Rooted in theories of intersectionality, reparative algorithms name, unmask, and undo allocative and representational harms as they materialize in sociotechnical form. We propose algorithmic reparation as a foundation for building, evaluating, adjusting, and when necessary, omitting and eradicating machine learning systems. Source: Algorithmic reparation | Big Data and Society
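The three fairness standards the abstract names can be made concrete with a toy check on synthetic data. The scores, labels, group attribute, and 0.5 threshold below are all invented for illustration; this is a sketch of what each standard measures, not the paper's method.

```python
import numpy as np

# Synthetic scores, binary group membership, and labels. Labels are
# drawn so that a score of s is positive with probability s, making
# the scores calibrated by construction.
rng = np.random.default_rng(1)
scores = rng.uniform(size=1000)
group = rng.integers(0, 2, size=1000)
labels = (rng.uniform(size=1000) < scores).astype(int)
preds = (scores >= 0.5).astype(int)

# Anti-classification: the protected attribute `group` is never an
# input to the scoring rule (enforced here by construction).

# Classification parity: positive-prediction rates should match
# across groups; the gap measures the violation.
parity_gap = abs(preds[group == 0].mean() - preds[group == 1].mean())

# Calibration: among cases scored near s, about a fraction s should
# be positive. Check one score bin as an example.
bin_mask = (scores >= 0.7) & (scores < 0.8)
calibration_error = abs(labels[bin_mask].mean() - scores[bin_mask].mean())
```

The abstract's critique is that satisfying metrics like these (the gaps above being small) is compatible with a model still producing allocative and representational harms, which is what motivates the reparative alternative.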
A Move for ‘Algorithmic Reparation’ Calls for Racial Justice in AI | WIRED
Supporters of algorithmic reparation suggest taking lessons from curation professionals such as librarians, who’ve had to consider how to ethically collect data about people and what should be included in libraries. They propose considering not just whether the performance of an AI model is deemed fair or good but whether it shifts power. The suggestions echo earlier recommendations by former Google AI researcher Timnit Gebru, who in a 2019 paper encouraged machine learning practitioners to consider how archivists and library sciences dealt with issues involving ethics, inclusivity, and power. Gebru, who says Google fired her in late 2020, recently launched a distributed AI research center. A critical analysis concluded that Google subjected Gebru to a pattern of abuse historically aimed at Black women in professional environments. Authors of that analysis also urged computer scientists to look for patterns in history and society in addition to data. Earlier this year, five US senators urged Google to hire an independent auditor to evaluate the impact of racism on Google’s products and workplace. Google did not respond to the letter. Source: A Move for ‘Algorithmic Reparation’ Calls for Racial Justice in AI | WIRED