In June 2021, Alita, a trans woman living in Saudi Arabia, saw a hashtag trending on Twitter that translated roughly to “a space for hatred of religion.” In a Twitter Spaces audio room days earlier, Alita, who asked Rest of World to use her screen name for her safety, had spoken frankly about atheism and her decision to leave Islam. A recording of the conversation started circulating on Twitter and the backlash was swift, resulting in the trending hashtag. In the days that followed, she received transphobic comments, death threats, and calls that she be arrested by Saudi authorities for what she had said. Apostasy — abandoning your religion — is punishable by death in the country, and atheists have been labeled terrorists by the government. “If it was up to them, the government would have arrested and prosecuted me by now. But thank goodness that my information is private and I’m not known everywhere by my real identity. That’s why I’m still safe,” Alita told Rest of World in a private Twitter Spaces room. She requested to speak there, rather than on encrypted messaging apps, because she said it’s where she feels safest sharing her experiences. [...] Source: Why Elon Musk’s ambition to have Twitter “authenticate all real humans” will get people killed | Rest of World
Analysis
Keys to Unlocking an Inclusive and Just Tech Future | Stanford Social Innovation Review
by Raymar Hampshire, Jessica Taketa, and Tayo Fabusuyi

Michael Odiari, a Dallas-based entrepreneur, seeks to revolutionize traffic stops. He was motivated by his own experience, which includes a history of being pulled over by police and, on one occasion, being forced to stare down the barrel of an officer’s gun. After surviving this harrowing, yet all too common, experience for African Americans, Odiari harnessed his skills as a developer and created Check, a technology-based solution that allows drivers to use their mobile devices to send documents like a driver's license, registration, and proof of insurance to an officer, while both parties sit in their vehicles. This solution could increase safety at scale and calm the angst Odiari and others feel during traffic stops. Check offers an example of the countless innovations that are possible when we invest in tech social entrepreneurs, or public interest technology (PIT) entrepreneurs, who represent a diversity of backgrounds and experiences. PIT entrepreneurs from historically marginalized communities have too often been passed over for funding, which stymies both equity and progress. Investing in PIT entrepreneurs who are marginalized based on race, identity, and class is an investment in innovative tech-based solutions to society’s most pressing personal and collective challenges, including the root causes of systemic inequality. Source: Keys to Unlocking an Inclusive and Just Tech Future | Stanford Social Innovation Review
Elon Musk and the oligarchs of the ‘Second Gilded Age’ can not only sway the public – they can exploit their data, too | The Conversation
by Nolan Higdon

During the Gilded Age of the late 19th century and the early decades of the 20th century, U.S. captains of industry such as William Randolph Hearst and Jay Gould used their massive wealth to dominate facets of the economy, including the news media. They were, in many ways, prototype oligarchs – by the dictionary definition, “very rich business leaders with a great deal of political influence.” Some have argued that the U.S. is in the midst of a Second Gilded Age defined – like the first – by vast wealth inequality, hyper-partisanship, xenophobia and a new crop of oligarchs using their vast wealth to purchase media and political influence. Which brings us to the announcement on April 25, 2022, that Tesla billionaire Elon Musk is, barring any last-minute hitches, purchasing the social media platform Twitter. The deal will put the wealthiest man on the planet in control of one of the most influential means of communication in the world today. [...] Source: Elon Musk and the oligarchs of the ‘Second Gilded Age’ can not only sway the public — they can exploit their data, too | The Conversation
From the Arab Spring to Russian censorship: a decade of internet blackouts and repression | Rest of World
by Peter Guest

Special operation and peace

On February 27, a few days after Russia invaded Ukraine, radio journalist Valerii Nechay returned to St. Petersburg from a trip to the North Caucasus to find three men in his apartment. Wearing masks to disguise their features, they told him that if he wanted his mother to be left unharmed, he should leave the country. They needn’t have bothered. Nechay already had a one-way ticket booked to Yerevan, the capital of Armenia. “It actually just helped me to pack my bags much quicker,” he said. From Armenia, he traveled on to Georgia and then on again. Rest of World agreed not to disclose his current location, out of concern for his safety. For nearly two decades, Nechay has worked for the radio station Echo of Moscow, which has broadcast political talk shows and news since 1990. Soon after the invasion of Ukraine began, the station was told, like all media in Russia, to stop calling the war a war. [...] Source: From the Arab Spring to Russian censorship: a decade of internet blackouts and repression | Rest of World
The 2020 Census Has Thousands of Errors Added on Purpose | The New York Times
by Michael Wines

WASHINGTON — Census Block 1002 in downtown Chicago is wedged between Michigan and Wabash Avenues, a glitzy Trump-branded hotel and a promenade of cafes and bars. According to the 2020 census, 14 people live there — 13 adults and one child. Also according to the 2020 census, they live underwater. Because the block consists entirely of a 700-foot bend in the Chicago River. If that sounds impossible, well, it is. The Census Bureau itself says the numbers for Block 1002 and tens of thousands of others are unreliable and should be ignored. And it should know: The bureau’s own computers moved those people there so they could not be traced to their real residences, all part of a sweeping new effort to preserve their privacy. That paradox is the crux of a debate rocking the Census Bureau. On the one hand, federal law mandates that census records remain private for 72 years. That guarantee has been crucial to persuading many people, including noncitizens and those from racial and ethnic minority groups, to voluntarily turn over personal information. On the other, thousands of entities — local governments, businesses, advocacy groups and more — have relied on the bureau’s goal of counting “every person, only once and in the right place” to inform countless demographic decisions, from drawing political maps to planning disaster response to placing bus stops. The 2020 census sunders that assumption. Now the bureau is saying that its legal mandate to shield census respondents’ identities means that some data from the smallest geographic areas it measures — census blocks, not to be confused with city blocks — must be looked at askance, or even disregarded. And consumers of that data are unhappy. The area within Block 1012 on the southeast side of Chicago is said to have one home…
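The deliberate errors the article describes can be sketched, in simplified form, with the classic Laplace mechanism from differential privacy: random noise is added to each small-area count before publication, so no individual record can be inferred. This is an illustrative toy, not the Census Bureau’s actual (and far more elaborate) TopDown algorithm; the function names and the epsilon value are assumptions for the sketch.

```python
import math
import random

rng = random.Random(2020)  # fixed seed so the sketch is reproducible

def laplace_noise(scale, rng=rng):
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count, epsilon=0.5):
    """Publish a block-level count with Laplace noise added.

    Smaller epsilon means stronger privacy but more distortion.
    Published counts are rounded and clipped at zero, which is why
    tiny blocks can end up with implausible values.
    """
    return max(0, round(true_count + laplace_noise(1.0 / epsilon)))

# Even a truly empty block -- say, a bend in a river -- can publish
# a small nonzero population once noise is applied.
print([noisy_count(0) for _ in range(10)])
```

Run enough times, the noise roughly cancels out at larger geographies, which is why the bureau considers state and county totals reliable while warning users off individual blocks.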
Are You Still Using Real Data to Train Your AI? | IEEE Spectrum
by Eliza Strickland

It may be counterintuitive. But some argue that the key to training AI systems that must work in messy real-world environments, such as self-driving cars and warehouse robots, is not, in fact, real-world data. Instead, some say, synthetic data is what will unlock the true potential of AI. Synthetic data is generated instead of collected, and the consultancy Gartner has estimated that 60 percent of data used to train AI systems will be synthetic. But its use is controversial, as questions remain about whether synthetic data can accurately mirror real-world data and prepare AI systems for real-world situations. Nvidia has embraced the synthetic data trend, and is striving to be a leader in the young industry. In November, Nvidia founder and CEO Jensen Huang announced the launch of the Omniverse Replicator, which Nvidia describes as “an engine for generating synthetic data with ground truth for training AI networks.” To find out what that means, IEEE Spectrum spoke with Rev Lebaredian, vice president of simulation technology and Omniverse engineering at Nvidia. Source: Are You Still Using Real Data to Train Your AI? | IEEE Spectrum
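“Synthetic data with ground truth” means the training examples come from a process you control, so every label is exact by construction rather than hand-annotated. A minimal toy sketch of that idea (nothing like Nvidia’s Omniverse Replicator, which simulates photorealistic 3D scenes; all names here are illustrative):

```python
import random

rng = random.Random(7)  # fixed seed for reproducibility

def synthetic_example():
    """Generate one labeled 2-D point from a known ('ground truth') process.

    Points above the line y = x are labeled 1, otherwise 0. Because we
    control the generating process, every label is exact -- no human
    annotation, and edge cases can be sampled at will.
    """
    x, y = rng.uniform(0, 1), rng.uniform(0, 1)
    return (x, y), (1 if y > x else 0)

# A "dataset" of any size costs only compute, not collection.
dataset = [synthetic_example() for _ in range(1000)]
print(dataset[:3])
```

The open question the article raises is whether such generated distributions match the messiness of real-world data closely enough for models trained on them to transfer.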
Artificial intelligence is creating a new colonial world order | MIT Technology Review
by Karen Hao

This story is the introduction to MIT Technology Review’s series on AI colonialism, which was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center. Read the full series here. My husband and I love to eat and to learn about history. So shortly after we married, we chose to honeymoon along the southern coast of Spain. The region, historically ruled by Greeks, Romans, Muslims, and Christians in turn, is famed for its stunning architecture and rich fusion of cuisines. In Barcelona especially, physical remnants of this past abound. The city is known for its Catalan modernism, an iconic aesthetic popularized by Antoni Gaudí, the mastermind behind the Sagrada Familia. The architectural movement was born in part from the investments of wealthy Spanish families who amassed riches from their colonial businesses and funneled the money into lavish mansions. Little did I know how much this personal trip would intersect with my reporting. Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor. I had already begun to investigate these claims when my husband and I began to journey through Seville, Córdoba, Granada, and Barcelona. As I simultaneously read The Costs of Connection, one of the foundational texts that first proposed a “data colonialism,” I realized that these cities were the birthplaces of European colonialism—cities through…
Misinformation vs. Disinformation: Here’s How to Tell the Difference | Reader’s Digest
by Laurie Budgar

If you’ve been having a hard time separating factual information from fake news, you’re not alone. Nearly eight in ten adults believe or are unsure about at least one false claim related to COVID-19, according to a report the Kaiser Family Foundation published late last year. Other areas where false information easily takes root include climate change, politics, and other health news. That’s why it’s crucial for you to be able to identify misinformation vs. disinformation. Those are the two forms false information can take, according to University of Washington professor Jevin West, who cofounded and directs the school’s Center for an Informed Public. As part of the University of Colorado’s 2022 Conference on World Affairs (CWA), he gave a seminar on the topic, noting that if we hope to combat misinformation and disinformation, we have to “treat those as two different beasts.” Understanding the difference between disinformation and misinformation is clearly imperative for researchers, journalists, policy consultants, and others who study or produce information for mass consumption. For the general public, “it’s more important not to share harmful information, period,” says Nancy Watzman, strategic advisor at First Draft, a nonpartisan, nonprofit coalition that works to protect communities from false information. But to avoid it, you need to know what it is. Keep reading to learn about misinformation vs. disinformation and how to identify them. Then arm yourself against online attacks aimed at harming you or stealing your identity by learning how to avoid doxxing, online scams, phone scams, and Amazon email scams. Source: Misinformation vs. Disinformation: Here’s How to Tell the Difference | Reader's Digest
Dementia content gets billions of views on TikTok. Whose story does it tell? | MIT Technology Review
by Abby Ohlheiser

“That’s a conversation that people with dementia have been having now for a while,” says Kate Swaffer, a cofounder of Dementia Alliance International, an advocacy group whose members all live with the condition. Swaffer was diagnosed with younger-onset semantic dementia in 2008, when she was 49. In some ways, these conversations echo ongoing discussions about “sharenting,” family vloggers, and parenting influencers. Kids who were once involuntary stars of their parents’ social media feeds grow up and have opinions about how they were portrayed. But adults with dementia are not children, and whereas children develop the ability to consent as they grow older, the capacity of adults with dementia will permanently diminish over time. Legally, a care partner or family member with power of attorney can consent on behalf of a person who is unable to do so. But advocates say this standard is not nearly enough to protect the rights and dignity of those living with later-stage dementia. Swaffer’s own standard is this: No one should share content about someone in those stages of dementia—whether on Facebook, in a photography exhibition, or on TikTok—if that person has not explicitly consented to it before losing the cognitive capacity to do so. Source: Dementia content gets billions of views on TikTok. Whose story does it tell? | MIT Technology Review
Connecting Race to Ethics Related to Technology: A Call for Critical Tech Ethics | IEEE Xplore
by Jenny Ungbha Korn

Abstract: Critical tech ethics is my call for action to influencers, leaders, policymakers, and educators to help move our society towards centering race, deliberately and intentionally, in tech ethics. For too long, when “ethics” is applied broadly across different kinds of technology, ethics does not address race explicitly, including how diverse forms of technologies have contributed to violence against and the marginalization of communities of color. Across several years of research, I have studied online behavior to evaluate gender and racial biases. I have concluded that a way to improve technologies, including the Internet, is to create a specific type of ethics termed “critical tech ethics” that connects race to ethics related to technology. This article covers guiding theories for discovering critical tech ethical challenges, contemporary examples for illustrating critical tech ethical challenges, and institutional changes across business, education, and civil society actors for teaching critical tech ethics and encouraging the integration of critical tech ethics with undergraduate computer science. Critical tech ethics has been developed with the imperative to help improve society through connecting race to ethics related to technology, so that we may reduce the racial injustices currently being propagated by educational institutions, technology corporations, and civil actors. My aim is to improve racial equity through the development of critical tech ethics as research, teaching, and practice in social norms, higher education, policy making, and civil society. Source: Connecting Race to Ethics Related to Technology: A Call for Critical Tech Ethics | IEEE Xplore