It may be counterintuitive, but some argue that the key to training AI systems that must work in messy real-world environments, such as self-driving cars and warehouse robots, is not, in fact, real-world data. Instead, some say, synthetic data is what will unlock the true potential of AI. Synthetic data is generated rather than collected, and the consultancy Gartner has estimated that 60 percent of the data used to train AI systems will be synthetic. But its use is controversial, as questions remain about whether synthetic data can accurately mirror real-world data and prepare AI systems for real-world situations. Nvidia has embraced the synthetic data trend and is striving to be a leader in the young industry. In November, Nvidia founder and CEO Jensen Huang announced the launch of the Omniverse Replicator, which Nvidia describes as “an engine for generating synthetic data with ground truth for training AI networks.” To find out what that means, IEEE Spectrum spoke with Rev Lebaredian, vice president of simulation technology and Omniverse engineering at Nvidia. Source: Are You Still Using Real Data to Train Your AI? | IEEE Spectrum
Artificial intelligence is creating a new colonial world order | MIT Technology Review
This story is the introduction to MIT Technology Review’s series on AI colonialism, which was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center. Read the full series here. My husband and I love to eat and to learn about history. So shortly after we married, we chose to honeymoon along the southern coast of Spain. The region, historically ruled by Greeks, Romans, Muslims, and Christians in turn, is famed for its stunning architecture and rich fusion of cuisines. In Barcelona especially, physical remnants of this past abound. The city is known for its Catalan modernism, an iconic aesthetic popularized by Antoni Gaudí, the mastermind behind the Sagrada Familia. The architectural movement was born in part from the investments of wealthy Spanish families who amassed riches from their colonial businesses and funneled the money into lavish mansions. Little did I know how much this personal trip would intersect with my reporting. Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor. I had already begun to investigate these claims when my husband and I began to journey through Seville, Córdoba, Granada, and Barcelona. As I simultaneously read The Costs of Connection, one of the foundational texts that first proposed a “data colonialism,” I realized that these cities were the birthplaces of European colonialism—cities through…
Misinformation vs. Disinformation: Here’s How to Tell the Difference | Reader’s Digest
If you’ve been having a hard time separating factual information from fake news, you’re not alone. Nearly eight in ten adults believe or are unsure about at least one false claim related to COVID-19, according to a report the Kaiser Family Foundation published late last year. Other areas where false information easily takes root include climate change, politics, and other health news. That’s why it’s crucial for you to be able to identify misinformation vs. disinformation. Those are the two forms false information can take, according to University of Washington professor Jevin West, who cofounded and directs the school’s Center for an Informed Public. As part of the University of Colorado’s 2022 Conference on World Affairs (CWA), he gave a seminar on the topic, noting that if we hope to combat misinformation and disinformation, we have to “treat those as two different beasts.” The difference between disinformation and misinformation is clearly imperative for researchers, journalists, policy consultants, and others who study or produce information for mass consumption. For the general public, “it’s more important not to share harmful information, period,” says Nancy Watzman, strategic advisor at First Draft, a nonpartisan, nonprofit coalition that works to protect communities from false information. But to avoid it, you need to know what it is. Keep reading to learn about misinformation vs. disinformation and how to identify them. Then arm yourself against online attacks aimed at harming you or stealing your identity by learning how to avoid doxxing, online scams, phone scams, and Amazon email scams. Source: Misinformation vs. Disinformation: Here’s How to Tell the Difference | Reader's Digest
New Amazon Worker Chat App Would Ban Words Like “Union” | The Intercept
Amazon will block and flag employee posts on a planned internal messaging app that contain keywords pertaining to labor unions, according to internal company documents reviewed by The Intercept. An automatic word monitor would also block a variety of terms that could represent potential critiques of Amazon’s working conditions, like “slave labor,” “prison,” and “plantation,” as well as “restrooms” — presumably related to reports of Amazon employees relieving themselves in bottles to meet punishing quotas. “Our teams are always thinking about new ways to help employees engage with each other,” said Amazon spokesperson Barbara M. Agrait. “This particular program has not been approved yet and may change significantly or even never launch at all.” In November 2021, Amazon convened a high-level meeting in which top executives discussed plans to create an internal social media program that would let employees recognize co-workers’ performance with posts called “Shout-Outs,” according to a source with direct knowledge. The major goal of the program, Amazon’s head of worldwide consumer business, Dave Clark, said, was to reduce employee attrition by fostering happiness among workers — and also productivity. Shout-Outs would be part of a gamified rewards system in which employees are awarded virtual stars and badges for activities that “add direct business value,” documents state. At the meeting, Clark remarked that “some people are insane star collectors.” But company officials also warned of what they called “the dark side of social media” and decided to actively monitor posts in order to ensure a “positive community.” At the meeting, Clark suggested that the program should resemble an online dating app like Bumble, which allows individuals to engage one on one, rather than a more forum-like platform like Facebook. Source: New Amazon Worker Chat App Would Ban Words Like “Union” | The Intercept
The giant plan to track diversity in research journals | Nature
In the next year, researchers should expect to face a sensitive set of questions whenever they send their papers to journals, and when they review or edit manuscripts. More than 50 publishers representing over 15,000 journals globally are preparing to ask scientists about their race or ethnicity — as well as their gender — in an initiative that’s part of a growing effort to analyse researcher diversity around the world. Publishers say that this information, gathered and stored securely, will help to analyse who is represented in journals, and to identify whether there are biases in editing or review that sway which findings get published. Pilot testing suggests that many scientists support the idea, although not all. The effort comes amid a push for a wider acknowledgement of racism and structural racism in science and publishing — and the need to gather more information about it. In any one country, such as the United States, ample data show that minority groups are under-represented in science, particularly at senior levels. But data on how such imbalances are reflected — or intensified — in research journals are scarce. Publishers haven’t systematically looked, in part because journals are international and there has been no measurement framework for race and ethnicity that made sense to researchers of many cultures. Source: The giant plan to track diversity in research journals | Nature
Facial recognition firm Clearview AI tells investors it’s seeking massive expansion beyond law enforcement | The Washington Post
The facial recognition company Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure “almost everyone in the world will be identifiable,” according to a financial presentation from December obtained by The Washington Post. Those images — equivalent to 14 photos for each of the 7 billion people on Earth — would help power a surveillance system that has been used for arrests and criminal investigations by thousands of law enforcement and government agencies around the world. And the company wants to expand beyond scanning faces for the police, saying in the presentation that it could monitor “gig economy” workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar. Source: Facial recognition firm Clearview AI tells investors it’s seeking massive expansion beyond law enforcement | The Washington Post
Uber and Lyft are taking on healthcare, and drivers are just along for the ride | The Verge
Within the first week that Austin Correll was driving for Lyft in the fall of 2021, he was sent to pick up passengers at an address that turned out to be for a hospital. When he pulled up to the curb, he found an elderly woman in a wheelchair and another with a walker, waiting for him — flanked by four or five nurses. He got out and talked to the nurses, who told him that the woman in the wheelchair had just had heart surgery and needed to go to assisted living. The woman with the walker was her daughter, and she also appeared to have some health problems, Correll says. Correll, who said he started working for Lyft for a few months while he waited for the results of his bar exam, doesn’t have any medical training. He told The Verge he immediately felt unprepared for the responsibility of transporting these two women, who were supposed to go to a motel around two hours away. When the nurses then told him that, on arrival at the motel, he should call an ambulance to help move the passengers into their room, he grew even more uneasy. “The biggest thing I was worried about was, what if there was a medical emergency? This isn’t somebody who got their arm broken, got a cast, and needed to get home,” Correll says. “These are two people with severe medical issues.” Source: Uber and Lyft are taking on healthcare, and drivers are just along for the ride | The Verge
My journey down the rabbit hole of every journalist’s favorite app | POLITICO
So when I talked to Aksu in November, I made sure to use Signal, an encrypted phone app, to protect our discussion about psychological trauma afflicting Uyghurs overseas. The next day, I received an odd note from Otter.ai, the automated transcription app that I had used to record the interview. It read: “Hey Phelim, to help us improve your Otter’s experience, what was the purpose of this particular recording with titled ‘Mustafa Aksu’ created at ‘2021-11-08 11:02:41’?” Three responses were offered: “Personal transcription,” “Meeting or group collaboration,” and “Other.” I froze. Was this a phishing attack? Was Otter or some entity that had access to Otter’s servers spying on my conversations? I contacted Otter to verify if this was indeed a real survey or some clever phishing ruse. An initial confirmation that the survey was legitimate was followed by a denial from the same Otter representative, laced with a warning that I “not respond to that survey and delete it.” My communications with Otter were all restricted to email and were sporadic, often confusing and contradictory. In the three months since that initial exchange (and there was more to come), I’ve gone down the rabbit hole — talking to cybersecurity experts, press freedom advocates and a former government official — to try and understand what vulnerabilities and risks are present in this app that’s become a favorite among journalists for its fast, reliable and cheap automated transcription. Source: My journey down the rabbit hole of every journalist’s favorite app | POLITICO
Dementia content gets billions of views on TikTok. Whose story does it tell? | MIT Technology Review
“That’s a conversation that people with dementia have been having now for a while,” says Kate Swaffer, a cofounder of Dementia Alliance International, an advocacy group whose members all live with the condition. Swaffer was diagnosed with younger-onset semantic dementia in 2008, when she was 49. In some ways, these conversations echo ongoing discussions about “sharenting,” family vloggers, and parenting influencers. Kids who were once involuntary stars of their parents’ social media feeds grow up and have opinions about how they were portrayed. But adults with dementia are not children, and whereas children develop the ability to consent as they grow older, the capacity of people with dementia will diminish permanently over time. Legally, a care partner or family member with power of attorney can consent on behalf of a person who is unable to do so. But advocates say this standard is not nearly enough to protect the rights and dignity of those living with later-stage dementia. Swaffer’s own standard is this: No one should share content about someone in those stages of dementia—whether on Facebook, in a photography exhibition, or on TikTok—if that person has not explicitly consented to it before losing the cognitive capacity to do so. Source: Dementia content gets billions of views on TikTok. Whose story does it tell? | MIT Technology Review
Connecting Race to Ethics Related to Technology: A Call for Critical Tech Ethics | IEEE Xplore
Abstract: Critical tech ethics is my call for action to influencers, leaders, policymakers, and educators to help move our society towards centering race, deliberately and intentionally, in tech ethics. For too long, when “ethics” is applied broadly across different kinds of technology, ethics does not address race explicitly, including how diverse forms of technologies have contributed to violence against and the marginalization of communities of color. Across several years of research, I have studied online behavior to evaluate gender and racial biases. I have concluded that a way to improve technologies, including the Internet, is to create a specific type of ethics termed “critical tech ethics” that connects race to ethics related to technology. This article covers guiding theories for discovering critical tech ethical challenges, contemporary examples for illustrating critical tech ethical challenges, and institutional changes across business, education, and civil society actors for teaching critical tech ethics and encouraging the integration of critical tech ethics with undergraduate computer science. Critical tech ethics has been developed with the imperative to help improve society through connecting race to ethics related to technology, so that we may reduce the propagation of racial injustices currently perpetuated by educational institutions, technology corporations, and civil actors. My aim is to improve racial equity through the development of critical tech ethics as research, teaching, and practice in social norms, higher education, policy making, and civil society. Source: Connecting Race to Ethics Related to Technology: A Call for Critical Tech Ethics | IEEE Xplore