Essay

Challenging Tech’s Imagined Future

Chris Gilliard
March 2, 2023

This past week I had the opportunity to have a conversation with Dr. Ruha Benjamin at Princeton University as part of the Office of the Dean of Undergraduate Students’ FOCUS Speaker Series. During Dr. Benjamin’s opening remarks, I was struck by her comments about imagination and its nature as a contested space. This statement was so incredibly powerful that I was moved to find the fuller context.

I found that context in a talk Dr. Benjamin gave about “The New Jim Code.” In the talk, she states:

Imagination is a contested field of action, not an ephemeral afterthought that we have the luxury to dismiss or romanticize, but a resource, a battleground, an input and output of technology and social order. In fact, we should acknowledge that most people are forced to live inside someone else’s imagination and one of the things we have to come to grips with is how the nightmares that many people are forced to endure are the underside of elite fantasies about efficiency, profit, and social control. Racism, among other axes of domination, helps produce this fragmented imagination, misery for some, monopoly for others.

Without question, the tech that has captured the public’s imagination since its launch in November 2022 is ChatGPT, its Bing search equivalent, “Sydney,”[1] and Google’s “Bard.” Over the past few months, we’ve seen claims that the chatbot can pass business school and law school exams, endured endless speculation about how it will kill the college essay, and read breathless accounts of how it’s both a technological wonder and a scary harbinger of the world to come. It’s impossible to look at any publication’s tech section and not see an article about AI chatbots, a new wild claim about a text it produced, or how this innovation is going to revolutionize (or in some cases destroy) a given field: medicine, teaching, screenwriting, law, and mental health.

Sam Altman, the CEO of OpenAI, the company that released ChatGPT, is, for obvious reasons, heavily invested in the hype as well. In a recent tweet, Altman wrote, “These tools will help us be more productive (can’t wait to spend less time doing email!), healthier (AI medical advisors for people who can’t afford care), smarter (students using ChatGPT to learn), and more entertained (AI memes lolol).” The hype seems to be working, as some reports claim that ChatGPT is the fastest growing consumer application in history.


Credit where it is due: The hype cycle for the technology has been magnificent, despite the technology’s flaws. ChatGPT has caused such a seismic shift that Microsoft invested an extra $10 billion in OpenAI and quickly launched a Bing search powered by OpenAI’s chat function. Not wanting to appear left behind, Google soon after launched “Bard.” The company formerly known as Facebook also joined the gold rush, launching LLaMA (Large Language Model Meta AI). Much of this uproar has been spawned by a tech that is, by many accounts, not ready for prime time. It has been unflatteringly described by Dan McQuillan as a “bullshit generator,” and noted by many critics for confidently spouting information that is not true. Scientist Gary Marcus has stated, “We now have the world’s most used chatbot, governed by training data that nobody knows about, obeying an algorithm that is only hinted at, glorified by the media, and yet with ethical guardrails that only sorta kinda work and that are driven more by text similarity than any true moral calculus. And, bonus, there is little if any government regulation in place to do much about this. The possibilities are now endless for propaganda, troll farms, and rings of fake websites that degrade trust across the internet.” Even the CEO of OpenAI has reported that the bot “may make up facts.”

Yet despite these concerns, this chatbot technology has captured the country’s imagination. Here it’s worth investigating further what Dr. Benjamin says about imagination. “Most people are forced to live inside someone else’s imagination…misery for some, monopoly for others.” I would say that we are being forced into ChatGPT’s imagination, except for the fact that it doesn’t have one. It doesn’t know, think, feel, or imagine. Instead, we are being fed a future imagined by the tech’s creators and boosters: a future where low-income students are tutored by a bullshit engine and where people who cannot afford medical care are treated by a chatbot that makes frequent mistakes. This is not the future I imagine, nor one I want to inhabit. Part of OpenAI’s stated mission is the prevention of what they believe to be the existential threat of Artificial General Intelligence (AGI)—one rife with “misuse, drastic accidents, and societal disruption.” However, the vision articulated in Altman’s tweets appears to be exactly that kind of dystopia for poor, disabled, and marginalized people.

This becomes even clearer when looking at Altman’s other statements. On the issue of what he calls “politics,” Altman tweeted: “There will be more challenges like bias (we don’t want ChatGPT to be pro or against any politics by default, but if you want either then it should be for you; working on this now) and people coming away unsettled from talking to a chatbot, even if they know what’s really going on.”

Altman’s statements about how AI can be directed to suit anyone’s needs are eerily similar to the ways tech companies cynically deployed the term “community” during the platform era. Just as “connecting communities” in practice meant connecting fascists and racists, an AI tuned to each user’s preferences means serving users who want racist and hateful AI. Mark Zuckerberg conveniently glossed over this whenever he used the language of community.

Platforms pitched the idea of community and never fully reckoned with how it allowed scores of racists, fascists, trolls, and misogynists to find community with each other on Facebook. We’ve all seen how that turned out and are unfortunately living with the ways that it affected democracy and society as a whole. An AI that denies its politics threatens to shape our world in many of the same ways.


Whether or not Altman acknowledges it, this statement flies in the face of many of the public statements OpenAI has put out about their tech, but more importantly, this is a shallow and dangerous way to think about a technology set out into the world. Technology does have a politics.[2] But further, the idea that chatbots would exist purely to accommodate the politics of the user invites destruction on a massive scale. If the ideal state of the technology is that the dial can be turned to suit anyone’s politics, that necessarily includes authoritarians, fascists, racists, misogynists, and transphobes. That Altman fairly explicitly asserts this should make people’s blood run cold. This is much in the way Zuckerberg used “community” and repeatedly asserted that Facebook’s goal was “connecting the world.” On the surface that may have seemed a laudable project, until you realize that it meant connecting white supremacists, extremists, fascists, and antidemocratic forces with each other and amplifying their content. The two possibilities here are that Altman doesn’t understand the implications of this statement (worrisome), or that he does and intends to plow ahead anyway (more worrisome).

We must imagine better, and further—we must build better. As a Just Tech Fellow, part of my directive is to “champion vital research that enhances collective knowledge about technology’s impacts and potential, illuminates the biases and harms created by some novel uses of technology, and identifies solutions that advance social, political, and economic rights.” A chatbot, “woke” or not, can’t do that work. Only people can.

Footnotes

1 The Sydney persona is now seemingly discontinued.
2 Langdon Winner, “Do Artifacts Have Politics?” Daedalus 109, no. 1 (1980): 121–136.
