Introduction
While Web 1.0 ended with a bang, Web 2.0 ended with a whimper. … What replaced the Web 2.0 era is what we now know as the Platform Era.
― Dave Karpf, George Washington University
A decade ago, the rise of social media appeared to usher in a new age of technological utopianism around the globe, in which politics would shift away from bureaucracy and hierarchy toward more decentralized modes of governance by a variety of digital communities.
But platforms have fallen short of these utopian claims—as illustrated, for instance, in their propensity to facilitate conspiracy theories, propaganda, hate speech, and fake news. From Russian interference in the 2016 U.S. presidential election and the U.K. Brexit referendum to Facebook’s role in the spread of anti-Rohingya propaganda in Myanmar, scholars, policymakers, and practitioners around the world are grappling with questions of how—and whether—platform companies should be more democratically accountable.[1]
And yet, the companies that operate them—ranging from the “digital giants” such as Facebook, Google, Amazon, and Alibaba to software companies such as NationBuilder or Optimizely, leading “sharing economy” firms such as Airbnb or Uber, and messaging applications such as Slack or WhatsApp—are more powerful than ever.[2] Platform companies have experienced significant growth, disrupting major economic markets and becoming enmeshed in virtually every aspect of contemporary social and political life. A handful of companies control large portions of the world’s social, political, and economic activity. These platforms have irrevocably changed the face of online commerce and political communication, and have become a central way that publics receive and engage with information.
As a result, platform governance—who makes, or should make, the decisions that shape user experience and behavior on digital platforms—has moved to the forefront of political, legal, and scholarly conversations about platforms in social and political life. Platform governance falls not just within the domain of government: Volunteers, users, and platforms themselves also create policies and norms that shape behavior on platforms. Given the vast scale of platforms, this multi-pronged approach to governance is necessary, but each of these actors brings a unique set of needs and challenges to bear on governance decisions: platform executives, volunteer-made moderator bots on Reddit, Facebook page owners, content moderation algorithms, and professional networks of content creators and YouTube account managers all have differing (and potentially conflicting) decision-making preferences for how platforms and their content are used and regulated.
In the first section of this report, we highlight the complex challenges of platform governance. In the second section, we outline the various tensions around governance that arise within platforms themselves and between platforms and their key stakeholders. In the final section of this report, we engage with critical questions motivating research about platforms and their increasingly central role in political and public life. We conclude the report with a list of recommended readings for readers to explore. While this list is not exhaustive, the aim is to reflect conversations from this conference and the growing interdisciplinary work around this fast-developing field of research.
Challenges of Platform Regulation and Governance
The way we talk about platforms has drastically transformed over the past decade, as platforms now constitute a range of “sites and services that host, organize, and circulate users’ shared content or social exchanges for them; without having produced or commissioned [the majority of] that content; beneath that circulation, an infrastructure for processing that data (content, traces, patterns of social relations) for customer service and for profit.”[3] As such, new and important challenges have emerged that complicate how platforms’ affordances, practices, and policies interact with the external forces that shape them. While each platform is different, they are united in the fact that their success requires users to interact with and supply content to them; these diffuse one-to-one and one-to-many interactions are what make governance particularly difficult.
Throughout the conference, panelists discussed the many challenges of regulating and, more generally, governing platforms. While governments can institute regulations enforceable through fines and other penalties, platform governance more broadly includes the ways that a variety of stakeholders and the platforms themselves are able to shape the norms and rules that determine or impact behavior on platforms. Platform governance plays out in parliaments, federal agencies, and in the meeting rooms of platform companies, as well as through the accounts of volunteer moderators and page owners.
The greatest challenges identified at the conference were 1) determining who should have the authority to make governance decisions, 2) defining the behaviors that require governance solutions, 3) navigating the constant technological change associated with our current digital ecosystem, and 4) navigating considerations of free speech.
Delegating authority: Who governs platforms?
Platform governance extends not only outward from platforms to governments, but also downward to the users on platforms, and it operates through both human and algorithmic means. Users act as moderators in addition to being content creators, both by reporting content back to the platform and through platform-sanctioned roles such as Facebook page administrators and subreddit moderators. As algorithmic and artificial intelligence-powered moderation increases, these governance decisions are also made with mixed levels of human involvement.
Karoline Andrea Ihlebæk and Bente Kalsnes documented the lack of transparency within the processes of page-owner moderation on Facebook. Over the past few years, Facebook has been changing its newsfeed algorithm and interface to encourage interaction in groups. While the platform is open to the public, user interaction is increasingly happening in enclosed spaces within it. The Facebook page owners interviewed by Ihlebæk and Kalsnes saw their roles as editorial in nature; not only did they write content for the pages, but they also curated the comments and social interactions of their followers. In this way, their editorial capacity was not simply writing and sharing content, but also moderating the public’s social interactions on the pages.
On Reddit, the entire platform is partitioned into subreddits, each with its own moderation team and rules. Lucas Wright studied “AutoModerator,” a bot created by a Reddit user who later became an employee of the company. Now used across almost all subreddits, this bot has changed how the platform is governed, quietly removing content with little visibility into those removals and decreasing the interactions between subreddit participants and the moderators.
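To give a rough sense of the kind of rule-based filtering such a bot performs, the sketch below shows a hypothetical, simplified moderation pass in Python. The data model, rule names, and thresholds are invented for illustration; Reddit’s actual AutoModerator is configured by subreddit moderators through YAML rules rather than code like this.

```python
# A minimal, hypothetical sketch of rule-based moderation in the spirit of AutoModerator.
# The Post model, rules, and thresholds are invented for this example; they are not
# Reddit's actual implementation.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    author_account_age_days: int
    body: str

# Example rules a moderation team might define.
BANNED_PHRASES = ["buy followers", "crypto giveaway"]
MIN_ACCOUNT_AGE_DAYS = 7

def violates_rules(post: Post) -> bool:
    """Return True if the post trips any configured rule."""
    if post.author_account_age_days < MIN_ACCOUNT_AGE_DAYS:
        return True
    body = post.body.lower()
    return any(phrase in body for phrase in BANNED_PHRASES)

def moderate(posts: list[Post]) -> list[Post]:
    """Silently drop rule-violating posts before other users ever see them."""
    return [p for p in posts if not violates_rules(p)]

if __name__ == "__main__":
    queue = [
        Post("longtime_user", 900, "Interesting discussion of platform governance."),
        Post("new_account", 1, "Crypto giveaway! Click here."),
    ]
    print([p.author for p in moderate(queue)])  # -> ['longtime_user']
```

Even in this toy form, the removal happens before other users encounter the post and without any exchange between the poster and a human moderator, which is the dynamic Wright’s study highlights.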
In addition to users and volunteers, platforms rely on artificial intelligence and algorithms. Robyn Caplan’s study of YouTube’s decisions on when to demonetize videos and channels highlighted the tensions between serving viewers, content creators, and advertisers. YouTube has embraced automated means of identifying “advertiser-unfriendly content” as well as processes of adjudication. Caplan shows that these rules can be circumvented if the content creator has enough followers, a relationship with a YouTube account manager, or other industry relationships and economic power. While these social relationships can ameliorate algorithmically driven regulation of content, less-resourced content creators are left to bear the consequences of strictly enforced, automated video review. YouTube channels that dealt with LGBTQ+ issues were demonetized because YouTube’s algorithm could not distinguish their content from prohibited sexual content. Though unintended, the burden of YouTube’s policies and automated enforcement falls disproportionately on under-resourced and under-represented users.
As platforms grow and test new ways of governing their content and their users, they will increasingly rely on computer-aided decisions as well as the decisions of existing users. But while algorithmic biases are well documented in the academic literature, the biases of volunteer moderators may deserve more attention. Moreover, as Caplan’s research shows, we should be wary of depicting algorithms as neutral or sole arbiters of content: Ultimately, as ever, greater access to networks and resources confers distinct advantages, and moderation takes place within those existing networks.
Defining the challenges of platform governance
As platforms are asked to address and regulate hate speech, harassment, fake news, and political advertising, they need adequate definitions of these concepts. Throughout the day, conference discussions circled back to basic questions of how to conceptualize and measure the problematic behaviors and content that platforms and their stakeholders attempt to regulate.
Paddy Leerssen found that, in response to potential government regulation in multiple countries, Facebook and Google have adopted different answers to the basic question of “what is a political advertisement?”: While Facebook is attempting to archive and make publicly visible any ad that addresses “social issues, elections, or politics,”[4] Google has limited its transparency project to “ads that feature a candidate running for political office, a current elected officeholder, or in parliamentary systems, a political party.”[5] As Leerssen noted, these companies adopted their own definitions in part because there is no clear agreement in the regulatory community. The European Union’s Code of Practice on Disinformation (a self-regulatory agreement between major online advertising platforms and the European Commission) asks that platforms “commit to use reasonable efforts towards devising approaches to publicly disclose ‘issue-based advertising.’”[6] In the very next sentence, however, the code notes that “such efforts will include the development of a working definition of ‘issue-based advertising.’” In this way, governments have passed the responsibility for defining the problems to the platforms themselves.
Defining hate speech and harassment is just as fraught. Although harassment is prohibited by most social media platforms, and hate speech or related behaviors have been banned on Facebook and Twitter, the ways these terms are conceptualized and operationalized are quite complicated. For example, Anna Reepschlager and Elizabeth Dubois found that platforms’ policies reflected their differing values: Facebook’s conceptualization of hate speech, free speech, and harassment focused on child protection and safety, while Reddit and Twitter more directly addressed the issue of free speech (though Reddit’s rationales were built on trust, safety, and being “anti-evil,” and Twitter’s on healthy conversation and safety from abuse). Ultimately, these policies did not address the same issues in the same ways—or even address the same issues at all. Each platform harbors unique challenges, yet even basic definitions of problematic behavior diverge across platforms.
The goals of simple, accurate definitions of the problems we face and effective, uniform measures across platforms seem straightforward. However, discussions that followed these presentations problematized these goals as well. If the regulation of speech on platforms has taught us nothing else, it is that these problems are complicated and that solutions often bring new, unanticipated problems to the surface—for instance, the demonetization and filtering out of LGBTQ+ videos on YouTube when the company attempted to improve its moderation system for minors.[7]
Further, a solution for one group of actors may be a problem for another. Indeed, how participants conceptualized and understood democracy also influenced their views of how platforms could benefit or harm it: A belief that citizens are knowledgeable, well-meaning participants in the democratic process can lead to very different priorities for platforms than a belief that some or many citizens are ill-informed, often prejudiced, problematic actors within society.
As governments place the onus of governance on platforms themselves, one can easily see this abdication of responsibility as problematic—after all, without national or international laws creating norms and rules across platforms, the likelihood of compliance may be low and the likelihood of meaningful enforcement even lower. Yet co-regulation is not unheard of and is, in fact, common in complicated, specialized industries. While loose or contradictory definitions make regulation more difficult, they may also further the goals of learning what works and creating safer platforms that better serve democratic life.
Constant change in platforms: Governing in an emergent milieu
Platforms are not static entities. Their policies, affordances, and rationales can change incrementally or drastically, with no opportunity for users to meaningfully renegotiate their terms of use. Reepschlager and Dubois found that over the past 14 years, platforms have outlined their policies in their “terms of use,” “statement of rights and responsibilities,” “community standards,” “Twitter rules,” “user agreements,” and “content policy,” as well as in subsections of these larger buckets. These rules were posted on blogs, on the platforms themselves (such as in newsfeeds and in subreddits), and in the legal documents users agree to when they sign up to use a platform.
But as Caplan’s examination of YouTube demonstrated, users (including advertisers) face great difficulty when navigating platforms’ changing content guidelines. Similarly, Ihlebæk and Kalsnes’s study of Facebook page moderators showed how fluctuation in moderation affordances affected page owners’ governance capabilities.
When platforms enter new arenas or change in meaningful ways, these changes can influence more than just users. At the macro level, Katherine Reilly sought to understand how platforms disrupt—or don’t disrupt—industries in South American economies. Her definition of “platforms” was expansive, including those in the transportation, communication, and financial technology industries. Her preliminary results demonstrated the most basic forms of change that platforms create: changes in the behavior of users and, at times, disruption of business models in existing industries, across countries that each have their own political contexts and dilemmas.
While regulating and governing platforms that already exist in a specific commercial context is one challenge, attempting to predict the disruptions yet to come from peer-to-peer and peer-to-many services built on digital technologies is even more difficult. Platforms are not disruptors in all spaces: Reilly showed areas where disruption happens as well as areas where it doesn’t. Research such as Reilly’s, using clear theoretical and comparative frameworks as well as broad but defined conceptualizations of platforms, can help us understand the scope of change and aid in predictions of disruption.
The challenges of governing speech on platforms
This conference highlighted the need to understand the novel affordances of platforms that limit speech, many of which are neither obvious to casual observers nor well publicized by the platforms themselves. The complicated relationships between platforms, contracted third parties, and governments in governing speech can make transparency in governance decisions incredibly difficult to achieve. These new technological spaces for communication may require new legal frameworks in addition to creative applications of existing legal doctrine.
Platform affordances have created new ways of restricting speech that are more complicated than simply banning or not banning users. Ihlebæk and Kalsnes described how political parties used “shadowbanning” on their Facebook pages, and Wright documented the same practice by volunteer moderators on Reddit. Shadowbanning occurs when moderators block a user from participating on a platform in such a way that the user cannot immediately tell they have been banned: The user may still see their own comments, for example, but no other users can. Shadowbanning thus allows users to perceive their voices as heard and their speech as free when, in fact, neither is the case. What does it mean to use such deception in platform governance, particularly in potentially democratic, deliberative spaces? The affordances platforms provide for governing both themselves and their users create new problems for how we conceptualize speech and censorship, and the consequences of these new ways of restricting speech remain unclear.
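To make the mechanism concrete, the following is a minimal sketch of how a shadowban might be implemented as a visibility check at read time. The usernames, data model, and function names are hypothetical and are not drawn from any platform’s actual code.

```python
# Hypothetical sketch of shadowbanning as a visibility filter.
# Usernames, fields, and functions are illustrative only.

SHADOWBANNED = {"spam_account_42"}  # list maintained by moderators

def visible_comments(comments: list[dict], viewer: str) -> list[dict]:
    """Return the comments a given viewer can see.

    A shadowbanned author still sees their own comments (so the ban is not
    apparent to them), while those comments are hidden from everyone else.
    """
    return [
        c for c in comments
        if c["author"] not in SHADOWBANNED or c["author"] == viewer
    ]

thread = [
    {"author": "alice", "text": "Great point!"},
    {"author": "spam_account_42", "text": "Check out my site."},
]

print(len(visible_comments(thread, viewer="spam_account_42")))  # 2: the banned user sees both
print(len(visible_comments(thread, viewer="alice")))            # 1: the shadowbanned comment is hidden
```

The deception described above lives entirely in this asymmetry: the banned author receives no signal that anything has changed, while their speech has effectively been removed from the forum.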
Without clear processes for and transparency regarding how governance decisions are made, platforms and hired third parties can restrict speech without accountability to the public. Stefanie Fuchsloch, Tobias Gostomzyk, and Jan Rensinghoff illustrated how the German Network Enforcement Act (NetzDG) allows platforms to determine what constitutes criminal fake news and remove it expeditiously, or to contract this service out to vetted third parties. The ability to contract out these decisions may lower the burden on platforms and produce faster decisions for users, but it also creates another set of actors that must be accounted for in transparency and accountability initiatives. While the platforms are required to publish reports on how much content has been removed, the rationale behind removals is not transparent.
Amélie Pia Heldt proposed a new category of public forum, the social public forum, in order to protect speech on platforms while still allowing them to moderate content. While this proposal operates primarily in the U.S. context, its theoretical underpinnings address international concerns as well. Bringing platforms into the fold of public forums would grant users more structured and transparent rights of due process. These requirements of documented due process could also be applied to the transparency problems in the Network Enforcement Act. Yet, while this level of transparency may be beneficial, the question of who decides what is acceptable still stands.
Common discussions surrounding free speech on platforms (blocking users, removing hate speech through algorithmic decisions, or alerting users to fake news) are vital, but they do not address the broader challenges that these new technologies bring. The novel technological and economic structures of platforms, constantly changing and adapting, continue to surface new challenges for governing speech.
Moving forward
The challenges of platform governance outlined here are not exhaustive. Nevertheless, they reflect a variety of concerns that were raised repeatedly throughout the conference. These discussions made it clear not only that there are no perfect fixes for platform governance problems, but also that these problems cannot be solved individually by governments, by the platforms themselves, or by users—each of these stakeholders must be involved in the process.
Moving forward, research has to account for the messy, difficult, and sometimes opposing definitions of the issues at hand. Simultaneously, research cannot approach platforms as static entities: The constant changes within platforms’ policies and affordances must be accounted for and the implications of potential future changes must be acknowledged. When thinking about how to govern platforms, researchers should better identify those actors who should, normatively, have the authority as well as those who do, in practice, have the authority to implement and enforce governance decisions. These contested definitions, constant changes, and diverse actors all play a role in how speech is governed on platforms, creating a chaotic but increasingly important set of governance dilemmas.
Tensions in Platform Governance
In the Platform Era, we are beginning to see the inherent tensions between platforms’ role as neutral, open intermediaries and their nature as commercially owned, novel technologies. These tensions, which exist between platforms and their various stakeholders, raise important questions about how platforms can or should be governed, why decisions are made, and how people come to those decisions.
The key tensions identified at the conference concerned those 1) between platforms’ economic interests and stakeholders’ political interests and 2) around the use of platforms and their data. The tensions discussed here are not comprehensive, but they represent the variety of negotiations that have surfaced as platforms and their stakeholders navigate a field of governance that must necessarily involve multiple actors whose interests are often in conflict. Each topic highlights an area of disagreement and negotiation—not only about platform governance, but also between participants’ understandings of platforms and their roles in the political process.
Tensions between economic and political imperatives
Most platforms were not designed with political campaigns or governments in mind as their primary users. However, as more political participation takes place online, the tensions between the economic interests of platforms and the needs of users or state regulators are increasingly fraught.
Many platform companies’ business models rely on advertising revenues from huge numbers of inexpensive ads purchased by millions of advertisers and are predicated on a low-regulation environment. As of 2019, Facebook has 7 million advertisers; reviewing every ad placed on its platforms with more than a cursory algorithmic check would hurt the company’s bottom line.[8] Because of this, the implications of these advertising systems for democratic processes are not only not prioritized—they may not even be considered.
While platforms do profit from political advertising campaigns and can create valuable networks with government officials through their campaign outreach, it’s important to remember that this is not a primary revenue generator or even necessarily a significant one; platforms’ involvement in democratic and electoral processes is incidental. How these commercial products are used by campaigns creates significant tension between normative goals related to democratic participation and political speech, and the realities of platforms’ commercial business models, such as micro-targeted advertisements and users as unfiltered content producers.
Tensions also exist between platform business models and government regulators. The origins of these tensions reflect the fact that regulations for political advertisements were devised prior to the Platform Era. But as political campaigns move online, so do regulators. For example, leading up to the 2016 U.S. elections, political advertisements on Facebook and Google were not required to indicate who paid for them. But in the aftermath of the election, Facebook, Google, and Twitter came under fire in light of growing evidence of Russia’s use of social media platforms in its disinformation campaign, and the Federal Election Commission established new disclosure requirements for digital political ads posted on Facebook with images or videos.[9][10]
In this context, Katherine Haenschen and Jordan Wolf investigated the origins of the lack of rules governing political advertisements on platforms prior to the 2016 election. Why weren’t these disclosures required from the start of political advertising on social media? They documented platforms’ efforts to contest regulation in this space and avoid a firm ruling on disclosure requirements: In short, Facebook and Google took advantage of an already flawed system for their own economic benefit. Among other rationales, the platforms argued that they could neither change the ads they were running nor enforce disclaimer requirements and remain profitable.
Leading up to the 2018 U.S. midterm elections, Google, Facebook, and Twitter began to tighten guidelines and implement new tools for transparency and accountability around digital political advertising.[11][12] But with growing concerns surrounding the effectiveness of these tools and spending on U.S. political digital advertising projected to reach $3.3 billion in 2020, these tensions between platforms’ economic interests and stakeholders’ political interests remain more important than ever to explore.[13]
Similarly, platform companies often have economic incentives not to regulate bad or problematic behavior. For instance, Fenwick McKelvey found that the prominent NationBuilder platform takes on not only international clients from both left- and right-wing political parties but also advertising companies, purveyors of fake news, and clients listed as hate groups by the Southern Poverty Law Center.
While it is a political engagement platform, NationBuilder’s purpose is not purely political—it is also commercial. NationBuilder’s problematic uses are not subject to formal regulation, though the platform can be subjected to social or financial pressure to influence its governance decisions. However, the threat to a company’s profits if it is pushed to limit its morally questionable clientele is real. These tensions surrounding access—particularly for the most controversial users—lie at the heart of discussions surrounding how platforms are, or should be, governed.
Tensions around the use of platforms and their data
People use platforms in their daily lives to communicate, inform themselves, make purchases, and perform a variety of social tasks—in short, platforms have become both digital marketplace and digital public sphere, where users expect their fundamental rights to be protected, including free speech and privacy.
And yet, privately owned companies are not restricted by the legal or constitutional provisions, such as the First Amendment in the U.S., intended to protect the rights of individuals against the state. These tensions have led scholars, including Amélie Pia Heldt, to ask: Can a platform that hosts millions (or, in some cases, billions) of users and offers increasingly essential services still be treated as a strictly private actor? This question does not simply concern policy decisions—it also speaks to broader tensions between democratic values and commercial ownership in platform governance: whether users’ expectations of digital platforms as spaces where their fundamental political and economic rights are protected can or should reflect the legal reality.
Further, political actors often use platforms in ways that accord with practices predating platform technologies—even when these technologies were expected to disrupt those very practices. Practices institutionalized before the Platform Era are shoehorned into platform activities, leading to important discrepancies between the normative possibilities platform technologies offer for democratic deliberation and the realities of their use by political actors.
For instance, constituents want to use platforms to communicate with their political representatives—and vice versa—because it is easier, but the quality of the communication is poor. Samantha McDonald found that digital platforms offer a face for engagement with policymakers while foreclosing actual opportunities to improve democratic communication. U.S. congressional staffers often devalue social media communications and demarcate them as a separate form of labor from traditional methods of correspondence, such as email, mail, and phone. Although platform technologies may lead to increased citizen and policymaker participation, McDonald argues there is little democratic value in platforms that do not offer deliberative two-way communication or policy influence.
And yet, many platforms have been sold as deliberative democratic spaces. In Facebook’s own words, “the size and diversity of the platform offers a town square-like atmosphere where people gather to voice opinions, interact with other voters and easily engage with the leaders who make the decisions that affect their lives every day.”[14] Until recently, many academics and policymakers alike accepted this premise.
Platforms and their stakeholders—including researchers and practitioners—have various needs and goals that influence how we understand platforms, their functions, and their value. The research presented at this conference not only questions this assumption, but also suggests that we begin to think more critically about our own goals and biases, lest we ignore—or misinterpret—the reality on the ground.
Most essential to explore are tensions around the needs of practitioners and researchers to use data provided by platforms despite their knowledge of the flaws in these data. For instance, Jessica Baldwin-Philippi explored how political campaigns deal with the limitations and flaws of data provided to them by privately owned platforms such as Facebook and Twitter as well as niche data and data visualization platforms such as Optimizely, Catalyst, and NationBuilder, highlighting the ways in which political practitioners uphold the objectivity of these metrics in spite of their knowledge of these flaws.
Panelists such as Baldwin-Philippi emphasized the messy reality of data-driven campaigning and spoke to the organizational power struggles that go on within campaigns, as well as the incentives platform companies have to make strong claims about their data. In this way, some campaign workers must use the data they know is limited (e.g., Facebook prioritizes engagement over time while Twitter prioritizes metrics such as “top” tweets and followers) and even flawed (e.g., instances of Facebook miscalculating metrics presented to advertisers) in order to justify the decisions they make on the campaign.[15][16] Negotiations about the value of platform-generated data play out both internally and in public as campaigns present themselves as “data-driven,” users of “big data,” and technologically competent.
Moving forward
In the Platform Era, layers of governance relationships shape (and complicate) interactions between platform companies and their various stakeholders, including users, advertisers, governments, and political practitioners and actors. A variety of tensions exist under the surface of all of these relationships that structure discussions and decisions about platform governance in important ways.
While platforms’ business models are predicated on the existence of low-regulation environments in order to allow mass participation from users, platforms are increasingly political arenas and, therefore, targets of regulation. Moving forward, how can platform companies and their stakeholders—policymakers, state regulators, researchers, political practitioners, and users—navigate these conflicting economic and political imperatives? And if governments are not equipped to make or enforce decisions to alleviate these tensions, who is or should be?
Critical Questions for the Study of Platforms
The “Rise of Platforms” conference closed with a discussion about the trajectory of the field of research on platform governance. We conclude this report by briefly capturing some of the most compelling and provocative questions animating that conversation, and by setting the stage for future discourse within this burgeoning field.
The rise of the study of platforms presents an important opportunity for researchers to think critically about what sites they study, what questions they ask, and what data they use. As scholars come to understand the nuances of platform data, they must ask: What data don’t researchers have, and why do they have the data they do have? Facebook’s political advertising archive included ads for Bush’s Baked Beans in its first release because its algorithm couldn’t tell the difference between the former president and the legume. Understanding exactly what platform companies include in datasets and why is paramount to conducting quality research.
When thinking about data, panelists also asked which types of transparency will be needed in the Platform Era. Platforms and governments have focused primarily on transparency in terms of quantitative data and content—for instance, around what content stays up or comes down. However, many of the questions researchers grappled with during the conference were not simply how many YouTube channels were demonetized but rather: How did a platform decide what to include in its definition of advertiser-unfriendly content? What did that decision-making process look like, and how might the information available to it, the structure of its organization, and its relationships have had unintended effects on that process? Panelists did not want more big data—they wanted transparency into (and especially qualitative data on) platform processes that directly influence democratic norms and ideals.
Throughout the conference, there was a resounding call for more comparative and historical research on platforms, such as Heidi Tworek’s. Tworek traced how news agencies during World War I (such as Reuters, Havas, and Wolff) served as “bottlenecks” for the supply of information; in essence, these agencies were the Google or Facebook of their time, serving as multinational black boxes of information. While platform companies’ contemporary control over information may seem like a novel problem, historical research such as Tworek’s demonstrates not only that these problems existed before, but also that a free and successfully competitive information market may be the exception rather than the norm. Because these platforms operate in multi- and transnational contexts, there can be no one-size-fits-all policy solutions.
We must consider the history of these platforms, especially in the context of policymaking processes and decisions that continue to shape not only how we use but also how we understand platforms and how they function. Understanding the context within which past decisions about platform governance were made can inform choices about which policies must be jettisoned and which are worth adapting for the needs and values of contemporary societies. For instance, what challenges do digital technologies present for democratic life, and how might they best be addressed in order to protect democracy in the digital age through shared policy solutions?
However, participants voiced concerns that such research could fall victim to a nostalgic view of the past—for instance, the assumption of a “golden age” of the early internet and burgeoning social media scene is reminiscent of the “rose-tinted narrative” of 20th-century journalism.[17] New scholarship on platform governance will need to examine the normative assumptions the communication field holds about the meaning and value of democracy. What values do we embrace when we discuss publics and elites, deliberation and participation, or the institutions that organize these aspects of public life, and how do these intersect with changes in platforms, economic models, and democratic practices?
Finally, the conference highlighted a growing need for researchers to reassess their own roles in this changing platform environment. Researchers are now equal parts observers and actors in the realm of technology and politics, and they depend on the infrastructures that platforms provide. What does this mean for the power platforms hold in the communication field? And, more broadly, how does the field maintain ethical standards, independence, and legitimacy in the eyes of the public while maintaining essential relationships with these powerful companies?
Moving forward
Platform companies are increasingly central to social, political, and economic life around the world. These companies have built large and diverse user bases around global digital platforms that enable a variety of communicative interactions. In the process, platforms have come to host vast amounts of public and commercial information, to organize attention and access to it, and to shape social life as we know it. As such, understanding the decisions that shape user experience and behavior on digital platforms, along with the layers of governance relationships that structure interactions between platforms and their stakeholders, is more important than ever. Platform governance, therefore, is central to a burgeoning field of interdisciplinary research on technology and society.
As the scholarly community begins to acknowledge the rise of the Platform Era, the “Rise of Platforms” conference—and this ensuing report—sought to identify some of the key challenges and tensions that platform governance presents to researchers, policymakers, and users, as well as to platform companies themselves. Above all, the challenges, tensions, and critical questions identified in this report reflect the complex nature of platform governance in a multi-stakeholder arena of socio-political-economic activity.
As research on platforms as an object of analysis moves forward, we hope that the themes outlined here can help guide the questions being asked. Investigating the challenges of governing platforms by diverse actors, unpacking the tensions between stakeholders, using comparative and historical methods of inquiry, and pushing for greater transparency into why decisions are made the way they are at these companies are all paramount to understanding how these new technologies are impacting democratic life.
[1] Mozur, P. (2018, October 15). A genocide incited on Facebook, with posts from Myanmar’s military. Retrieved from https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
[2] Shan, J., & Wade, M. R. (2018, August 23). The digital giants in 2018. Retrieved from https://www.imd.org/research-knowledge/articles/digital-giants-in-2018/
[3] Gillespie, T. (2017). Regulation of and by platforms. The SAGE Handbook of Social Media.
[4] Facebook Business. (n.d.). Ads about social issues, elections or politics. Retrieved from https://www.facebook.com/business/help/1838453822893854
[5] Google. (n.d.). Transparency report: Political advertising on Google. Retrieved from https://transparencyreport.google.com/political-ads/home
[6] European Commission. (2018, September 26). Code of Practice on Disinformation. Retrieved from https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation
[7] YouTube Continues To Restrict LGBTQ Content. (2018, January 16). Retrieved from
[8] Flynn, K. (2019, January 31). Cheatsheet: Facebook now has 7m advertisers. Retrieved from https://digiday.com/marketing/facebook-earnings-q4-2018/
[9] Timberg, C., & Romm, T. (2018, December 17). New report on Russian disinformation, prepared for the Senate, shows the operation’s scale and sweep. Retrieved from https://www.washingtonpost.com/technology/2018/12/16/new-report-russian-disinformation-prepared-senate-shows-operations-scale-sweep/
[10] Glaser, A. (2017, December 18). Political ads on Facebook now need to say who paid for them. Retrieved from https://slate.com/technology/2017/12/political-ads-on-facebook-now-need-to-say-who-paid-for-them.html
[11] Romm, T. (2018, May 24). Who’s behind those political ads on Facebook? Now, you can find out. Retrieved from https://www.washingtonpost.com/news/the-switch/wp/2018/05/24/whos-behind-those-political-ads-on-facebook-now-you-can-find-out/
[12] Wakabayashi, D. (2018, May 04). Google will ask buyers of U.S. election ads to prove identities. Retrieved from https://www.nytimes.com/2018/05/04/technology/google-election-ad-rules.html
[13] Glazer, E., & Haggin, P. (2019, July 17). Google’s tool to tame election influence has flaws. Retrieved from https://www.wsj.com/articles/google-archive-of-political-ads-is-fraught-with-missing-content-delays-11563355800
[14] Facebook Newsroom. (2014, November 4). Election Day 2014 on Facebook. Retrieved from https://newsroom.fb.com/news/2014/11/election-day-2014-on-facebook/
[15] Sutton, K. (2018, October 17). Facebook hid inflated video ad metrics error for over a year, advertisers allege. Retrieved from https://www.adweek.com/digital/facebook-hid-inflated-video-ad-metrics-error-for-over-a-year-advertisers-allege/
[16] KeyMedia. (2018, February 20). How Facebook organic reach has changed. Retrieved from https://keymediasolutions.com/news/facebook/how-facebook-organic-reach-has-changed/
[17] Tworek, H., & Hamilton, J. (2018, May 2). Why the “golden age” of newspapers was the exception, not the rule. Retrieved from https://www.niemanlab.org/2018/05/why-the-golden-age-of-newspapers-was-the-exception-not-the-rule/