In the classic Japanese film Rashomon, four people retell the same incident in different and sometimes contradictory ways. The debate about the Twitter Files—internal documents and Slack conversations from the social-media platform now owned by Elon Musk—reminds me of that movie. This month, the journalists Matt Taibbi and Bari Weiss and the author Michael Shellenberger have posted tweet threads describing the internal debates over banning the sharing of a 2020 New York Post story about Hunter Biden’s laptop, throttling the circulation of certain posts and accounts, and taking down then-President Donald Trump’s account after the January 6 riot at the Capitol. Depending on your perspective, you might conclude that suspending Trump was an essential safety measure, a big scandal, or utterly inconsequential.
“Sunlight is the best disinfectant,” Musk declared on Twitter earlier this year, a week after the company’s board accepted his buyout offer. And he’s right that transparency about content moderation—that is, the screening of user-generated posts to ensure that they meet community standards—is the only thing that can deliver legitimacy for and repair trust in a platform that has enormous influence over the political discourse. Unfortunately, the Twitter Files offer little insight into how important moderation decisions are made. Individual anecdotes—particularly those involving high-stakes outlier decisions, such as how to handle a president whose supporters try to keep his successor’s election from being certified—are interesting but reveal little about how platforms operate day in and day out. And that is information that the public needs to know.
Twitter’s most recent transparency report, published in July, shows that it took action on 4.3 million accounts in the second half of 2021 and removed 5.1 million pieces of content. You could cherry-pick a few of those decisions to fit almost any ideological narrative. Right-wing commentators aren’t the only people complaining about platforms’ actions. Some Black and LGBTQ social-media users have also objected that they’re being unfairly moderated, as automated tools take down posts containing words and phrases deemed offensive. Distrust of Big Tech’s power is universal.
Content moderation is sometimes seen as a binary: Posts and even entire accounts are either left up or taken down. But social-media platforms have developed a range of tools; many platforms use a framework akin to what Facebook calls “Remove, Reduce, Inform.” Remove is what it sounds like. Inform often involves applying a label or interstitial to show context or a fact-check alongside the content. At the moment, the most contentious tool is reduce, which consists of downranking content so that fewer people see it. Under the reduce option, accounts or posts that are on the verge of violating site policies stay up, but curation algorithms push them into the feeds of fewer people; to see the so-called borderline content, you have to proactively visit the originating user’s page. Critics of downranking dismiss it as a sly form of censorship; users who are “shadow-banned,” they argue, are denied an audience without their knowing about it.
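To make the reduce option concrete, here is a minimal sketch, in Python, of how a downranking step could work in principle. The field names, the REDUCE_MULTIPLIER value, and the moderate function are invented for illustration; they do not describe Twitter’s actual systems.

```python
from dataclasses import dataclass

# Hypothetical constant; not Twitter's actual system or value.
REDUCE_MULTIPLIER = 0.1  # borderline content keeps only a fraction of its algorithmic reach

@dataclass
class Post:
    text: str
    engagement_score: float  # baseline score from the curation model
    violates_policy: bool    # clear violation: remove
    is_borderline: bool      # near the policy line: reduce
    needs_context: bool      # contested claim: inform (label, full reach)

def moderate(post: Post) -> tuple[str, float]:
    """Return an action and a final ranking score under a remove/reduce/inform scheme."""
    if post.violates_policy:
        return "remove", 0.0
    if post.is_borderline:
        # Downranked: still visible on the author's page, pushed into fewer feeds.
        return "reduce", post.engagement_score * REDUCE_MULTIPLIER
    if post.needs_context:
        # Full reach, but displayed alongside a label or fact-check.
        return "inform", post.engagement_score
    return "allow", post.engagement_score
```

The point of the sketch is the asymmetry critics object to: a reduced post is never deleted, but its distribution is quietly cut, which is why disclosure of when and why the multiplier is applied matters so much.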
The current controversy is ironic because downranking emerged as an alternative to more stringent takedowns. As an enforcement option for borderline posts and accounts, reduce supports the premise of “freedom of speech, not freedom of reach”—a phrase that my colleague Aza Raskin and I introduced years back to articulate a potential balance between maintaining the free expression of posting and assuaging platform concerns around the viral amplification of things that might be harmful. Musk now uses this same phrase to describe his own vision for moderation. On today’s centralized social-media services, content is always ranked in some way, and platforms shape what their users see. But what, exactly, is denied reach? How are the platforms defining harm? Has Twitter expanded the pool of posts and accounts that it counts as borderline? Has the company shown political bias in the enforcement of the policy—and if so, how will that be minimized going forward? These are fair questions.
Journalists and academic researchers shouldn’t have to base their evaluations solely on anecdotes. Twitter could easily provide systematic information about its practices. In her Twitter Files thread about shadow banning, Weiss shared screenshots, provided by Ella Irwin, the company’s new head of trust and safety, of a moderation interface that allows employees to tag specific accounts in ways—“trends blacklist,” “search blacklist”—that likely limit circulation of their tweets. (The precise effects of those and other tags remain unclear.)
Weiss’s reporting focused on how the company handled high-profile accounts, such as Libs of TikTok, that are popular among American conservatives. However, it raised a lot of interesting questions about the platform more generally. To understand the systemic enforcement of platform policies, researchers, including my team at the Stanford Internet Observatory, would like to see statistics on, for instance, how many accounts have received the “trends blacklist” and “search blacklist” tags. Even if the usernames of individual accounts are obscured for privacy, a report detailing the follower counts and presumed home country of tagged accounts might reveal more about how the platform is exercising its content-moderation power than individual anecdotes. (Twitter is a global platform; while its handling of Libs of TikTok is certainly interesting, other accounts, such as those of many government officials and leaders of political movements, have greater global influence.)
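To illustrate the kind of privacy-preserving aggregate report researchers are asking for, here is a minimal sketch that assumes a hypothetical export of tagged accounts. The sample records, the field names, and the follower buckets are all invented; Twitter has released nothing in this form.

```python
from collections import Counter, defaultdict

# Hypothetical records; the field names ("tag", "follower_count", "country") are assumptions.
tagged_accounts = [
    {"tag": "trends blacklist", "follower_count": 1_200_000, "country": "US"},
    {"tag": "search blacklist", "follower_count": 4_300, "country": "IN"},
    {"tag": "search blacklist", "follower_count": 880_000, "country": "BR"},
]

def follower_bucket(followers: int) -> str:
    """Coarse size buckets so no individual account is identifiable."""
    if followers < 10_000:
        return "<10k"
    if followers < 1_000_000:
        return "10k-1M"
    return ">1M"

def aggregate_report(records: list[dict]) -> dict:
    """Count tagged accounts by country and follower bucket, with no usernames."""
    summary: dict[str, Counter] = defaultdict(Counter)
    for r in records:
        summary[r["tag"]][(r["country"], follower_bucket(r["follower_count"]))] += 1
    return {tag: dict(counts) for tag, counts in summary.items()}

print(aggregate_report(tagged_accounts))
```

A table like the one this produces would show how often each tag is applied, to whom, and where, without exposing any single user.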
Because anecdotal examples do help make abstract dynamics clearer to the public, the Twitter Files authors should seek out and share more details about precisely why the high-profile and somewhat controversial accounts they highlighted were subject to specific actions. Which tweet or tweets prompted Twitter employees to put the right-wing talk-show host Dan Bongino on the search blacklist? Which specific policy justified that decision? Social-media companies rarely explain their deliberations. To my knowledge, Twitter has never granted academic researchers the level of access that it gave these journalists, and the information they obtain might help other researchers know what to look for and request in the future.
Some commentators, particularly on the left, have dismissed the Twitter Files reporting because they disagree with the authors politically. Unfortunately, in an information environment fragmented into bespoke realities, the narrator matters. To discredit a given report, partisans can tell themselves that the writer or analyst is too right wing or too left wing—I’ve been accused of both—to possibly be intellectually honest, or that they once appeared on a panel, podcast, or conference schedule with somebody vaguely associated with the enemy. There is something unpalatable about them, and therefore their findings can be ignored.
This is all the more reason for speech platforms, given their enormous influence, to subject their content-moderation data to scrutiny from multiple teams. Platforms have generally been secretive about how they moderate content, in part to prevent people from gaming the rules. If Twitter laid out the specific phrases that might trigger a downranking, for example, accounts could simply avoid those terms; when platforms attempted to squelch misleading health claims about the coronavirus pandemic, some users tried to evade detection through creative spellings such as “c0vid” and “♈ax.” But now the secrecy itself is useful to bad-faith actors. Indeed, when the Twitter Files on Hunter Biden’s laptop did not reveal a vast collusion in which Twitter was taking orders from the FBI (or any other branch of government), influencers in the Pizzagate Cinematic Universe seized on the idea that Jim Baker, a former FBI lawyer who served as a deputy general counsel at Twitter from June 2020 until last week, had personally and deliberately compromised the file handoff to media, removing the “evidence” that would have proved their theory.
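For illustration, here is a toy version of the kind of character normalization a platform can use to catch spellings like “c0vid.” The substitution table is invented, and real systems lean on classifiers and much broader folding rather than a short keyword list.

```python
import unicodedata

# Toy substitution table, invented for illustration only.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Fold digits and lookalike symbols back toward plain letters before matching."""
    folded = unicodedata.normalize("NFKD", text)  # decomposes many stylized characters
    return folded.lower().translate(SUBSTITUTIONS)

assert "covid" in normalize("new c0vid claims")  # catches the creative spelling
```

The cat-and-mouse dynamic is exactly why platforms keep the full rule set private, and exactly why that secrecy now fuels suspicion.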
The attempt to cast any form of moderation—even the addition of a mere label to add context to a controversial claim—as egregious and tyrannical censorship has been a deliberate strategy for some time now, one that Trump himself found highly effective for perpetuating and monetizing grievance among his base. Some users with few followers became convinced that they had been shadow-banned for their ideological beliefs, even when in all likelihood platforms had taken no action against them; the most plausible explanation is that their content simply wasn’t popular or salient, so the platform did not distribute it into their followers’ feeds. Transparency might go a long way toward addressing this suspicion: last week, Instagram announced a tool by which creators can check whether their content has been deemed ineligible for recommendation, and appeal if they feel the platform made a bad call.
The feeling that tech companies are both intrusive and opaque has prompted vows, from politicians across the political spectrum, to get tough on the industry—promises that somehow still have not translated into any kind of tangible action or regulation in the United States. One problem is that the left and the right disagree about which kinds of content-moderation decisions are the problem. But after years of being constrained only by their own capabilities, American tech companies are now on track to be regulated by the European Union, where new rules will enforce content restrictions, safety standards, and transparency reporting.
The United States can do better. Researchers have been seeking access to platform data about algorithmic recommendations, curation, and moderation for quite some time now. Far more people than a handful of well-known writers should have access to investigate the power these companies wield. If Congress wants to understand how platforms moderate—a question that is of interest to every political party—members should pass the Platform Accountability and Transparency Act, or a similar legislative proposal, with the goal of providing researchers and the public with access to certain types of platform data while still protecting regular users’ privacy. Indeed, by adopting the law, which would enable multiple research teams from across the political spectrum to track platforms’ operations more closely, congressional leaders can help pull the public out of the current cycle of distrust. And if Congress remains gridlocked, Musk and Twitter can still provide an example. As my colleague Alex Stamos suggested, Twitter’s current leadership could commit to releasing all communications by global political actors related to content moderation; requests and demands by governments could all go into a public database similar to Lumen, which tracks requests to remove material under the Digital Millennium Copyright Act.
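As a rough sketch of what a single entry in such a public log might contain, here is one possible record structure. The field names are assumptions for illustration, not an existing Lumen or Twitter schema.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# Field names below are illustrative assumptions, not an existing schema.
@dataclass
class GovernmentRequest:
    received: date           # when the platform received the request
    requesting_body: str     # ministry, agency, court, or official
    country: str
    legal_basis: str         # statute or order cited, if any
    action_requested: str    # e.g., "remove", "geo-block", "suspend account"
    action_taken: str        # what the platform actually did
    content_reference: str   # URL or identifier of the affected content

record = GovernmentRequest(
    received=date(2022, 12, 1),
    requesting_body="Example Ministry of Communications",
    country="XX",
    legal_basis="unspecified",
    action_requested="remove",
    action_taken="geo-blocked",
    content_reference="https://twitter.com/example/status/123",
)
print(json.dumps(asdict(record), default=str, indent=2))
```

Publishing entries like this routinely, rather than selectively, is what would distinguish genuine transparency from score-settling.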
The Twitter Files thus far are a missed opportunity. To settle scores with Twitter’s previous leaders, the platform’s new owner is pointing to niche examples of arguable excesses and missteps, possibly creating far more distrust in the process. And yet there is a real need for public understanding of how platform moderation works, and visibility into how enforcement matches up against policy. We can move toward genuine transparency—and, hopefully, toward a future in which people can see the same facts in similar ways.