What if I told you the U.S. government was actively involved in suppressing free speech—not directly, but by pressuring a private company to do it for them? This is what the Twitter Files revealed. They are a collection of internal documents, emails, and communications from Twitter—now rebranded as X—that expose a shocking level of government interference. Released in late 2022 after Elon Musk purchased the company, these files show how federal agencies, like the FBI, flagged posts for moderation, effectively using the platform to silence lawful speech.
But it goes deeper. The files also uncovered how the Biden administration worked to suppress dissenting opinions during the COVID-19 pandemic, particularly from doctors and scientists who questioned the prevailing narratives. These weren’t conspiracy theories—they were legitimate medical debates that the government decided were too dangerous to allow. By flagging posts for removal and pressuring Twitter to comply, the administration effectively quashed discussions about lockdowns, vaccines, and public health strategies. This wasn’t just content moderation—it was a direct attack on the principles of open dialogue and free inquiry, coordinated at the highest levels of power.
Elon Musk’s decision to open up these archives wasn’t just about transparency; it was a wake-up call. The Twitter Files show us how the flow of information in the digital age can be manipulated, even by those sworn to uphold constitutional rights. From the suppression of COVID-related dissent to the silencing of major news stories like the Hunter Biden laptop, the files reveal how the government used private platforms as a tool for censorship. So, what does this mean for the future of free speech, and how do we protect the digital public square from becoming a weapon of government control?
The Twitter Files are not just a story about one company’s internal decisions; they’re a blueprint for understanding the levers of influence in our information ecosystems. These revelations are like a controlled experiment where we can observe, in real time, how technology platforms become arenas for debates over free speech, misinformation, and governmental power. Let’s dive into the mechanics of these systems, their unintended consequences, and what they reveal about the fragility of our digital commons.
Let’s unpack this…
What We Learned About Violations of Trust and Governance
The “Twitter Files” didn’t just shine a light on how one platform operates—they exposed critical failings that raise questions about transparency, accountability, and trust in the digital public square. Two congressional hearings stand out for uncovering how Twitter’s actions crossed boundaries in ways that many argue violated public trust and potentially constitutional protections.
February 8, 2023 – House Oversight Committee Hearing
This hearing directly questioned former Twitter executives—Vijaya Gadde, Yoel Roth, and James Baker—about decisions that many lawmakers alleged amounted to partisan censorship. The focus was on Twitter’s suppression of the Hunter Biden laptop story in the critical weeks before the 2020 election. The platform blocked the story under its hacked materials policy, despite internal uncertainty about whether the policy actually applied.
During the hearing, it became clear that Twitter had acted with little precedent, effectively silencing a news story with potential political ramifications. Critics argued that this amounted to election interference, as the decision influenced what information voters had access to during a pivotal moment.
Moreover, testimony revealed that Twitter executives worked closely with government officials, raising alarms about potential violations of the First Amendment. While private companies can moderate content as they see fit, these collaborations blurred the line between government influence and corporate decision-making.
March 9, 2023 – House Judiciary Committee Hearing
This hearing centered on testimony from journalists Matt Taibbi and Michael Shellenberger and delved into Twitter’s internal communications with government agencies. The “Twitter Files” showed that federal entities, including the FBI, flagged content for moderation under the guise of combating misinformation.
The concern here was clear: When does cooperation between a government agency and a private company become coercion? The files suggested that Twitter’s employees, under significant pressure, often removed or down-ranked content flagged by federal entities. Critics argue this could constitute an indirect violation of the First Amendment, where government influence led to the suppression of lawful speech.
A Quick Look at the Original Twitter Files Threads:
Part 1: The Hunter Biden Laptop Story – A Lesson in Policy Overreach
The Hunter Biden laptop story isn’t just a case of one platform making a controversial decision—it’s a blueprint for understanding how government influence, social media policies, and misinformation narratives can converge to shape public discourse. The Twitter Files reveal that the suppression of this story wasn’t an isolated incident, but part of a broader, coordinated effort involving not just Twitter—now X—but also Facebook, the FBI, and key political figures.
When the New York Post broke the story in October 2020, Twitter quickly blocked it under its “hacked materials” policy. But internal communications revealed cracks in this decision. Brandon Borrman, then Twitter’s VP of Global Communications, questioned whether the policy even applied since there was no evidence the materials were hacked. Yet the story was still suppressed. Why? Because the narrative surrounding it had already been established.
Days after the initial block, a letter signed by over 50 former intelligence officials claimed the laptop had “all the classic earmarks of a Russian information operation.” The letter didn’t confirm the materials were fake, but it planted enough doubt to retroactively justify treating the story as misinformation. Democratic leaders amplified this narrative, and the pressure on platforms to keep the story contained only grew.
But Twitter wasn’t the only platform to respond. Facebook, as Mark Zuckerberg later revealed, also limited the distribution of the story. Why? Because the FBI had warned Facebook about potential “Russian disinformation” campaigns ahead of the election. Although the FBI didn’t mention the Hunter Biden story explicitly, the timing and context were enough to spur Facebook into action, further throttling the story’s reach.
The coordination between government agencies and social media platforms went deeper than policy enforcement—it represented a system where external pressures could dictate what millions of people were allowed to see and discuss. Platforms became not just moderators of content, but arbiters of public discourse, aligning their decisions with government warnings and political narratives.
Part 1.5: Jim Baker – The Black Box of Oversight
A supplemental thread of the Twitter Files revealed a shocking twist: Jim Baker, Twitter’s Deputy General Counsel and a former FBI General Counsel, had been quietly vetting the files before their release, without approval from Twitter’s new leadership. This discovery explained the delays in publishing additional installments and raised serious questions about transparency and institutional bias.
Baker wasn’t just any executive. His history included key roles in controversial FBI operations like the Steele Dossier and Alfa-Bank server investigations. After resigning from the FBI in 2018 following a probe into press leaks, he joined Twitter—bringing his government ties with him. So, when journalists Matt Taibbi and Bari Weiss found that files marked “Spectra Baker Emails” had gone through Baker’s hands, alarm bells went off.
Once Elon Musk was informed, he acted swiftly to fire Baker, removing a potential roadblock to full transparency. The situation highlighted a key challenge: when individuals with deep government ties are embedded in private companies, can those companies ever operate independently?
The supplemental thread wasn’t just about what was in the files; it was about the forces working to control their release. It showed how even efforts to expose the truth can face resistance from within, and how ensuring transparency requires dismantling entrenched systems of influence.
Part 2: Shadow Banning – Algorithms and Hidden Filters
Bari Weiss unveiled Twitter’s use of “visibility filtering,” often referred to as shadow banning. This is where things get fascinating: Twitter didn’t just block accounts outright—it tweaked their visibility. Dan Bongino was on a “Search Blacklist,” Dr. Jay Bhattacharya couldn’t trend, and Charlie Kirk was set to “Do Not Amplify.”
Weiss’s thread laid bare the hidden mechanics of how Twitter controlled visibility and reach on its platform. Contrary to its public denials, shadow banning, deamplification, and blacklisting were standard practices. These decisions weren’t made by frontline moderators but by a small, elite group known as SIP-PES (Site Integrity Policy, Policy Escalation Support). This secretive team, which included top executives like Vijaya Gadde and Yoel Roth, and even CEOs Jack Dorsey and Parag Agrawal, handled politically sensitive cases without leaving a trace in the company’s standard ticketing system.
One striking example was @LibsOfTikTok, an account repeatedly flagged and placed on a “Trends Blacklist” despite internal memos admitting it hadn’t directly violated Twitter’s rules. The account faced six suspensions in 2022 alone, each lasting up to a week, on the justification that its posts encouraged harassment of medical providers over gender-affirming care. Yet when the account’s owner was doxxed and her home address posted publicly, Twitter took no action, claiming the post didn’t violate its policies.
Slack messages from Yoel Roth further revealed a deliberate strategy to reduce the spread of “harmful” content without removing it outright. Roth argued that limiting visibility through shadow banning and deamplification was an effective way to mitigate harm while keeping content technically on the platform. This approach allowed Twitter to control the conversation without appearing overtly censorious.
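To make the mechanics concrete, here is a minimal, hypothetical sketch of how per-account visibility flags of this kind could gate a platform’s search, trends, and ranking pipelines. Only the three flag names echo labels reported in the thread; the data model, function names, and the down-ranking factor are invented for illustration and are not Twitter’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    # Hypothetical per-account flags named after labels reported in the
    # thread; the real storage and enforcement details were never public.
    search_blacklist: bool = False   # the tag reportedly applied to Dan Bongino
    trends_blacklist: bool = False   # reportedly applied to Dr. Jay Bhattacharya
    do_not_amplify: bool = False     # reportedly applied to Charlie Kirk

def filter_search(results: list[Account]) -> list[Account]:
    # Blacklisted accounts never surface in search, with no notice to anyone.
    return [a for a in results if not a.search_blacklist]

def filter_trends(candidates: list[Account]) -> list[Account]:
    # Trends-blacklisted accounts are dropped before trends are computed.
    return [a for a in candidates if not a.trends_blacklist]

def rank_score(account: Account, base_score: float) -> float:
    # "Do Not Amplify" down-ranks rather than deletes: the tweet stays up
    # but reaches a fraction of its normal audience (the factor is invented).
    return base_score * 0.1 if account.do_not_amplify else base_score

# The account looks perfectly normal to its own owner, which is what makes
# the intervention a "shadow" ban.
bongino = Account("dbongino", search_blacklist=True)
assert filter_search([bongino]) == []
```

The key property of such a design is its asymmetry: the account owner sees nothing different, while every distribution surface quietly consults the flags.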
The thread exposed how Twitter secretly manipulated public discourse, amplifying some voices while silencing others, all without users’ knowledge. By making decisions behind closed doors, Twitter wielded its power to shape narratives in ways that raise serious concerns about transparency, bias, and the influence of private platforms on public debate.
Parts 3, 4 & 5: The Removal of Donald Trump
The next three threads of the Twitter Files peel back the layers of decision-making at Twitter, exposing the interplay of internal debates, political pressure, and government influence on the platform’s most controversial actions. These threads take us inside the rooms where some of the most impactful moderation decisions were made—and reveal just how complex, and at times troubling, those processes were.
The first of the three, Matt Taibbi’s thread, reveals Twitter’s close coordination with federal agencies like the FBI, DHS, and the DNI. Yoel Roth, Head of Trust and Safety, held weekly meetings with these groups, discussing flagged content, including tweets about mail-in ballots. Even when flagged tweets didn’t violate policies, Twitter often applied labels or limited their reach, following federal input.
Notably, no moderation requests from the Trump campaign or Republicans appeared in the reviewed logs, raising concerns about bias. This thread highlights how federal influence shaped Twitter’s moderation decisions, blurring the line between collaboration and censorship.
Michael Shellenberger’s thread dives into the events leading to Twitter’s unprecedented decision to permanently ban a sitting U.S. president. In the chaotic aftermath of the January 6 Capitol attack, Twitter faced extraordinary pressure from political leaders, media, and its own staff to take action against Donald Trump.
Internal communications show a heated debate among executives. Some, like Yoel Roth (Head of Trust and Safety), argued that Trump’s tweets risked inciting further violence. Others were concerned about setting a dangerous precedent. On January 8, 2021, Twitter cited two of Trump’s tweets as the basis for the suspension, interpreting them as “glorifying violence.” The decision was hailed by some as necessary for public safety, while critics saw it as a chilling moment for free speech.
Bari Weiss’s thread focuses on the intense internal debate over Trump’s suspension. Contrary to the public narrative that this was a clear-cut decision, internal Slack messages reveal deep divisions among Twitter employees. Some argued that Trump’s tweets did not explicitly violate the platform’s policies, while others felt his rhetoric posed an existential threat to public order.
What’s striking is how the decision evolved in real time, with top executives, including Jack Dorsey, ultimately green-lighting the ban. The process wasn’t just about policy—it was about navigating immense political pressure and public scrutiny. The thread highlights the ethical and practical challenges of moderating content from influential figures in moments of crisis.
Donald Trump’s removal from Twitter following the January 6th Capitol riots shows how platforms make governance decisions under extreme pressure. Internal Slack messages reveal a process where policies were reinterpreted to fit a specific outcome. One employee compared Trump to historical figures like Hitler—a clear example of how emotional reactions can distort proportionality.
This was a watershed moment: platforms are not neutral. They are actors in the socio-political landscape, making decisions with far-reaching consequences.
Part 6: The FBI’s Role in Moderation – Where Security Meets Speech
Thread after thread, the Twitter Files show us a troubling pattern: a private platform working hand-in-glove with federal agencies like the FBI to moderate content—even when the flagged material was satirical or trivial. This thread reveals how deeply intertwined these relationships were and raises serious questions about the boundaries between collaboration and overreach.
Here’s what we learned: the FBI regularly sent Twitter lists of accounts and tweets it had flagged for “possible violations.” Many were low-engagement, satirical, or simply harmless jokes, like tweets about voting on the wrong day. In some cases Twitter acted anyway, suspending accounts or applying labels even when the flagged content didn’t clearly violate its policies.
The connection wasn’t casual—it was institutional. Twitter executives held regular meetings with the FBI, DHS, DOJ, and DNI, where flagged content and broader moderation policies were discussed. In one instance, Twitter’s legal team confirmed there were “no impediments” to sharing classified information with the company—a strikingly close relationship for a private platform.
Part 7: The Hunter Biden Laptop Redux – Priming Platforms
The Hunter Biden laptop story didn’t land on a neutral platform—it hit a Twitter already primed by federal agencies to view it as disinformation. Weekly FBI meetings conditioned Yoel Roth, Twitter’s Head of Trust and Safety, to expect Russian “hack-and-dump” operations, leading him to dismiss the story as suspicious immediately.
Former FBI General Counsel Jim Baker, now at Twitter, wasn’t alone—there were so many ex-FBI employees at Twitter that they created a private Slack channel. Meanwhile, Roth attended a September 2020 Aspen Institute exercise simulating a “hack-and-dump” involving Hunter Biden, shaping how platforms would handle such stories.
This thread reveals how FBI influence and coordinated exercises shaped Twitter’s perception of the laptop story, raising critical questions about the independence of platforms when primed by government narratives.
Part 8: The Pentagon’s Covert Accounts and Role in U.S. Military Propaganda
Lee Fang’s thread exposes how Twitter knowingly facilitated a covert U.S. military propaganda network designed to push pro-Western narratives. Starting in 2017, Twitter granted whitelist privileges to CENTCOM accounts, allowing them to bypass scrutiny. These accounts used fake bios, AI-generated images, and fabricated claims to target U.S. adversaries like Russia, China, and Iran.
Despite internal awareness of the deception, Twitter executives, including Jim Baker, discussed strategies to obscure the Pentagon’s involvement rather than taking immediate action. Many accounts continued posting until 2022, years after they were flagged for violating Twitter’s policies.
This delayed response contrasts sharply with Twitter’s quick takedowns of foreign state-backed influence campaigns, raising concerns about bias in enforcement and the platform’s complicity in covert U.S. operations. It highlights the tension between Twitter’s public stance on transparency and its behind-the-scenes actions.
Part 9: The Intelligence Apparatus Expands
This thread reveals how Twitter, under significant influence from federal agencies, blurred the lines between moderating foreign influence and surveilling domestic content. The FBI’s Foreign Influence Task Force (FITF) regularly flagged content to Twitter, often involving accounts and posts that were neither foreign nor influential. What started as an effort to combat foreign disinformation expanded into a wide-ranging oversight operation, targeting even fringe, low-engagement domestic content.
In one example, the FBI’s New York office requested user IDs and handles for a list of accounts cited in a Daily Beast article. Senior Twitter executives, including former FBI lawyer Jim Baker, expressed no hesitation in complying. Baker himself found it “odd” that the FBI was actively searching for policy violations on Twitter, yet the company continued to support these requests without question.
This dynamic raises critical questions: why was a task force focused on foreign threats spending its resources monitoring domestic speech? And why did Twitter executives accept these requests without challenging their scope or appropriateness?
The revelations show how government agencies expanded their influence over a private platform, using broad mandates to police content that often had little to do with their stated goals. It’s a case study in how mission creep and unchecked collaboration between public and private entities can reshape the boundaries of free speech and surveillance.
Part 10: COVID-19 and the Battle for Truth
David Zweig’s thread reveals a troubling pattern: during the COVID-19 pandemic, Twitter actively suppressed accurate but contrarian content, favoring alignment with dominant public health narratives over factual accuracy. In one example, a tweet by @KelleyKga, showing CDC data, was flagged as “misleading” and engagement was disabled, even though the data came directly from official sources. Meanwhile, the original post it responded to—which contained actual misinformation—remained untouched.
This wasn’t an isolated case. A physician’s tweet referencing results from a peer-reviewed study was also labeled “misleading,” highlighting how posts challenging mainstream narratives, even when factually correct, were disproportionately targeted. Dr. Andrew Bostom, a Rhode Island physician, was permanently suspended after receiving five strikes for alleged misinformation. An internal audit conducted after legal intervention, however, found that only one of his strikes was valid, and even that one was for a tweet citing legitimate data that was merely inconvenient to the prevailing narrative about COVID-19 risks versus the flu in children.
What these examples show is a system where content moderation prioritized conformity over truth. Tweets that questioned public health messaging were flagged and suppressed, while misinformation that aligned with dominant narratives often went unchallenged. By silencing valid dissent, Twitter’s moderation practices during the pandemic undermined its role as a platform for open dialogue and critical discussion on urgent public health issues.
Parts 11–16: Unmasking Influence – How Twitter Became the Battleground for Public Discourse
The Twitter Files have grown into an expansive, intricate web of revelations—each thread peeling back another layer of how this platform was influenced by external forces and internal decisions. The sheer volume of information is overwhelming, but it’s also critical to understanding how public discourse has been shaped by unseen forces. So rather than diving into every detail, let’s summarize some of the remaining key threads below, focusing on their major themes and implications.
Part 11: How the Intelligence Community Moved Into Twitter’s Backroom
This thread exposes how Twitter allowed intelligence agencies unprecedented access to shape moderation decisions. Federal entities, from the FBI to other intelligence arms, were deeply embedded in the platform’s processes, raising serious questions about the independence of private platforms in the face of government influence.
Part 12: The FBI’s Role as Twitter’s Content Pipeline
Here, we see the FBI’s role as a central hub for government agencies to funnel moderation requests. Described as the “belly button” for filtering information to Twitter, the FBI acted as a bridge between the intelligence community and the platform, amplifying concerns about the concentration of influence.
Part 13: The White House vs. COVID Dissent on Twitter
This thread uncovers direct pressure from the White House on Twitter to suppress dissenting views about COVID-19, even when those views were based on credible scientific data. It reveals the tension between public health messaging and free discourse during a global crisis.
Part 14: The Russiagate Illusion—Twitter’s Role in Spreading Misinformation
Russiagate dominated the media for years, but this thread highlights Twitter’s complicity in letting misinformation about alleged Russian interference spread. It reveals how unverified narratives, like the claim that Russian bots drove the #ReleaseTheMemo hashtag, gained traction through media amplification and platform silence, even as Twitter’s own internal analysis found no evidence of Russian involvement.
Part 15: Hamilton 68—The Tool That Built a Phantom Threat
Hamilton 68 claimed to track Russian influence, but this thread exposes the tool’s flawed methodology: its dashboard monitored a fixed, secret list of roughly 600 accounts, most of which belonged to ordinary, often American, users. By misattributing their routine activity to foreign influence, it created a false narrative that shaped media reporting for years.
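To see why this methodology fails, here is a minimal, hypothetical sketch of how a Hamilton 68-style dashboard operates: it aggregates the activity of a fixed, opaque watchlist and reports the result as foreign influence. The handles and data below are invented; the point is structural, because no step ever re-validates who is actually on the list.

```python
from collections import Counter

# Invented stand-in for the dashboard's fixed, secret watchlist. Per the
# thread, the real list held roughly 600 accounts, most of them ordinary
# (often American) users rather than Russian operatives.
WATCHLIST = {"@knitting_grandma", "@midwest_dad", "@actual_troll_farm"}

def trending_narratives(tweets: list[dict]) -> list[tuple[str, int]]:
    """Aggregate hashtags used by watchlisted accounts.

    Whatever surfaces here gets reported downstream as a "Russian-linked"
    narrative, no matter who tweeted it or why. If the watchlist is mostly
    ordinary users, misattribution is guaranteed by construction.
    """
    counts = Counter(
        tag
        for tweet in tweets
        if tweet["author"] in WATCHLIST
        for tag in tweet["hashtags"]
    )
    return counts.most_common()

# Routine domestic chatter from listed accounts...
sample = [
    {"author": "@knitting_grandma", "hashtags": ["#ReleaseTheMemo"]},
    {"author": "@midwest_dad", "hashtags": ["#ReleaseTheMemo"]},
]

# ...comes out the other end labeled as foreign-influence activity.
print(trending_narratives(sample))  # [('#ReleaseTheMemo', 2)]
```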
Part 16: A Controlled Experiment on Media Bias
This lighter thread experiments with how narratives are constructed in media coverage. It serves as a meta-commentary on the broader issues exposed in the Twitter Files, demonstrating how easily perceptions can be influenced.
Conclusion
The Twitter Files show us that platforms are not just neutral hosts for content; they are deeply embedded in the power structures of our world. This has profound implications for how we think about governance, accountability, and the future of free speech. The question isn’t just whether platforms can be trusted—it’s whether the frameworks we’ve built for digital communication are sustainable in a world where every interaction is a potential node of influence.