Common Sense and Ramblings In America

Does Big Tech Have an Obligation to Allow Freedom of Speech Due to the Protections It Enjoys Under Section 230?

I have written several articles on postings related to Big Tech, social media and corporations. A list of links is provided at the bottom of this article for your convenience. This article will, however, address different aspects of these industries.

You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created nearly 25 years ago to protect internet platforms from liability for many of the things third parties say or do on them. And now it’s under threat by one of its biggest beneficiaries: President Trump. In combination with an executive order issued in May, the Justice Department’s proposed law, released in September, could ensure that the president can say whatever he wants on social media and address the accusation that platforms are biased against conservatives.

Section 230 says that internet platforms that host third-party content — think of tweets on Twitter, posts on Facebook, photos on Instagram, reviews on Yelp, or a news outlet’s reader comments — are not liable for what those third parties post. For instance, if a Yelp reviewer were to post something defamatory about a business, the business could sue the reviewer for libel, but it couldn’t sue Yelp. Without Section 230’s protections, the internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely go dark.

In the wake of the Capitol riot, Facebook, Twitter and other digital platforms suspended former President Donald Trump’s accounts. Some conservatives and many free-speech advocates howled that this was a violation of the First Amendment at best, or a coordinated Big Tech attempt to suppress dissenting speech at worst. A handful of world leaders also complained, including German Chancellor Angela Merkel.

At face value, these are not unreasonable criticisms. In the 2019 case Knight First Amendment Institute v. Trump, the Second Circuit Court of Appeals unanimously upheld a lower court’s decision that found it was Trump who violated the First Amendment when he blocked Twitter users who criticized him. The court’s reasoning was that his account operates “to conduct official business and to interact with the public.” Wasn’t Twitter, then, equally guilty of damaging free speech by suspending Trump?

As critics of “cancel culture” and similar attempts to stifle dissent and debate, as well as experts on liberal democracy and electoral integrity, we offer a simple, if surprising, answer: No. 

First, no serious person really thinks free speech should be absolute and without consequences. For example, individuals and businesses can be sued for defamation or false advertising, protesters can be restricted from blaring their messages through loudspeakers at 3 a.m. and — most relevant to the case at hand — people have no right to speech that provokes a person or group to engage in violence. Landmark Supreme Court decisions have upheld the latter notion.

Second, digital platforms did not threaten free speech by flagging Trump’s untrue posts about election fraud or later banning him for his glorification of violence. Twitter made clear in its decision to suspend his account permanently that it did so out of an abundance of caution, driven by his subsequent violations of its rules and the potential for further incitements to violence in the context of the Capitol riot. The company’s reasoning had nothing to do with ordinary political speech, Trump’s campaign promises or even his lies. Nor does Twitter’s takedown of nearly 70,000 QAnon accounts belonging to some of his most ardent supporters threaten free speech. Some of them were using digital platforms to conspire toward further insurrection, a crime under U.S. law.

Furthermore, if we truly care about free speech, calls to overturn or reform how U.S. law currently regulates social media would do more harm than good. Counterintuitively, social-media companies responded to Trump by following exactly what the congressional authors intended from Section 230, the law that gave birth to today’s internet. Digital platforms are empowered by this law to engage in aggressive, albeit selective, moderation. From taking down child pornography to censoring hate speech, the application of this law now rightly includes de-platforming the person who was just recently the most powerful person in the world.

Indeed, selective screening and blocking of content and users is what fosters the digital marketplace of ideas. In 1996, Congress’ Communications Decency Act — and specifically Section 230 — gave tech platforms an exemption from civil lawsuits, granting them immunity against defamation, libel and negligence. Section 230 is intended, among other goals, to promote free speech precisely by allowing these companies to moderate the content posted by third-party users, including but not restricted to, indecent content and potential criminal acts. 

The law removes the fear of civil liability that digital platforms would experience without its protections. If tech platforms are not legally responsible for what their users write and say online, then they can and should exercise discretion when removing misinformation, policing platform manipulation and curbing cyberbullying. Section 230(c)(2)(A) of the Communications Decency Act clearly endorses the legitimacy of “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected …”

Conversely, because tech platforms are not considered publishers or even distributors, they do not have to remove posts that disgruntled parties claim are libelous, defamatory or negligent. Instead, it is the third parties who author the posts that are liable. In short, Section 230 shields platforms from having to impose blanket restrictions and engage in indiscriminate censorship, while still allowing them to curate their sites as they see fit.

Even though they are private companies without the legal obligation to free speech that applies to the government, tech platforms are encouraged by Section 230 and ensuing judicial interpretations to moderate their content to better foster the exchange of ideas. In turn, this potentially allows a more vibrant political ecosystem to flourish. Consider that users’ social-media activity has provided (true) information about candidates, promoted voter education and offered corrections to misinformation about election integrity. And even though controversial, social-media platforms allow politicians to identify and target voters; more surgical pitches increase electoral turnout and political engagement. 

At the same time, under the cover of Section 230, some unscrupulous politicians have serially deceived citizens, including the brazen lie that the 2020 election was stolen. Because digital platforms cannot and do not want to screen and curate everything, “fake news” can and does proliferate. The algorithms used by digital platforms can accentuate and more effectively spread lies and conspiracy theories, even if inadvertently, especially due to algorithmic amplification: AI making choices about what content to show users based on followers, shares and overall engagement. Translation: polarizing, sexualized and extreme videos that glorify violence and espouse conspiracy theories may draw the most eyeballs and clicks.
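The amplification dynamic described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not any platform's actual algorithm: the post fields and engagement weights are invented assumptions, but they show how ranking purely by engagement signals pushes the most provocative content to the top of a feed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shares: int
    comments: int
    likes: int

def engagement_score(post, w_shares=3.0, w_comments=2.0, w_likes=1.0):
    # Hypothetical weighting: shares and comments (stronger signals of
    # reaction) count more than passive likes. The weights are invented
    # for illustration only.
    return w_shares * post.shares + w_comments * post.comments + w_likes * post.likes

def rank_feed(posts):
    # Sort the feed by engagement score, highest first. Content that
    # provokes the strongest reaction rises to the top, regardless of
    # whether it is true or civil.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    Post("Measured policy analysis", shares=5, comments=10, likes=200),
    Post("Outrage-bait conspiracy claim", shares=120, comments=300, likes=150),
]

for p in rank_feed(posts):
    print(round(engagement_score(p), 1), p.text)
```

Even though the measured post has more likes, the outrage-bait post dominates the ranking because shares and comments are weighted more heavily, which is the engagement-driven feedback loop the paragraph above describes.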

Yet there has always been a fraught relationship between free speech, media and politics. Deception and demagoguery are as old as politics, or at least the written word, the first vehicle for widely spreading these ills. The list of politicians who incited violence through modern means includes: Mussolini (radio/film), Hitler (radio/film), Perón (radio/television), Milošević (television) and Rwanda’s Hutu Power (radio). And, as we witnessed on Jan. 6, speeches delivered by politicians in the flesh can be equally or even more effective than messages scribbled on the internet.

What is not in doubt, however: Trump’s words and the subsequent actions taken by social media fall outside the bounds of free speech. Trump was at a rally outside the White House, which was filmed, and he probably committed a crime: incitement to insurrection. And then the insurrection actually happened. While criminal incitement arguably doesn’t require that any third party act on the mere suggestion, the fact that his supporters did act makes this a much more compelling, and probably easier to prove, instance of incitement. This had not been true for previous episodes in which Trump used inflammatory language to stir a crowd into a frenzy, whether on social media or in person.

Twitter’s response was fair, proportional and prudent: Trump was initially warned and his account temporarily locked. He then again violated the company’s policy against glorifying violence. The nation subsequently learned about Trump’s unwillingness to quell the riot once it was in progress, which led to his second impeachment.

But Section 230 also fosters the objectivity needed to counteract the scourges of misinformation and hate. It is precisely the tech platforms’ moderation practices that eventually allow facts to surface and spread. If something like Section 230 and the commercial internet existed in earlier times, it might have been easier to arrest the propaganda advanced by leaders bent on sowing bloodshed. Regime dissenters who tried to use traditional media to counteract such vituperations were unable to circumvent the state’s censorship and repression. Fortunately today, in the U.S. case, the law has created a vibrant, albeit imperfect, marketplace of ideas with genuinely diverse viewpoints. It has also cultivated a fact checking industry that continues to improve. 

Digital platforms are developing the necessary antibodies to combat hate speech and calls for violence in a way that promotes free speech, including banning politicians who are the real threats to the First Amendment and the Constitution. The decisions by Big Tech to kick Trump off vindicates the value of Section 230. 

Of course, one might argue that social media banned Trump simply because Congress is flexing its muscle about reforming or even rescinding Section 230, or even because Big Tech is pandering to its liberal employees. There might be some truth to that view, as Twitter and other social media platforms have taken a hit by losing a big chunk of their user base and overall engagement after the former president’s de-platforming.

Even so, gutting Section 230 to remove digital platforms’ protections from civil liability for the content posted by third parties would make them much more risk averse and thus truly censorious. Before Section 230 was the law of the land, digital platforms such as CompuServe did not do much moderating at all or, more typically, they did too much of it, truly stifling viewpoint diversity and engaging in pearl-clutching prudery. If this occurred across today’s internet, facts, logic and evidence would suffer in its wake. More to the point, Twitter, Facebook and even YouTube would not exist in their current form because their business model is based on collecting, processing and selling the data created and shared through vibrant third-party engagement.

What about the view that Section 230 is a shield used by digital platforms that foment outrage through their algorithms? While there is no doubt that some social-media companies have been foot-dragging on content moderation for many years and that they have the strength and deep pockets to better police their platforms, the same was once true for prior incarnations of cutting-edge media distribution channels, such as cable television providers that adopted voluntary systems. Getting this right sometimes takes time, learning and perhaps better regulation. Also, newspapers, magazines and even television networks can enjoy Section 230 protections if their websites allow for users’ comments.

The government could always compel social-media companies such as Twitter to stop using algorithmic amplification when offering content suggestions and also reallocate their budgets toward moderation that is more qualitative and human-based as a condition for Section 230 protections. But there are trade-offs here. Social networks can also recommend content that is edifying, and AI can help facts spread just as much as lies. And there is no one-size-fits-all way to moderate content. It can potentially involve crowdsourcing (think Mechanical Turk), one person deputized to do so, a group of anointed ethicists or improved AI. AI itself relies on human coding and intuition (think training data sets), which suggests all moderation requires planning, judgment and learning.

A better use of Congress’ time, if it is worried about technology and democracy, would be to promote civic education and provide broadband to all Americans, helping both to spread accurate information about how elections are conducted and to debunk conspiracy theories by disseminating facts on, yes, digital platforms. The truth is, the demand-side factors driving misinformation and conspiracy theories would endure even if the commercial internet as currently constituted disappeared tomorrow. The answer to the rampant fear, distrust, polarization and uncertainty about a fast-changing world is not to ban the messenger but to do something about the message. Policymakers would be wise to focus on those who have been left behind by globalization, racial injustice and ignorance.

Section 230’s salacious origins

In the early ’90s, the internet was still in its relatively unregulated infancy. There was a lot of porn floating around platforms like AOL and the World Wide Web where anyone, including our nation’s impressionable children, could see it. This alarmed some lawmakers. In an attempt to regulate this situation, in 1995 lawmakers introduced a bipartisan bill called the Communications Decency Act which would extend to the internet laws governing obscene and indecent use of telephone services. This would also make websites and platforms responsible for any indecent or obscene things their users posted.

In the midst of this was a lawsuit between two companies you might recognize: Stratton Oakmont and Prodigy. The former is featured in The Wolf of Wall Street, and the latter was a pioneer of the early internet. But in 1995, Stratton Oakmont sued Prodigy for defamation after an anonymous user claimed on a Prodigy bulletin board that the financial company’s president engaged in fraudulent acts. As the New York Times explains the court’s decision:

The New York Supreme Court ruled that Prodigy was “a publisher” and therefore liable because it had exercised editorial control by moderating some posts and establishing guidelines for impermissible content. If Prodigy had not done any moderation, it might have been granted free speech protections afforded to some distributors of content, like bookstores and newsstands.

Fearing that the Communications Decency Act would stop the burgeoning internet in its tracks and mindful of the court’s decision, then-Rep. (now Sen.) Ron Wyden and Rep. Chris Cox authored an amendment that said that “interactive computer services” were not responsible for what their users posted, even if those services engaged in some moderation of that third-party content. The internet companies, in other words, were mere platforms, not publishers.

“What I was struck by then is that if somebody owned a website or a blog, they could be held personally liable for something posted on their site,” Wyden explained to Vox’s Emily Stewart last year. “And I said then — and it’s the heart of my concern now — if that’s the case, it will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.”

Section 230 also allows those services to “restrict access” to any content they deem objectionable. In other words, the platforms themselves get to choose what is and what is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument frequently employed by people who are suspended or banned from these platforms — that the Constitution says they can write whatever they want — doesn’t apply, no matter how many times Laura Loomer tries to test it. As Harvard Law professor Laurence Tribe points out, the First Amendment argument is also generally misused in this context.

Wyden likens the dual nature of Section 230 to a sword and a shield for platforms: They’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

The Communications Decency Act was signed into law in 1996. The indecency and obscenity provisions, which made it a crime to transmit such speech if it could be viewed by a minor, were immediately challenged by civil liberty groups. The Supreme Court would ultimately strike them down, saying they were too restrictive of free speech. Section 230 stayed, and the law that was initially meant to restrict free speech on the internet instead became the law that protected it.

This protection has allowed the internet to thrive. Think about it: Websites like Facebook, Reddit, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so.

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the internet — think 8chan or sites that promote racism — to flourish along with the best. Simply put, internet platforms have been happy to use the shield to protect themselves from lawsuits, but they’ve largely ignored the sword to moderate the bad stuff their users upload.

Recent challenges

In recent years, Section 230 has come under threat. In 2018, two bills — the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) — were signed into law, which changed parts of Section 230. Now, platforms could be deemed responsible for prostitution ads posted by third parties. These were ostensibly meant to make it easier for authorities to go after websites that were used for sex trafficking, but they did this by carving out an exception to Section 230. The law was vulnerable.

Amid all of this was a growing public sentiment that social media platforms like Twitter and Facebook were becoming too powerful. In the minds of many, Facebook even influenced the outcome of the 2016 presidential election by offering up its user data to shady outfits like Cambridge Analytica. There were also allegations of anti-conservative bias. Right-wing figures who once rode the internet’s relative lack of moderation to fame and fortune were being held accountable for various infringements of hateful content rules and kicked off the very platforms that helped create them. Alex Jones and his expulsion from Facebook and other social media platforms is perhaps the most illustrative example of this.

Republican Sen. Ted Cruz, demonstrating a profound misunderstanding of Section 230, claimed in a 2018 op-ed that the law required the internet platforms it was designed to protect to be “neutral public forums.” Lawmakers have tried to introduce legislation that would fulfill that promise ever since.

Republican Rep. Louie Gohmert introduced the Biased Algorithm Deterrence Act in 2019, which would legally classify any social media service that uses algorithms to moderate content without the user’s permission or knowledge as a publisher, not a platform, thereby removing Section 230’s protections. Later that year, Republican Sen. Josh Hawley introduced the Ending Support for Internet Censorship Act, which would require that, in order to be granted Section 230 protections, social media companies show the Federal Trade Commission (FTC) that their content moderation practices are politically neutral.

Neither of those bills went anywhere, but the implications were obvious: Emboldened by FOSTA-SESTA, the two sex-trafficking bills from 2018, lawmakers not only wanted to chip away at Section 230 but were actively testing out ways to do it.

More likely to succeed is a bipartisan bill introduced in March called the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, from Sens. Lindsey Graham and Richard Blumenthal. Here, the lawmakers used the prevention of child pornography as an avenue to erode Section 230 by requiring companies to follow a set of “best practices” developed by a newly established commission or else lose their Section 230 immunity from civil lawsuits over child pornography postings. Some privacy advocates fear that the proposed law would extend to requiring tech companies to provide law enforcement with access to all user content. The bill has bipartisan support, with Hawley and Democrat Dianne Feinstein among its cosponsors. At the end of September, a House version of EARN IT was introduced by Reps. Sylvia Garcia and Ann Wagner, a Democrat and Republican, respectively, paving the way for EARN IT to get a House vote as well.

Trump’s executive order

President Trump, who has benefited greatly from social media, is trying to dial back Section 230’s protections through an executive order. Back in May, Trump signed his “Executive Order on Preventing Online Censorship” roughly 48 hours after Twitter applied a new policy of flagging potentially false or misleading content to two of the president’s tweets. At the signing ceremony, Trump referred to Twitter’s actions as “editorial decisions,” and Attorney General William Barr referred to social media companies as “publishers.”

“They’ve had unchecked power to censure, restrict, edit, shape, hide, alter virtually any form of communication between private citizens or large public audiences,” Trump said at the time. “We cannot allow that to happen, especially when they go about doing what they’re doing.”

The order says that platforms that engage in anything beyond “good faith” moderation of content should be considered publishers and therefore not entitled to Section 230’s protections. It also calls on the Federal Communications Commission (FCC) to propose regulations that clarify what constitutes “good faith”; the FTC to take action against “large internet platforms” that “restrict speech”; and the attorney general to work with state attorneys general to see if those platforms violate any state laws regarding unfair business practices.

While the order talks a big game, legal experts don’t seem to think much — or even any — of it can be backed up, citing First Amendment concerns. It’s also unclear whether the FCC has the authority to regulate Section 230 in this way, or whether the president can change the scope of a law without any congressional approval.

Barr’s proposals

Barr is not a fan of Section 230, and his Department of Justice has been looking into the law and how he believes it allows “selective” removal of political speech. This has included a set of recommendations from the Justice Department in June and the legislation proposal sent to Congress on Wednesday. The proposal includes the addition of a “good faith” section requiring platforms to spell out their moderation rules, follow them to the letter, explain any moderation decisions to the user whose content is being moderated, and provide the user with the chance to respond. There are also additional carve-outs that would remove civil lawsuit immunity for material that violates anti-terrorism, child sex abuse, cyberstalking, and antitrust laws.

“For too long Section 230 has provided a shield for online platforms to operate with impunity,” Barr said in a statement. “Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America. We therefore urge Congress to make these necessary reforms to Section 230 and begin to hold online platforms accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.”

It’s not clear how Barr determined that platforms are “unlawfully” censoring speech, as First Amendment protections do not extend to private businesses.

Trump and Barr also recently met with some Republican state attorneys general to discuss ways state laws can be used to further dictate how and when social media platforms can moderate their users’ speech.

Needless to say, Section 230’s creator isn’t thrilled.

“As the co-author of Section 230, let me make this clear: There is nothing in the law about political neutrality,” Wyden said. “It does not say companies like Twitter are forced to carry misinformation about voting, especially from the president. Efforts to erode Section 230 will only make online content more likely to be false and dangerous.”

Article 9 – Freedom of thought, conscience and religion

  1. Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief and freedom, either alone or in community with others and in public or private, to manifest his religion or belief, in worship, teaching, practice and observance.
  2. Freedom to manifest one’s religion or beliefs shall be subject only to such limitations as are prescribed by law and are necessary in a democratic society in the interests of public safety, for the protection of public order, health or morals, or for the protection of the rights and freedoms of others.

From the US to the EU, one thing has become painfully clear to me in recent months: free speech and the freedom of conscience are under threat from big tech companies like Facebook and Twitter. Over the past six months, I have been witnessing the daily banning of primarily feminists from both these platforms for the infractions of “misgendering” and “deadnaming” (mentioning a trans-identified person’s previous name, also called their “dead name”) or for simply saying that women don’t have penises. I know, I know, silly me. Biology is changing all the time and we just need to STFU and accept women with penises in our intimate spaces. Or so the anti-science rhetoric of identitarians in recent years goes.

We are in the throes of a cultural revolution where big tech meets women’s and gay rights activists meets the First Amendment and the EU Charter of Fundamental Rights. Over the past year in the UK, teachers have faced disciplinary actions for questioning gender ideology, a mother has been summoned by the police over an online Twitter discussion, and another mother has been threatened with losing custody of her child for complaining that the child was being “encouraged” to transition by a therapist and school teachers. Today the “public square” is quickly becoming the various spaces of social media, with Facebook accounting for over 2.23 billion monthly users and Twitter coming in at 328 million monthly active users. Both these companies have user numbers matching the populations of large countries, yet they are immune from upholding Article 9 of the European Convention on Human Rights (ECHR) and Article 10 of the EU Charter of Fundamental Rights, both of which guarantee the freedom of conscience, in addition to several other UN provisions preventing anti-democracy actions and totalitarianism. Then there is the UK’s own Human Rights Act, Article 9 (HRA), which mirrors the EU legislation, and in the US there is the First Amendment, which guarantees the freedoms of speech, press, religion, assembly and petition.

As Jonathan Best points out, Article 9 of the HRA “protects everyone’s right to believe that gender is a social construct and to reject the concept of gender identity,” and so too do the other pieces of legislation from the US to the EU. So why are Facebook and Twitter immune from upholding these laws when they are manifestly engaging in — and have been for some time — censorship through blocking or banning users of their platforms? Users who are almost always female.

And in the US, the situation is more complex, especially since the passage of Section 230 of the Communications Decency Act of 1996 (also known as Title V of the Telecommunications Act of 1996). This landmark legislation is codified at 47 U.S.C. § 230. Section 230(c)(1) provides immunity from liability for providers and users of an “interactive computer service” who publish information provided by third-party users. What this means is that if I post illegal information on WordPress or Friendster, these companies cannot be held accountable for my having used their platform for illegal ends. Tack onto this legislation the fact that as of April of this year the “Allow States and Victims to Fight Online Sex Trafficking Act,” H.R. 1865, 115th Congress (2018), provides websites immunity for content posted by third parties, with the exclusion of sex trafficking. Big tech companies fought back on this exclusion, warning that the bill could compel them to block controversial political speech, but lost the legal battle. These companies have since been trying to reinstate the immunity they previously held under Section 230 of the Communications Decency Act through NAFTA (North American Free Trade Agreement) renegotiations. And last month they were successful: NAFTA’s substitute, the United States-Mexico-Canada Agreement (USMCA), will now extend the immunity Congress earlier provided with Section 230 of the Communications Decency Act of 1996 (CDA) into neighboring North American countries. Not only is this a gift to the tech industry, but it is a complete paradox. The tech industry lobbied heavily to get back Section 230 immunity by invoking “free expression” for its users while simultaneously taking on the policing of free speech on its platforms. In short, big tech’s request for absolute immunity, in light of its use of Section 230 to justify political bias and censorship, reveals a troubling present for free speech on the net.

What I have been wondering since June is this: how does freedom of expression translate to the freedoms guaranteed by various national and international laws in this era of hyper-censorship at the hands of social media giants?

Google has argued its right to restrict political content, citing the “First Amendment protection for a publisher’s editorial judgments [which] encompasses the choice of how to present, or even whether to present, particular content.” Twitter has issued similar statements. So these tech giants have secured legal immunity under Section 230, which they cite regularly, yet none of them is transparent about its censorship.

It’s not just conservatives being shadow banned; it is also leftist women speaking out against gender ideology, such as Canadian feminist journalist and editor of Feminist Current, Meghan Murphy, whose Twitter account was permanently banned last week. There is an exercise of institutional misogyny across the board, from Facebook to Twitter and many other social media platforms. And let’s be clear here: we are talking about thought and expression policing in full force, where women are not allowed to call men “men” and for doing so are banished from the 21st-century public square. And Murphy is one in a long line of feminists who have been kicked off social media for simply stating a scientific truism.

The question we face as users of these platforms around the world is whether we can hold these tech giants accountable to democratic norms and procedures by demanding that freedom of expression not be dominated by faceless corporations. As it stands, there has been a concerted effort by these social media companies to shut down the voices of leftist women who are pushing back on misogyny on the left. It’s time we pay attention to how our freedoms have been sold to these corporations. We need to demand that the policing of free speech end today and that corporations not be turned into the surrogate for a police state none of us voted for.

Four Solutions For Big Tech Censorship

“It’s not a battle for liberal speech. It’s only a battle for conservative speech,” Dan Gainor of TechWatch, a big tech watchdog publication run by Newsbusters, said. “There is actually no movement afoot to restrict liberal speech.”

President Donald Trump’s tweets almost always receive a flag or fact-check, and prominent news outlets are being slapped with week-long bans for posting factual stories. Conservatives are done with Twitter—but there must be real solutions to fight discrimination by big tech beyond simply running from the problem.

Soon candidates on the right won’t have access to the most vital forms of communication and conservatives will lose elections.

There is still hope, however, and the following four options are the tools to win this fight so speech can once again be protected and free to all—even on Twitter. 

1. Strip Big Tech of Section 230 Protections

This is certainly the most discussed solution today, and that is because it’s an essential starting point to curb bias. President Trump made headlines when he threatened to veto the 2021 National Defense Authorization Act unless it repealed Section 230.

Section 230 is a small portion of the Communications Decency Act of 1996 (CDA)—an act passed with the purpose of preventing minors from accessing sexually explicit materials on the Internet. The CDA itself was an addition to the Telecommunications Act of 1996, which sought to expand competitiveness in this groundbreaking internet market. The CDA was added on as an amendment—Title V—months later and thus Section 230 became law. 

The Section dictates that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, tech companies are not publishers and cannot be held legally responsible for the content posted to their platforms by users.

It also allows big tech companies to remove and block content they deem to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” The law’s only boundary is that removal of speech be done in “good faith.”

Intended to allow internet companies to block harmful content and to avoid consequences for content put out by their users, this section has since been hijacked to pave the consequence-free way for big tech to shadow-ban, silence, and squash conservative thought. 

Representative and House Judiciary Committee member Matt Gaetz (FL-01) has positioned himself on the front lines of the fight against big tech discrimination and for the repeal of Section 230.

“Right now, technology companies enjoy special immunities that even local newspapers, even your television network doesn’t enjoy in terms of their responsibility for content,” Gaetz said in an email.

Section 230 pertains to “interactive computer services,” defined as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” This means the privilege applies to big tech companies like Twitter, Facebook, and Google. It’s a special treat for social media platforms and no one else.

Representative Ken Buck (CO-04) is a member of the Judiciary Subcommittee on Antitrust, Commercial and Administrative Law along with Gaetz and recently published a report titled “The Third Way” in which he spells out his solutions to big tech bias. 

“Section 230 protects big tech in ways, at least it’s been interpreted to protect them in ways that allow them to discriminate,” Buck said. “Their bias shows in the algorithm they have created.”

Gaetz and Buck argue that these companies are not acting in good faith and thus should have these protections revoked.

“[Big tech companies] enjoy those protections because they hold themselves out to be unbiased and neutral platforms,” Gaetz said. “But if they aren’t willing to demonstrate they, in fact, are unbiased and neutral, I think we should repeal that section of law.”

With the protections gone, these companies could be held responsible for their blatantly biased behavior against right-wing users.

Repealing 230, however, comes with difficulties and Buck notes that “it’s not the best alternative.”

Holding big tech companies responsible for acting outside the bounds of what a fair and neutral platform would do is beneficial. But accidentally squashing those big tech companies’ competitors is also a possibility—and would make the problem worse.

Opponents of repealing Section 230 argue that the sole thing protecting alternatives to big tech companies—like Parler and Rumble (Twitter’s and YouTube’s biggest competitors)—is this very section itself. Without its protections, sites like Yelp could be sued over a user leaving a bad review, argues former Senator Rick Santorum in a National Review piece this month. And while a massive multibillion-dollar corporation could fight frivolous lawsuits, a small startup could not.

“You don’t want lawsuits against a company when they don’t control the content of what is being put on their site,” Buck said. “We need to create in Section 230 a protection that smaller startups can use moving forward.”

Like most issues in politics, the repeal of Section 230 is complicated. Twitter is no longer a neutral platform and clearly acts against conservatives, so it should not have the special privileges found in the section. But frivolous lawsuits could crush any free-market solution to these tech giants.

“We could make it clear what 230 covers and what it doesn’t cover,” Buck proposed. Any such reform, however, could be hard to enact if the very elected officials tasked with this are receiving donations from the violators. 

2. Prevent Members of Congress from Accepting Big Tech Campaign Donations 

Reform will have to come from senators and representatives through bills and acts. This is difficult in our current political climate because these elected officials are eagerly accepting donations from these very companies.

“Congress is not going to rein in Big Tech, because Congress is bought by Big Tech. That’s why I refuse to accept any PAC donation from any special interest group,” Gaetz said. “We need more members in Congress who are willing to stand up to Big Tech and stop taking their PAC donations.”

In the 2020 election alone, Google’s parent company Alphabet gave out $21 million, Microsoft $17 million, and Facebook $6 million. The vast majority of this money went to the Biden campaign through individuals and PACs. These tech giants have made Biden and other Democrats beholden to their anti-free-speech activism. For reference, here are the abysmal percentages of those massive donations that went to Republicans, in the order listed above: 7%, 14%, and 10%.
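To put those percentages in dollar terms, here is a quick back-of-the-envelope sketch using only the figures cited above (the totals and Republican shares are the article's numbers, not independently verified):

```python
# Donation totals and Republican shares as cited in the article (2020 cycle).
donations = {
    "Alphabet": (21_000_000, 0.07),
    "Microsoft": (17_000_000, 0.14),
    "Facebook": (6_000_000, 0.10),
}

for company, (total, gop_share) in donations.items():
    gop_dollars = total * gop_share          # dollars that went to Republicans
    other_dollars = total - gop_dollars      # remainder, overwhelmingly to Democrats per the article
    print(f"{company}: ${gop_dollars:,.0f} to Republicans, ${other_dollars:,.0f} elsewhere")
```

On these numbers, Republicans received well under $5 million combined of the roughly $44 million donated by the three companies.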

“Our democracy should be powered by the people of our country, not by a few Silicon Valley monopolies,” Gaetz argued. “There are just simply too many members of the House and Senate who are beholden to Big Tech either because of political donations or because their family members are getting employed by Big Tech.”

Campaign finance reform is heavily discussed but, for obvious reasons, barely acted upon. “Until that happens, Congress will not do anything about major tech platforms’ censorship,” Gaetz warned.

But even if the government does decide to take action, there will not be enough resources in the necessary agencies to effectively respond. 

3. Increase Funding to Antitrust Agencies

Action against big tech will take place in the federal government’s antitrust agencies, specifically the Federal Trade Commission and the U.S. Department of Justice Antitrust Division, the main enforcers of antitrust law in the United States.

In 1890, Congress passed the Sherman Act, which was the first antitrust law in the country. Serving as a “comprehensive charter of economic liberty aimed at preserving free and unfettered competition as the rule of trade,” the act laid the groundwork for fighting monopolies in the economy. Two more acts followed: the Federal Trade Commission Act and the Clayton Act. These three acts are the main antitrust laws in the nation. 

Big tech’s monopolistic status violates these laws, but the enforcers cannot act because their funding is too low.

The American government invests $510 million in its antitrust agencies, compared to a big tech sector worth $2 trillion—nearly 10 percent of U.S. gross domestic product.
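The scale mismatch in those two figures can be made concrete with one line of arithmetic (both numbers are the article's, taken at face value):

```python
# Figures cited above: combined antitrust enforcement budget vs. big tech sector size.
antitrust_budget = 510_000_000          # FTC + DOJ Antitrust Division, per the article
big_tech_sector = 2_000_000_000_000    # article's estimate of the sector's value

ratio = big_tech_sector / antitrust_budget
print(f"The sector is roughly {ratio:,.0f} times the size of its enforcers' budget")
```

That is, the companies being policed are several thousand times larger than the budget of the agencies policing them.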

Congressman Buck has been a strong advocate of increasing the funding to these agencies—he put out a publication calling for this as the main solution to big tech bias. 

“Congress has failed in its role. We have not given the tools to the FTC and Antitrust Division to do their job and we have not updated the law to cover big tech,” Buck said. “In order to level the playing field, the FTC and Antitrust Division need more resources.”

Beyond the clear need for more funding, the antitrust agencies must begin vigorously enforcing the laws being violated by Twitter, Facebook, YouTube, and others.

“The Obama Administration’s weak enforcement of antitrust laws allowed big tech companies to achieve near-monopoly status,” Gaetz said. “They have used their vast size to harm competition and consumers. Antitrust issues affect all Americans, and we should work across the aisle to address them.”

Gaetz had demanded Attorney General Barr do more to enforce the antitrust laws in the country before his departure from the DOJ.

“The Department of Justice is not doing enough today to enforce antitrust laws,” said Gaetz, in an interview last month. “Bill Barr needs to be doing more to enforce antitrust laws in litigation in actions against the companies that utilize their market power to redefine the nature of speech in this country.”

Buck said he would not put the burden on Barr, since Barr had been in office for only a short time, but agrees the DOJ as a whole has fallen short on enforcement.

“The Department of Justice over time should have been more involved,” Buck said. “The government has failed to adequately oversee this area for the last ten years.” 

Just as high-crime neighborhoods receive more funding for police officers, and those officers are given the power to enforce the law, the big tech threat should be met with adequately funded antitrust agencies that have the ability to actually enforce the anti-monopoly laws on the books since the late 1800s.

4. Focus on Appointments to the FEC and FCC Boards 

Another avenue to fight the bias is by ensuring selections made for the Federal Election Commission and Federal Communications Commission are people who understand this issue and are willing to fight it. 

“Big Tech bias is election interference. Their bias is also a violation of our communication standards and they’re not bound to certain liabilities under the Communications Decency Act,” Gaetz said. “There should be a significant focus on appointments to the FCC and to the FEC with people who understand this concept.” 

Since the bias is affecting sectors that these two boards oversee, action should be taken by them—especially since Congress is so ineffective. 

“The House of Representatives and the Senate are not going to take on Big Tech; we need a direct focus on appointments to these boards,” Gaetz said. “A second term of President Trump would be crucial in this fight because a lot of these boards have staggered terms, and I am confident in the Trump Administration taking bold executive action to vindicate our free speech rights.”

Appointing ethical, informed, and brave Americans to these boards would increase the chances action would be taken against the big tech monopolies as they interfere in elections and the communications of the country. 

Whichever solution is utilized, the fight against big tech bias must be on top of every elected official’s list—for the sake of free speech and the ability to ever win an election again. 

“To put America first and to put the American people first,” Gaetz said, “we need to ask not what our country can do for our tech companies but what our tech companies can do for our country.”


I have investigated Section 230 thoroughly. While Big Tech apparently has no legal obligation to act only as a platform, to my way of thinking it has a moral obligation to allow freedom of speech. These companies have enjoyed freedom from prosecution and litigation thanks to Section 230. Nobody forced them to utilize these protections; by doing so, they fall under the purview of the federal government and thereby have to follow the laws of the land. So we have two different arguments for freedom of speech: the first one I just mentioned, and a second one that these companies need to act as a platform and not a publisher. They can’t have it both ways, which is what they have been doing for the last few years. By selective censorship, they helped steal the presidential election from President Trump.

Resources: “Section 230, the internet free speech law Trump wants to repeal, explained,” by Sara Morrison; “Big Tech’s Threat To Freedom Of Expression,” by Julian Vigo; “Big tech censorship is growing—here are the 4 solutions you haven’t read about yet,” by Ben Wilson; “Section 230: Friend, not foe, of free speech,” by James D. Long and Victor Menaldo; “Social Media Platforms Or Publishers? Rethinking Section 230,” by Adam Candeub


Social Media Platforms Or Publishers? Rethinking Section 230

Chris Hughes, co-founder of Facebook, recently wrote a widely discussed critique of the company in The New York Times. He observes that Mark Zuckerberg once claimed that “Facebook was just a ‘social utility,’ a neutral platform for people to communicate what they wished.”

Hughes laments that Zuckerberg now considers Facebook to be both a platform and a publisher—claiming in court that it is “entitled to First Amendment protection” and that it is “inevitably making decisions about values.” Hughes sees this transformation from neutral platform to self-appointed arbiter of acceptable public discourse as a threat to free expression and political debate. He argues that Facebook should be broken up, or at the very least that “the government must hold Mark accountable.”

Hughes is right to point out that we underestimate social media’s power to dominate communications and control political speech. According to the Pew Research Center, 7 percent of American adults in 2005 used social networking sites. Today, more than two thirds receive news from social media, with almost half, according to Pew, getting news from social media “often” or “sometimes.”

What Hughes does not discuss, and what too often goes unremarked on in discussions of Big Tech’s power, is how a special government privilege, Section 230 of the Communications Decency Act, allowed for this growth. This provision protects internet platforms like Facebook and Google from liability for statements and content their users generate.

This legal protection—not accorded to newspapers or other fora—created the internet as we know it. Protection from liability for any false or injurious content their users post has permitted the social media giants to allow the incredibly freewheeling discussions and commentary that we have come to expect from the internet. Exemplary is Facebook’s response to the recently circulated doctored video of House Speaker Nancy Pelosi. In a statement to The Washington Post, Facebook said: “We don’t have a policy that stipulates that the information you post on Facebook must be true.” In addition, this legal protection has helped create the lucrative tech behemoths and further their dominance in attracting advertisers. This dominance has gutted the advertising revenue streams of local, regional, and even national media outlets—outlets that do not enjoy the privileges of Section 230.

But what is particularly bizarre, ironic, and deeply destructive to public discourse is that, though Congress passed Section 230 to promote a free and open internet, Facebook, Twitter, and Google now use it to advocate for an open internet while at the same time justifying their censorship regimes.

On one hand, Twitter, Google, and the other internet platforms often advocate for an open and free internet with no restrictive gatekeepers who would block or throttle disfavored content—i.e., the policy generically known as “network neutrality.” However, they advocate for an open and free internet only when faced with broadband providers like Verizon and Comcast that could block their services. In 2017, Zuckerberg wrote that broadband providers should not be allowed to “block you from seeing certain content.” Similarly, Twitter’s lobbyists argued that Verizon and Comcast should not be permitted to “block content they don’t like” and/or relegate “certain content to the backwaters of the Internet in second or third-tier status.”

On the other hand, Facebook, Twitter, and Google seem to embrace a principle of “an open internet for thee but not for me” when it comes to their own platforms. And much of the country has yet to comprehend the power they seek to wield through discriminatory network practices. Twitter CEO Jack Dorsey explained to podcaster Sam Harris that Twitter does not “optimize for neutrality” when moderating speech, despite the company’s professed support for “net neutrality.” He didn’t specify which values Twitter does optimize for, but Columbia University’s Richard Hanania found that over 95 percent of high-profile bans have targeted those on the Right. (In full disclosure, I have worked, pro bono, on lawsuits challenging Twitter’s censorship policies.)

Republican Senator Josh Hawley of Missouri has taken the lead in exposing Twitter’s bizarre, inconsistent, and biased censorship regime. Twitter’s deplatforming of Unplanned, a pro-life movie, prompted Hawley to question a Twitter representative at a Senate hearing. The senator’s trenchant questioning—and subsequent letter to Dorsey—exposed Twitter’s refusal to reveal, in a transparent way, how or why it censors.

Perhaps in an effort to place a fig leaf on all its unsavory censorship, Zuckerberg has floated a proposal to create a “Facebook Supreme Court.” His appointees would make decisions about acceptable content, obviating the need for judges appointed through a democratic process. In the 1990s, John Perry Barlow inspired a generation of programmers, innovators, and entrepreneurs with his “A Declaration of the Independence of Cyberspace” that foresaw the internet transcending government and ushering in a novum ordo of global freedom. Instead, we’ve gotten second-rate Silicon Valley satrapies.


Social media’s power stems in part from its unprecedented exemption from legal rules that govern other communications networks and virtually every other firm or person. For instance, telephone companies cannot kick customers off their platform on the basis of political views, nor can airlines. In contrast, the social media firms have the power—and have used the power—to kick off any user for any reason, in contravention of civil rights law. Similarly, cable companies retain legal liability for content the public creates on the public, educational, and government access channels, and newspapers retain legal liability for their paid advertising. But social media platforms face no liability for the libelous, or even criminal, statements their users publish.

How have we gotten to this legal anomaly where dominant internet platforms not only avoid antitrust scrutiny and run-of-the-mill legal duties, but also receive government giveaways? Section 230 of the Communications Decency Act is the culprit. Passed in 1996 as part of the Communications Decency Act, its primary purpose was, as its name suggests, to regulate pornography on the web. The Supreme Court struck down the anti-porn provisions on First Amendment grounds, but Section 230 was upheld and, until recently, remained obscure.

Congress passed Section 230 in 1996 to help the nascent internet platforms and encourage them to censor indecent and obscene content. They did so by altering traditional common law for publishers. In those days, the major internet firms were dial-up bulletin boards such as Prodigy, AOL, and CompuServe. Before passage of Section 230, they faced so-called “publisher” and “distributor” tort liability law, a body of legal rules developed over centuries to deal with newspapers and booksellers.

Courts had reasoned that these early platforms were not the publishers of statements made by third parties on their fora, but were instead distributors. Legal lingo aside, this meant that Prodigy and the other platforms typically could not be sued for libel or be held criminally liable based on comments posted by their users. Distributor liability requires direct knowledge of the illegal comments as well as a failure to act once made aware of them. Thus, a bookstore had no liability for libelous books on its shelves, but could become liable if it was credibly informed of the books’ content and nonetheless failed to act. On the other hand, if Prodigy were to communicate directly to its users or create content, it would be liable for that content.

This distinction between publishers and distributors rubbed Congress the wrong way, especially in the mid-1990s when legislators were trying to keep the internet wholesome and pornography-free. The legal result was this: when Prodigy promised to create a family-friendly environment, editing for indecent or profane comments, courts ruled that it had lost its distributor liability, and treated Prodigy as publisher of all content on its bulletin boards. Thus, Prodigy, because it took affirmative steps to police speech, became liable for all statements published in its curated bulletin boards. This was a powerful incentive not to curate the internet.

Congress considered this outcome undesirable. One of Section 230’s sponsors, Representative Chris Cox of California, thought this was “exactly the wrong result” because platforms exposed themselves to greater liability if they tried to keep their content clean. Thus, in order to keep things clean on the internet and to ensure the free flow of ideas, Congress passed Section 230.

Section 230(c)(1) ensures the free flow of ideas, exempting internet platforms from publisher and distributor liability for content posted by third parties. In this sense, Section 230(c)(1) provides common carrier protections. Just as telephone companies, broadband service providers, and even courier services like FedEx have no liability for the content of the messages and parcels they carry, neither does Facebook or Twitter.

Section 230(c)(2) was meant to keep things clean. It relieves carriers of liability for efforts to censor or curate content “in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Thus, when Prodigy edits comments for decency, it can do so without becoming a publisher of the entire website.

Congress bestowed in Section 230 extraordinary gifts of liability immunity. Section 230(c)(1) essentially gives internet platforms the immunities of common carriers, such as telephone companies, delivery services, and airlines. And Section 230(c)(2) is simply unprecedented and sui generis. The ease of distributing pornography and other obscene materials on the internet—which in the 1990s was shocking—no doubt motivated this gift.

But what makes Section 230 truly unprecedented is that Congress gave these legal privileges without any public obligation in return. For centuries, common law has recognized that certain types of industries that exhibit market power have obligations to the public. These obligations include non-discrimination, service to all customers, as well as sometimes more burdensome regulation such as rate regulation. In short, they had to give up some pricing and marketing strategies associated with their extraordinary market power. But in return, they received certain benefits, such as liability exemption and other legal privileges like special property easement or even rights of condemnation.

Section 230 asks for nothing because the early internet platforms like Prodigy and CompuServe had nothing to give. They had no market power upon which to exercise control. Congress bestowed a gift on the nascent industry in Section 230(c)(1) and an incentive for creating family-friendly environments in Section 230(c)(2).

But times have changed. Facebook, Twitter, and Google now bestride the narrow world like a Colossus. They exercise market power like a telephone company or electrical utility, extracting huge rents from the economy. They receive liability relief for the messages they carry just like a telephone or electrical utility, but with none of the duties of nondiscrimination or service. The result is obvious. Insulated from market forces by their near monopoly power and facing diminished financial incentives given their Croesus-like wealth, the leaders of these private companies can indulge their personal preferences, imposing them on the country’s political discourse.

In short, the Section 230 “deal” should be up for renegotiation. Fortunately, Senators Hawley and Ted Cruz of Texas are leading the charge. On Wednesday, Hawley introduced legislation that would remove Section 230 protections for the social media giants unless they submit to an external audit that proves their censorship protocols are politically neutral. 

Both Hawley and Cruz recognize the threats that the relatively new social media behemoths pose and are seeking ways to rein them in. Unfortunately, on the other side of the aisle, Democratic leaders want to make a different deal for Section 230. As Nancy Pelosi recently made clear, she sees section 230 as a “gift.” It takes little imagination to assume that if the social media giants want to retain Pelosi’s special gift, they must continue to beat down on conservatives.

The Section 230 debate is becoming the kind that the Founding Fathers intended the First Amendment to prevent. “Congress shall make no law…abridging the freedom of speech, or of the press” means not simply that the federal government should not regulate speech but that political gamesmanship should not allow a politically slanted marketplace of ideas.

The solution is easy. Just like telephone companies and FedEx, dominant internet platforms should not have the ability to block, discriminate against, or deprioritize individuals on the basis of their political views. This status could be applied without special regulation—simply by tying their Section 230 immunity to viewpoint neutrality.

Greg Walden, the ranking Republican on the House Committee on Energy and Commerce, alluded to this when discussing net neutrality legislation by asking why tech monopolies that have “blocked, prioritized, or shadowbanned” content on their platforms receive “special protection under Section 230…as if they were a common carrier” without being “covered by the net neutrality rules.”

Indeed, advocates of applying common carrier rules to broadband service providers have explicitly tied Section 230 to this non-discrimination principle. The FCC cited Section 230’s findings that online platforms “offer a forum for a true diversity of political discourse,” when it enacted its “Protecting and Promoting the Open Internet” order in 2015, as did the state of Vermont when it enacted its own neutrality legislation.

This does not mean platforms should be unable to moderate their content. As Section 230 intended, they can remove content that is violent, harassing, or obscene—even if it’s not illegal. But this must be done based on fair and neutral criteria. They should share their impressive technologies for tracking and monitoring online content with their users, empowering them to create online environments of their own choosing—not arrogating that power to a central corporate authority.

There is nothing new here. The struggle for a free, democratic internet mirrors the struggle of the settlers, frontiersman, ranchers, and farmers against the railroads during the 19th century. Back then, a large network industry that was centered in the cities and coasts and that had significant market power pitted itself against a large swath of the American public, mostly in the nation’s interior. This led to common carriage regulation of railroads, grain elevators, and other industries affected with the public interest.

This regulation did not burden the nation with the unbearable weight of dirigisme. Instead, the 19th century witnessed an unparalleled burst of economic and civic energy. Similarly, guaranteeing free speech and non-discrimination on dominant internet platforms will not crush online innovation. It is more likely that such reasonable controls will protect free speech and allow our political culture to flourish.

Postings for Big Tech, Social Media and Corporations
