How Did The Communications Decency Act Affect Social Media?


I have written several articles related to Big Tech, social media, and corporations; a list of links is provided at the bottom of this article for your convenience. This article, however, will address different aspects of these industries.

Communications Decency Act of 1996

The Communications Decency Act of 1996 (CDA) was the first notable attempt by the United States Congress to regulate pornographic material on the Internet. In 1997, in the landmark case of Reno v. ACLU, the United States Supreme Court struck down the act’s anti-indecency provisions.

The Act is the short name of Title V of the Telecommunications Act of 1996, as specified in Section 501 of the 1996 Act. It was introduced to the Senate Committee on Commerce, Science, and Transportation by Senators James Exon (D-NE) and Slade Gorton (R-WA) in 1995. The amendment that became the CDA was added to the Telecommunications Act in the Senate by an 81–18 vote on June 15, 1995.

As eventually passed by Congress, Title V affected the Internet (and online communications) in two significant ways. First, it attempted to regulate both indecency (when available to children) and obscenity in cyberspace. Second, Section 230 of the Communications Act of 1934 (Section 9 of the Communications Decency Act / Section 509 of the Telecommunications Act of 1996) has been interpreted to say that operators of Internet services are not to be construed as publishers (and thus not legally liable for the words of third parties who use their services).

Anti-indecency and anti-obscenity provisions

The most controversial portions of the act were those relating to indecency on the Internet. The relevant sections of the act were introduced in response to fears that Internet pornography was on the rise. Indecency in TV and radio broadcasting had already been regulated by the Federal Communications Commission—broadcasting of offensive speech was restricted to certain hours of the day when minors were supposedly least likely to be exposed. Violators could be fined and potentially lose their licenses. The Internet, however, had only recently been opened to commercial interests by the 1992 amendment to the National Science Foundation Act and thus had not been taken into consideration by previous laws. The CDA, which affected both the Internet and cable television, marked the first attempt to expand regulation to these new media.

Passed by Congress on February 1, 1996, and signed by President Bill Clinton on February 8, 1996, the CDA imposed criminal sanctions on anyone who

knowingly (A) uses an interactive computer service to send to a specific person or persons under 18 years of age, or (B) uses any interactive computer service to display in a manner available to a person under 18 years of age, any comment, request, suggestion, proposal, image, or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs.

It further criminalized the transmission of materials that were “obscene or indecent” to persons known to be under 18.

Free speech advocates, however, worked diligently and successfully to overturn the portion relating to indecent, but not obscene, speech. They argued that speech protected under the First Amendment, such as printed novels or the use of the seven dirty words, would suddenly become unlawful when posted to the Internet. Critics also claimed the bill would have a chilling effect on the availability of medical information. Online civil liberties organizations arranged protests against the bill, such as the Black World Wide Web protest, which encouraged webmasters to make their sites’ backgrounds black for 48 hours after its passage, and the Electronic Frontier Foundation’s Blue Ribbon Online Free Speech Campaign.

Section 230

Section 230 of the Communications Act of 1934 (added by Section 9 of the Communications Decency Act / Section 509 of the Telecommunications Act of 1996) was not part of the original Senate legislation, but was added in conference with the House, where it had been separately introduced by Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR) as the Internet Freedom and Family Empowerment Act and passed by a near-unanimous vote on the floor. It added protection for online service providers and users from actions against them based on the content of third parties, stating in part that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Effectively, this section immunizes both ISPs and Internet users from liability for torts committed by others using their website or online forum, even if the provider fails to take action after receiving actual notice of the harmful or offensive content.

Through the so-called Good Samaritan provision, this section also protects ISPs from liability for restricting access to certain material or giving others the technical means to restrict access to that material.

On July 23, 2013, the attorneys general of 47 states sent a letter to Congress requesting that the criminal and civil immunity in section 230 be removed. The ACLU wrote of the proposal, “If Section 230 is stripped of its protections, it wouldn’t take long for the vibrant culture of free speech to disappear from the web.”

FOSTA-SESTA

The Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) is a bill introduced in the U.S. House of Representatives by Ann Wagner in April 2017. The Stop Enabling Sex Traffickers Act (SESTA) is a similar U.S. Senate bill introduced by Rob Portman in August 2017. The combined FOSTA-SESTA package passed the House on February 27, 2018, with a vote of 388–25 and the Senate on March 21, 2018, with a vote of 97–2. The bill was signed into law by President Donald Trump on April 11, 2018.

The bill clarifies the country’s sex trafficking law to make it illegal to knowingly assist, facilitate, or support sex trafficking, and amends the Section 230 safe harbors of the Communications Decency Act (which make online services immune from civil liability for the actions of their users) to exclude enforcement of federal or state sex trafficking laws from that immunity. The intent is to impose serious legal consequences on websites that profit from sex trafficking, give prosecutors the tools they need to protect their communities, and give victims a pathway to justice.

The bills were criticized by pro-free speech and pro-Internet groups as a “disguised internet censorship bill” that weakens the Section 230 safe harbors, places unnecessary burdens on Internet companies and intermediaries that handle user-generated content or communications (since service providers would be required to proactively take action against sex trafficking activities), and requires a “team of lawyers” to evaluate all possible scenarios under state and federal law, which may be financially unfeasible for smaller companies. Online sex workers argued that the bill would harm their safety, as the platforms they use for offering and discussing sexual services (as an alternative to street prostitution) had begun to reduce their services or shut down entirely due to the threat of liability under the bill. Since the bill’s passage, sex workers have reported economic instability and increases in violence, as had been predicted.

Section 230 Examined

Section 230 is a piece of Internet legislation in the United States, passed into law as part of the Communications Decency Act (CDA) of 1996 (a common name for Title V of the Telecommunications Act of 1996), formally codified as Section 230 of the Communications Act of 1934. Section 230 generally provides immunity for website publishers from third-party content.

At its core, Section 230(c)(1) provides immunity from liability for providers and users of an “interactive computer service” who publish information provided by third-party users:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The statute in Section 230(c)(2) further provides “Good Samaritan” protection from civil liability for operators of interactive computer services in the removal or moderation of third-party material they deem obscene or offensive, even of constitutionally protected speech, as long as it is done in good faith.

Section 230 was developed in response to a pair of lawsuits against Internet service providers (ISPs) in the early 1990s that reached different conclusions about whether service providers should be treated as publishers or as distributors of content created by their users. The tech industry and other experts also argued that the language of the proposed CDA, which made providers responsible for indecent content posted by users, could extend to other types of questionable free speech. After passage of the Telecommunications Act, the CDA was challenged in the courts and ruled by the Supreme Court in Reno v. American Civil Liberties Union (1997) to be partially unconstitutional, leaving the Section 230 provisions in place. Since then, several legal challenges have upheld the constitutionality of Section 230.

Section 230 protections are not limitless: providers must still remove material that is illegal at the federal level, such as content infringing copyright. In 2018, Section 230 was amended by the Stop Enabling Sex Traffickers Act (FOSTA-SESTA) to require the removal of material violating federal and state sex trafficking laws. In the following years, Section 230’s protections came under increased scrutiny on issues related to hate speech and ideological bias, in relation to the power technology companies hold over political discussions, and became a major issue during the 2020 United States presidential election.

Passed at a time when Internet use was just starting to expand in both breadth of services and range of consumers in the United States, Section 230 has frequently been referred to as a key law that allowed the Internet to flourish, and has been called “the twenty-six words that created the Internet”.

Application and limits

Section 230, as passed, has two primary parts, both listed under subsection (c) as the “Good Samaritan” portion of the law. Section 230(c)(1), as identified above, provides that an information service provider shall not be treated as a “publisher or speaker” of information from another provider. Section 230(c)(2) provides immunity from civil liability for information service providers that remove or restrict content from their services they deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”, as long as they act “in good faith” in doing so.

In analyzing the availability of the immunity offered by Section 230, courts generally apply a three-prong test, sketched schematically after the list below. A defendant must satisfy each of the three prongs to gain the benefit of the immunity:[4]

  1. The defendant must be a “provider or user” of an “interactive computer service.”
  2. The cause of action asserted by the plaintiff must treat the defendant as the “publisher or speaker” of the harmful information at issue.
  3. The information must be “provided by another information content provider,” i.e., the defendant must not be the “information content provider” of the harmful information at issue.
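
To make the conjunctive structure of the test concrete, here is a minimal, purely illustrative sketch in Python. The class, field, and function names are invented for this example and are not legal tooling; the point is simply that failing any one prong defeats the immunity.

```python
# Illustrative only: a schematic encoding of the three-prong Section 230(c)(1)
# test as courts generally apply it. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Claim:
    provider_or_user_of_ics: bool      # prong 1: "interactive computer service"
    treats_defendant_as_publisher: bool  # prong 2: claim targets publishing role
    content_from_third_party: bool     # prong 3: defendant did not create content

def immunity_applies(claim: Claim) -> bool:
    """All three prongs must be satisfied; failing any one defeats immunity."""
    return (claim.provider_or_user_of_ics
            and claim.treats_defendant_as_publisher
            and claim.content_from_third_party)

# A forum operator sued over a user's libelous post satisfies all three prongs.
print(immunity_applies(Claim(True, True, True)))   # True
# Roommates.com-style facts: the site helped create the content, so prong 3 fails.
print(immunity_applies(Claim(True, True, False)))  # False
```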

Section 230 immunity is not unlimited. The statute specifically excepts federal criminal liability, electronic privacy violations, and intellectual property claims. There is also no immunity from state laws that are consistent with Section 230, though state criminal laws have been held preempted in cases such as Backpage.com, LLC v. McKenna and Voicenet Commc’ns, Inc. v. Corbett (agreeing that “The plain language of the CDA provides … immunity from inconsistent state criminal laws.”).

As of mid-2016, courts had issued conflicting decisions regarding the scope of the intellectual property exclusion set forth in Section 230. For example, in Perfect 10, Inc. v. CCBill, LLC, the 9th Circuit Court of Appeals ruled that the exception for intellectual property law applies only to federal intellectual property claims such as copyright infringement, trademark infringement, and patents, reversing a district court ruling that the exception applies to state-law right of publicity claims. The 9th Circuit’s decision in Perfect 10 conflicts with conclusions from other courts, including Doe v. Friendfinder. The Friendfinder court specifically discussed and rejected the CCBill court’s reading of “intellectual property law” and held that the immunity does not reach state right of publicity claims.

Additionally, with the passage of the Digital Millennium Copyright Act in 1998, service providers must comply with additional requirements for copyright infringement to maintain safe harbor protections from liability, as defined in the DMCA’s Title II, Online Copyright Infringement Liability Limitation Act.

Background and passage

Prior to the Internet, case law was clear that a liability line was drawn between publishers of content and distributors of content: publishers would be expected to have awareness of the material they published and thus could be held liable for any illegal content, while distributors would likely not be aware and thus would be immune. This was established in Smith v. California (1959), where the Supreme Court ruled that putting liability on the provider (a book store in this case) would have “a collateral effect of inhibiting the freedom of expression, by making the individual the more reluctant to exercise it.”

In the early 1990s, the Internet became more widely adopted and created means for users to engage in forums and other user-generated content. While this helped to expand the use of the Internet, it also resulted in a number of legal cases putting service providers at fault for the content generated by their users. This concern was raised by legal challenges against CompuServe and Prodigy, early service providers of the time. CompuServe stated it would not attempt to regulate what users posted on its services, while Prodigy employed a team of moderators to validate content. Both faced legal challenges related to content posted by their users. In Cubby, Inc. v. CompuServe Inc., CompuServe was found not to be at fault: by allowing all content to go unmoderated, it was a distributor and thus not liable for libelous content posted by users. However, Stratton Oakmont, Inc. v. Prodigy Services Co. found that, as Prodigy had taken an editorial role with regard to customer content, it was a publisher and legally responsible for libel committed by customers.

Service providers made their Congresspersons aware of these cases, believing that if the rulings were upheld across the nation, they would stifle the growth of the Internet. United States Representative Christopher Cox (R-CA) had read an article about the two cases and felt the decisions were backwards. “It struck me that if that rule was going to take hold then the internet would become the Wild West and nobody would have any incentive to keep the internet civil,” Cox stated.

At the time, Congress was preparing the Communications Decency Act (CDA), part of the omnibus Telecommunications Act of 1996, which was designed to make knowingly sending indecent or obscene material to minors a criminal offense. A version of the CDA had passed through the Senate, pushed by Senator J. James Exon. A grassroots effort in the tech industry reacted, trying to convince the House of Representatives to challenge Exon’s bill. Based on the Stratton Oakmont decision, Congress recognized that requiring service providers to block indecent content would cause them to be treated as publishers in the context of the First Amendment, and thus become liable for other illegal content, such as libel, not set out in the existing CDA. Cox and fellow Representative Ron Wyden (D-OR) wrote the House bill’s Section 509, titled the Internet Freedom and Family Empowerment Act, designed to override the decision in Stratton Oakmont so that service providers could moderate content as necessary and did not have to act as wholly neutral conduits. The section was added while the CDA was in conference within the House.

The overall Telecommunications Act, with both Exon’s CDA and Cox and Wyden’s provision, passed both Houses by near-unanimous votes and was signed into law by President Bill Clinton in February 1996. Cox and Wyden’s section became Section 509 of the Telecommunications Act of 1996 and became law as a new Section 230 of the Communications Act of 1934. The anti-indecency portion of the CDA was immediately challenged on passage, resulting in the 1997 Supreme Court case Reno v. American Civil Liberties Union, which ruled all of the anti-indecency sections of the CDA unconstitutional but left Section 230 as law.

Impact

The passage and subsequent legal history supporting the constitutionality of Section 230 have been considered essential to the growth of the Internet through the early part of the 21st century. Coupled with the Digital Millennium Copyright Act (DMCA) of 1998, Section 230 provides Internet service providers safe harbors to operate as intermediaries of content without fear of being liable for that content, as long as they take reasonable steps to delete or prevent access to it. These protections allowed experimental and novel applications on the Internet without fear of legal ramifications, creating the foundations of modern Internet services such as advanced search engines, social media, video streaming, and cloud computing. NERA Economic Consulting estimated that Section 230 and the DMCA, combined, contributed about 425,000 jobs to the U.S. in 2017 and represented a total revenue of US$44 billion annually.

Early challenges – Zeran v. AOL (1997–2008)

The first major challenge to Section 230 itself was Zeran v. AOL, a 1997 case decided at the Fourth Circuit. The case involved a person who sued America Online (AOL) for failing to remove, in a timely manner, libelous ads posted by AOL users that inappropriately connected his home phone number to the Oklahoma City bombing. The court found for AOL and upheld the constitutionality of Section 230, stating that Section 230 “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” The court asserted in its ruling that Congress’s rationale for Section 230 was to give Internet service providers broad immunity “to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.” In addition, Zeran notes “the amount of information communicated via interactive computer services is . . . staggering. The specter of tort liability in an area of such prolific speech would have an obviously chilling effect. It would be impossible for service providers to screen each of their millions of postings for possible problems. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress considered the weight of the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect.” This ruling, cementing Section 230’s liability protections, has been considered one of the most important pieces of case law affecting the growth of the Internet, allowing websites to incorporate user-generated content without fear of prosecution. At the same time, however, it has led to Section 230 being used as a shield by some website owners, as courts have ruled that Section 230 provides complete immunity for ISPs with regard to torts committed by their users over their systems. Through the next decade, most cases involving Section 230 challenges generally fell in favor of service providers, upholding their immunity from liability for third-party content on their sites.

Erosion of Section 230 immunity – Roommates.com (2008–2016)

While Section 230 seemed to give near-complete immunity to service providers in its first decade, new case law around 2008 started to find cases where providers could be held liable for user content because they qualified as an “information content provider” with respect to that content under Section 230. One of the first such cases to make this challenge was Fair Housing Council of San Fernando Valley v. Roommates.com. The case centered on the services of Roommates.com, which helped to match renters based on profiles they created on its website; each profile was generated by a mandatory questionnaire that included information about the user’s gender and race and preferred roommates’ race. The Fair Housing Council of San Fernando Valley stated this created discrimination and violated the Fair Housing Act, and asserted that Roommates.com was liable for it. In 2008, the Ninth Circuit in an en banc decision ruled against Roommates.com, agreeing that its required profile system made it an information content provider and thus ineligible to receive the protections of Section 230.

The decision in Roommates.com was considered the most significant deviation from Zeran in how Section 230 was handled in case law. Eric Goldman of the Santa Clara University School of Law wrote that while the Ninth Circuit’s decision in Roommates.com was tailored to apply to a limited number of websites, he was “fairly confident that lots of duck-biting plaintiffs will try to capitalize on this opinion and they will find some judges who ignore the philosophical statements and instead turn a decision on the opinion’s myriad of ambiguities”. Over the next several years, a number of cases cited the Ninth Circuit’s decision in Roommates.com to limit some of the Section 230 immunity for websites. Law professor Jeff Kosseff of the United States Naval Academy reviewed 27 cases in the 2015–2016 year involving Section 230 immunity concerns and found that more than half of them had denied the service provider immunity, in contrast to a similar study he had performed from 2001 to 2002 in which a majority of cases granted the website immunity; Kosseff asserted that the Roommates.com decision was the key factor that led to this change.

Sex trafficking – Backpage.com and FOSTA-SESTA (2012–2017)

Around 2001, a University of Pennsylvania paper warned that “online sexual victimization of American children appears to have reached epidemic proportions” due to the allowances granted by Section 230. Over the next decade, advocates against such exploitation, such as the National Center for Missing and Exploited Children and Cook County Sheriff Tom Dart, pressured major websites to block or remove content related to sex trafficking, leading sites like Facebook, MySpace, and Craigslist to pull such content. Because mainstream sites were blocking this content, those who engaged in or profited from trafficking moved to more obscure sites, leading to the creation of sites like Backpage. Beyond removing this activity from the public eye, these new sites worked to obscure what trafficking was going on and who was behind it, limiting the ability of law enforcement to take action. Backpage and similar sites quickly came under numerous lawsuits from victims of the sex traffickers and exploiters for enabling the crime, but courts continually found in favor of Backpage due to Section 230. Attempts to block Backpage from using credit card services, so as to deny it revenue, were also defeated in the courts in January 2017, as Section 230 allowed the site’s actions to stand.

Due to numerous complaints from constituents, Congress began an investigation into Backpage and similar sites in January 2017, finding Backpage complicit in aiding and profiting from illegal sex trafficking. Subsequently, Congress introduced the FOSTA-SESTA bills: the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), introduced in the House of Representatives by Ann Wagner in April 2017, and the Stop Enabling Sex Traffickers Act (SESTA), introduced in the U.S. Senate by Rob Portman in August 2017. Combined, the FOSTA-SESTA bills modified Section 230 to remove its immunity, in both civil and criminal cases related to sex trafficking, from services that knowingly facilitate or support sex trafficking. The bill passed both Houses and was signed into law by President Donald Trump on April 11, 2018.

The bills were criticized by pro-free speech and pro-Internet groups as a “disguised internet censorship bill” that weakens Section 230 immunity, places unnecessary burdens on Internet companies and intermediaries that handle user-generated content or communications (since service providers would be required to proactively take action against sex trafficking activities), and requires a “team of lawyers” to evaluate all possible scenarios under state and federal law, which may be financially unfeasible for smaller companies. Critics also argued that FOSTA-SESTA did not distinguish consensual, legal sex offerings from non-consensual ones, and that websites otherwise engaged in legal offerings of sex work would be threatened with liability. Online sex workers argued that the bill would harm their safety, as the platforms they use for offering and discussing sexual services in a legal manner (as an alternative to street prostitution) had begun to reduce their services or shut down entirely due to the threat of liability under the bill.

Debate on Section 230’s protections for social media companies (2016–present)

Many social media sites, notably the Big Tech companies of Facebook, Google, and Apple, as well as Twitter, have come under scrutiny as a result of the alleged Russian interference in the 2016 United States elections, in which Russian agents allegedly used the sites to spread propaganda and fake news to swing the election in favor of Donald Trump. These platforms were also criticized for not taking action against users who used the social media outlets for harassment and hate speech against others. Shortly after the passage of the FOSTA-SESTA acts, some in Congress recognized that additional changes could be made to Section 230 to require service providers to deal with these bad actors, beyond what Section 230 already provided.

Platform neutrality

Some politicians, including Republican senators Ted Cruz and Josh Hawley, have accused major social networks of displaying a bias against conservative perspectives when moderating content (such as Twitter suspensions). In a Fox News op-ed, Cruz argued that Section 230 should only apply to providers that are politically “neutral”, suggesting that a provider “should be considered to be a liable ‘publisher or speaker’ of user content if they pick and choose what gets published or spoken.” Section 230 does not, in fact, contain any requirement that moderation decisions be neutral. Hawley alleged that Section 230 immunity was a “sweetheart deal between big tech and big government”.

In December 2018, Republican representative Louie Gohmert introduced the Biased Algorithm Deterrence Act, which would remove all section 230 protections for any provider that used filters or any other type of algorithms to display user content when otherwise not directed by a user.

In June 2019, Hawley introduced the Ending Support for Internet Censorship Act, which would remove Section 230 protections from companies whose services have more than 30 million active monthly users in the U.S. and more than 300 million worldwide, or more than $500 million in annual global revenue, unless they receive a certification from a majority of the Federal Trade Commission that they do not moderate against any political viewpoint and have not done so in the past two years.
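
Purely to illustrate how mechanical the bill’s coverage test is, here is a minimal sketch in Python; the function name and parameters are invented for this example, and the thresholds are the ones stated above.

```python
# Illustrative only: the size and revenue thresholds described in the Ending
# Support for Internet Censorship Act, encoded as a simple predicate.
def covered_by_hawley_bill(us_monthly_users: int,
                           worldwide_monthly_users: int,
                           annual_global_revenue_usd: float) -> bool:
    """A company would lose default Section 230 protection (absent FTC
    certification) if it crosses the user thresholds or the revenue one."""
    crosses_user_threshold = (us_monthly_users > 30_000_000
                              and worldwide_monthly_users > 300_000_000)
    crosses_revenue_threshold = annual_global_revenue_usd > 500_000_000
    return crosses_user_threshold or crosses_revenue_threshold

# A major platform crosses both tests; a small forum crosses neither.
print(covered_by_hawley_bill(200_000_000, 2_000_000_000, 70e9))  # True
print(covered_by_hawley_bill(50_000, 120_000, 1e6))              # False
```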

There has been criticism—and support—of the proposed bill from various points on the political spectrum. A poll of more than 1,000 voters gave Senator Hawley’s bill a net favorability rating of 29 points among Republicans (53% favor, 24% oppose) and 26 points among Democrats (46% favor, 20% oppose). Some Republicans feared that by adding FTC oversight, the bill would fuel fears of a big government with excessive oversight powers. Democratic Speaker Nancy Pelosi has indicated support for the approach Hawley has taken, as has the chairman of the Senate Judiciary Committee, Senator Lindsey Graham, who said he is considering legislation that would require companies to uphold “best business practices” to maintain their liability shield, subject to periodic review by federal regulators.

Legal experts have criticized the Republicans’ push to make Section 230 encompass platform neutrality. Wyden stated in response to potential law changes that “Section 230 is not about neutrality. Period. Full stop. 230 is all about letting private companies make their own decisions to leave up some content and take other content down.” Kosseff has stated that the Republican intentions are based on a “fundamental misunderstanding” of Section 230’s purpose, as platform neutrality was not one of the considerations made at the time of passage; according to the framers, the intent was to make sure providers had the ability to make content-removal judgments without fear of liability. There have been concerns that any attempt to weaken Section 230 could actually cause an increase in censorship when services lose their exemption from liability.

Attempts to win damages from tech companies in court over apparent anti-conservative bias, arguing against Section 230 protections, have generally failed. A lawsuit brought by the non-profit Freedom Watch in 2018 against Google, Facebook, Twitter, and Apple, alleging antitrust violations for using their positions to create anti-conservative censorship, was dismissed by the D.C. Circuit Court of Appeals in May 2020, with the judges ruling that censorship claims can only apply to First Amendment rights blocked by the government, not by private entities.

Hate speech

In the wake of the 2019 shootings in Christchurch, New Zealand; El Paso, Texas; and Dayton, Ohio, questions about Section 230 and liability for online hate speech were raised. In both the Christchurch and El Paso shootings, the perpetrator posted a hate-speech manifesto to 8chan, an imageboard known to be favorable to the posting of extreme views. Concerned politicians and citizens called on large tech companies to remove hate speech from the Internet; however, hate speech is generally protected speech under the First Amendment, and Section 230 removes tech companies’ liability, leaving them free to decide whether to moderate such content as long as it is not illegal. This has given the appearance that tech companies do not need to be proactive against hateful content, allowing it to proliferate online and lead to such incidents.

Notable articles on these concerns were published after the El Paso shooting by The New York Times,[62] The Wall Street Journal, and Bloomberg Businessweek, among other outlets, but were criticized by legal experts including Mike Godwin, Mark Lemley, and David Kaye, as the articles implied that hate speech was protected by Section 230, when it is in fact protected by the First Amendment. In the case of The New York Times, the paper issued a correction to affirm that it is the First Amendment, not Section 230, that protects hate speech.

Members of Congress have indicated they may pass a law that changes how Section 230 applies to hate speech, so as to make tech companies liable for it. Wyden, now a Senator, stated that he intended Section 230 to be both “a sword and a shield” for Internet companies: the “sword” allowing them to remove content they deem inappropriate for their service, and the shield helping to keep offensive content off their sites without liability. However, Wyden argued that because tech companies have not been willing to use the sword to remove content, it may be necessary to take away the shield. Some have compared Section 230 to the Protection of Lawful Commerce in Arms Act, a law that grants gun manufacturers immunity from certain types of lawsuits when their weapons are used in criminal acts. According to law professor Mary Anne Franks, “They have not only let a lot of bad stuff happen on their platforms, but they’ve actually decided to profit off of people’s bad behavior.”

Representative Beto O’Rourke stated during his 2020 presidential campaign that he intended to introduce sweeping changes to Section 230 to make Internet companies liable for not being proactive in taking down hate speech; O’Rourke later dropped out of the race. Fellow candidate and former vice president Joe Biden has similarly called for Section 230 protections to be weakened or otherwise “revoked” for “big tech” companies—particularly Facebook—stating in a January 2020 interview with The New York Times that “[Facebook] is not merely an internet company. It is propagating falsehoods they know to be false”, and that the U.S. needed to “[set] standards” in the same way that the European Union’s General Data Protection Regulation (GDPR) set standards for online privacy.

Terrorism-related content

In the aftermath of the Backpage trial and the subsequent passage of FOSTA-SESTA, others have found that Section 230 appears to protect tech companies hosting content that is otherwise illegal under United States law. Professor Danielle Citron and journalist Benjamin Wittes found that, as late as 2018, several groups deemed terrorist organizations by the United States had been able to maintain social media accounts on services run by American companies, despite federal laws that make providing material support to terrorist groups subject to civil and criminal charges.[70] However, case law from the Second Circuit has held that under Section 230, technology companies are generally not liable for civil claims based on terrorism-related content.

2020 Department of Justice review

In February 2020, the United States Department of Justice held a workshop related to Section 230 as part of an ongoing antitrust probe into “big tech” companies. Attorney General William Barr said that while Section 230 was needed to protect the Internet’s growth at a time when most Internet companies were not yet stable, “No longer are technology companies the underdog upstarts…They have become titans of U.S. industry”, and questioned the need for Section 230’s broad protections. Barr said that the workshop was not meant to make policy decisions on Section 230 but was part of a “holistic review” related to Big Tech, since “not all of the concerns raised about online platforms squarely fall within antitrust”, and that the Department of Justice would rather see reform and better incentives to improve online content within the scope of Section 230 than change the law directly.[72] Observers of the sessions stated that the talks covered only Big Tech and small sites that engaged in areas of revenge porn, harassment, and child sexual abuse, but did not consider much of the intermediate use of the Internet.

The DOJ issued its four major recommendations to Congress in June 2020 to modify Section 230. These include:

  1. Incentivizing platforms to deal with illicit content, including calling out “Bad Samaritans” that solicit illicit activity and removing their immunity, and carving out exemptions in the areas of child abuse, terrorism, and cyber-stalking, as well as when platforms have been notified by courts of illicit material;
  2. Removing protections from civil lawsuits brought by the federal government;
  3. Disallowing Section 230 protections in relation to antitrust actions against the large Internet platforms; and
  4. Promoting discourse and transparency by defining existing terms in the statute like “otherwise objectionable” and “good faith” with specific language, and requiring platforms to publicly document when they take moderation actions against content unless that may interfere with law enforcement or risk harm to an individual.

Legislation to alter Section 230

In 2020, several bills were introduced in Congress to limit the liability protections that Internet platforms had from Section 230, as a result of events in the preceding years.

EARN IT Act of 2020

In March 2020, a bi-partisan bill known as the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act was introduced in the Senate, which called for the creation of a 15-member government commission (including administration officials and industry experts) to establish “best practices” for the detection and reporting of child exploitation materials. Internet services would be required to follow these practices; the commission would have the power to penalize those not in compliance, which could include removing their Section 230 protections. While the bill had bi-partisan support from its sponsors (Lindsey Graham, Josh Hawley, Dianne Feinstein, and Richard Blumenthal) and backing from groups like the National Center for Missing and Exploited Children and the National Center on Sexual Exploitation, the EARN IT Act was criticized by a coalition of 25 organizations, as well as by human rights groups including the Electronic Frontier Foundation, the American Civil Liberties Union, and Human Rights Watch. Opponents of the bill recognized that some of the “best practices” would most likely include a backdoor for law enforcement into any encryption used on a site, in addition to the dismantling of Section 230’s approach, based on commentary made by members of the federal agencies that would be placed on the commission. For example, Attorney General Barr has extensively argued that the use of end-to-end encryption by online services can obstruct investigations by law enforcement, especially those involving child exploitation, and has pushed for a governmental backdoor into encryption services. The Senators behind EARN IT have stated that there is no intent to bring any such encryption backdoors with this legislation. Wyden was also critical of the bill, calling it “a transparent and deeply cynical effort by a few well-connected corporations and the Trump administration to use child sexual abuse to their political advantage, the impact to free speech and the security and privacy of every single American be damned.” Graham stated that the goal of the bill was “to do this in a balanced way that doesn’t overly inhibit innovation, but forcibly deals with child exploitation.” As an implicit response to EARN IT, Wyden, along with House Representative Anna G. Eshoo, proposed a new bill, the Invest in Child Safety Act, in May 2020, which would give US$5 billion to the Department of Justice for additional manpower and tools to address child exploitation directly, rather than relying on technology companies to rein in the problem. The EARN IT Act advanced out of the Senate Judiciary Committee by a unanimous 22–0 vote on July 2, 2020, following an amendment by Lindsey Graham that removed the legal authority of the proposed federal commission, instead giving similar authority to each individual state government. The bill was introduced into the House on October 2, 2020.

Limiting Section 230 Immunity to Good Samaritans Act

In June 2020, Hawley and three Republican Senators, Marco Rubio, Kelly Loeffler, and Kevin Cramer, called on the FCC to review the protections that the Big Tech companies had from Section 230, stating in their letter that “It is time to take a fresh look at Section 230 and to interpret the vague standard of ‘good faith’ with specific guidelines and direction” due to “a lack of clear rules” and the “judicial expansion” around the statute. Hawley introduced the “Limiting Section 230 Immunity to Good Samaritans Act” in the Senate on June 17, 2020, with co-sponsors Rubio, Mike Braun, and Tom Cotton. The bill would make providers with over 30 million monthly U.S. users and over US$1.5 billion in global revenues liable to lawsuits from users who believed the provider was not uniformly enforcing its content policies; users would be able to seek damages up to US$5,000 plus lawyers’ fees under the bill.

Platform Accountability and Consumer Transparency (PACT) Act

A bi-partisan bill introduced by Senators Brian Schatz and John Thune in June 2020, the Platform Accountability and Consumer Transparency Act would require Internet platforms to issue public statements on their policies for how they moderate, demonetize, and remove user content from their platforms, and to publish quarterly reports summarizing their actions and statistics for that quarter. The bill would also mandate that platforms comply with all court-ordered removals of content deemed illegal within 24 hours. Further, the bill would eliminate platforms’ Section 230 protections from federal civil liability in cases brought against the platforms and would enable state attorneys general to enforce actions against platforms. Schatz and Thune considered their approach more of “a scalpel, rather than a jackhammer” in contrast to other options presented to date.

Behavioral Advertising Decisions Are Downgrading Services (BAD ADS) Act

Hawley introduced the Behavioral Advertising Decisions Are Downgrading Services Act in July 2020, which would remove Section 230 protections from larger service providers (30 million users in the U.S. or 300 million globally, and more than US$1.5 billion in annual revenue) if their sites used behavioral advertising, with ads tailored to the users of the sites based on how the users had engaged with the site or where they were located. Hawley had spoken out against such ad practices and had previously tried to add legislation requiring service providers to offer “do not track” functionality for Internet ads.

Online Freedom and Viewpoint Diversity Act

Senators Lindsey Graham, Roger Wicker, and Marsha Blackburn introduced the Online Freedom and Viewpoint Diversity Act in September 2020. The bill, if passed, would strip Section 230 liability protection from sites that fail to give a reason for actions taken in moderating or restricting content, and would require them to have an “objectively reasonable belief” that the content violated their site’s terms or face penalties. The bill would also replace the vague “objectionable” term in Section 230 with more specific categories, like “unlawful” material, for which a website would not become liable for taking steps to moderate content.

Executive Order on Preventing Online Censorship

United States President Donald Trump has been a major proponent of limiting the protections of technology and media companies under Section 230 due to claims of an anti-conservative bias. In July 2019, Trump held a “Social Media Summit” that he used to criticize how Twitter, Facebook, and Google handled conservative voices on their platforms. During the summit, Trump warned that he would seek “all regulatory and legislative solutions to protect free speech”.

In late May 2020, President Trump claimed, in both his public speeches and on his social media accounts, that mail-in voting would lead to massive fraud, pushing back against the use of mail-in voting in the upcoming 2020 primary elections amid the COVID-19 pandemic. In a Twitter message on May 26, 2020, he stated: “There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent.” Shortly after its posting, Twitter moderators marked the message with a “potentially misleading” warning (a process it had introduced earlier that month, primarily in response to misinformation about the COVID-19 pandemic), linking readers to a special page on its site that provided analysis and fact-checks of Trump’s statement from media sources like CNN and The Washington Post, the first time it had used the process on Trump’s messages. Jack Dorsey, Twitter’s CEO, defended the moderation, stating that the company was not acting as an “arbiter of truth” but instead, “Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves.” Trump was angered by this and shortly afterwards threatened to take action to “strongly regulate” technology companies, asserting that these companies were suppressing conservative voices.

On May 28, 2020, Trump signed the “Executive Order on Preventing Online Censorship” (EO 13925), an executive order directing regulatory action at Section 230. In a press conference before signing, Trump stated his rationale: “A small handful of social media monopolies controls a vast portion of all public and private communications in the United States. They’ve had unchecked power to censor, restrict, edit, shape, hide, alter, virtually any form of communication between private citizens and large public audiences.” The EO asserts that media companies that edit content, apart from restricting posts that are violent, obscene, or harassing as outlined in the “Good Samaritan” clause of Section 230, are “engaged in editorial conduct” and may forfeit any safe-harbor protection granted in Section 230. From that, the EO specifically targets the “Good Samaritan” clause protecting media companies’ decisions to remove offensive material “in good faith”. While courts have interpreted the “in good faith” portion of the statute based on its plain language, the EO purports to establish conditions under which that good faith may be revoked, such as if the media companies have shown bias in how they remove material from the platform. The goal of the EO is to remove the Section 230 protections from such platforms, leaving them liable for content. Whether a media platform has bias would be determined by a rulemaking process to be set by the Federal Communications Commission in consultation with the Commerce Department, the National Telecommunications and Information Administration (NTIA), and the United States Attorney General, while the Justice Department and state attorneys general would handle disputes related to bias, gathering these into reports to the Federal Trade Commission, which would determine whether a federal lawsuit should be filed. Additional provisions prevent government agencies from advertising on media company platforms demonstrated to have such bias.

The EO came under intense criticism and legal analysis after its announcement. Senator Wyden stated that the EO was a “mugging of the First Amendment”, and that while there does need to be a thoughtful debate about modern considerations for Section 230, the political spat between Trump and Twitter is not such a consideration. Professor Kate Klonick of St. John’s University School of Law in New York considered the EO “political theater” without any weight of authority. The Electronic Frontier Foundation’s Aaron Mackey stated that the EO starts from a flawed reading that links subsections of Section 230 that were not written to be linked and that case law has treated as independent statements in the statute, and thus “has no legal merit”.

By happenstance, the EO was signed on the same day that riots erupted in Minneapolis, Minnesota in the wake of the killing of George Floyd, an African-American man, in an incident involving four officers of the Minneapolis Police Department. Trump tweeted about his conversation with Minnesota’s governor Tim Walz about bringing in the National Guard to stop the riots, but concluded with the statement, “Any difficulty and we will assume control but, when the looting starts, the shooting starts.” The latter phrase echoes one used by Miami Police Chief Walter E. Headley in 1967 in response to violent riots. Twitter, after internal review, marked the message with a “public interest notice”, deeming that it “glorified violence”, which would normally warrant removal for violating the site’s terms, but stated to journalists that they “have kept the Tweet on Twitter because it is important that the public still be able to see the Tweet given its relevance to ongoing matters of public importance.” Following Twitter’s marking of his May 28 tweet, Trump said in another tweet that due to Twitter’s actions, “Section 230 should be revoked by Congress. Until then, it will be regulated!”

On June 2, 2020, the Center for Democracy & Technology filed a lawsuit in the United States District Court for the District of Columbia seeking preliminary and permanent injunctions against enforcement of the EO, asserting that the EO created a chilling effect on free speech, since it puts all hosts of third-party content “on notice that content moderation decisions with which the government disagrees could produce penalties and retributive actions, including stripping them of Section 230’s protections”.

The Secretary of Commerce, via the NTIA, sent a petition with a proposed rule to the FCC on July 27, 2020, as the first stage of executing the EO. FCC chair Ajit Pai stated on October 15, 2020 that, after reviewing what authority the Commission has over Section 230, the FCC would proceed with putting forth proposed rules to clarify Section 230. Pai’s announcement, which came shortly after Trump again called for Section 230 revisions after asserting that Big Tech was purposely hiding reporting on leaked documents involving Hunter Biden, Joe Biden’s son, was criticized by the Democratic FCC commissioners Geoffrey Starks and Jessica Rosenworcel and by the tech industry, with Rosenworcel stating “The FCC has no business being the president’s speech police.”

A second lawsuit against the EO was filed by activist groups including Rock the Vote and Free Press on August 27, 2020, after Twitter had flagged another of Trump’s tweets for misinformation related to mail-in voting fraud. The lawsuit argued that if the EO were enforced, Twitter would not be able to fact-check tweets like Trump’s as misleading, allowing the President or other government officials to intentionally distribute misinformation to citizens.

I would assume that “extort” isn’t the word that the author of a new report on Section 230 would prefer. Maybe “no carrot, all stick”? Actually, he instead uses the much nicer phrase “leverage to persuade platforms to accept a range of new responsibilities.” But either way, the pitch is pretty similar:

Hey, tech companies — that’s a nice bit of protection from legal liability you’ve got there. It would be a real shame if anything were to happen to it. Maybe if you cut a big enough check every month, we could make sure it stays nice and secure.

For those unfamiliar, Section 230 of the Communications Decency Act of 1996 is the bedrock law that allowed for the evolution of the digital world we know today, for better and for worse — the “twenty-six words that created the internet.” It says that, when it comes to legal liability, websites should be treated more like newsstands than like publishers.

In the print world, if the Daily Gazette prints something libelous about someone, that person can sue the newspaper. But they generally can’t sue the Barnes & Noble where copies of the Gazette were being sold. The thinking is that if you made a newsstand legally liable for the content of every single newspaper and magazine and book it sells…that’d be a pretty strong incentive to get out of the newsstand business. And even if a newsstand stayed open, it would likely become more boring, culling the publications it carries to a few middle-of-the-road options in order to limit its liability.

Section 230 says that an “interactive computer service” would not be “treated as the publisher or speaker of any information” that is provided by a third party — like one of its users posting a comment or sharing a link. So if you post defamatory material about your neighbor on Facebook, you are legally liable for it — but Facebook isn’t. And indeed, it would be hard for Facebook, Twitter, Google, or any other sort of digital service provider to exist in their current forms without Section 230. If they were all legally responsible for everything on their platforms, it’d be hard to imagine they’d let random users publish on them.

Section 230 also allows sites to moderate legal content without (generally) being open to litigation. In America, it’s perfectly legal to be a Nazi and to say pro-Nazi things — but if YouTube removes a pro-Nazi video, the Nazi can’t sue claiming his First Amendment rights have been violated.
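
To restate that division of responsibility in mechanical terms, here is a tiny, purely illustrative Python sketch. The function names and arguments are invented for this post; they are not from the statute or any real library.

```python
# Illustrative only: a toy model of the liability routing described above.
# All names here are hypothetical.

def liable_for_defamation(poster: str, host: str) -> list[str]:
    """Section 230(c)(1): the user who posted the content can be sued;
    the 'interactive computer service' hosting it cannot. Note that
    `host` never enters the result - that is the whole point."""
    return [poster]

def moderation_exposes_host_to_suit(removed_in_good_faith: bool) -> bool:
    """Section 230(c)(2): good-faith removal of objectionable content,
    even constitutionally protected speech, is shielded from civil suits."""
    return not removed_in_good_faith

print(liable_for_defamation("your_neighbor", "Facebook"))  # ['your_neighbor']
print(moderation_exposes_host_to_suit(True))               # False
```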

There has been a lot of political hubbub about Section 230 of late; both Donald Trump and Joe Biden say they want to revoke it. Trump sees it as protecting dastardly social media companies that target conservatives and try to fact-check his tweets. Biden sees it as protecting dastardly social media companies that amplify Trump’s falsehoods and extremist content.

Into this debate comes this new paper, by former Businessweek journalist Paul Barrett, now deputy director of NYU Stern’s Center for Business and Human Rights. It’s titled “Regulating Social Media: The Fight Over Section 230 — and Beyond.” It’s a good and valuable contribution, with excellent background summaries of various points of view and filled with good ideas…and one not-as-good one I’m going to complain about for a bit.

Barrett argues for a three-step approach:

1. Keep Section 230

The law has helped online platforms thrive by protecting them from most liability related to third-party posts and by encouraging active content moderation. It has been especially valuable to smaller platforms with modest legal budgets. But the benefit Section 230 confers ought to come with a price tag: the assumption of greater responsibility for curbing harmful content.

2. Improve Section 230

The measure should be amended so that its liability shield provides leverage to persuade platforms to accept a range of new responsibilities related to policing content. Internet companies may reject these responsibilities, but in doing so they would forfeit Section 230’s protection, open themselves to costly litigation, and risk widespread opprobrium.

3. Create a Digital Regulatory Agency

There’s a crisis of trust in the major platforms’ ability and willingness to superintend their sites. Creation of a new independent digital oversight authority should be part of the response. While avoiding direct involvement in decisions about content, the agency would enforce the responsibilities required by a revised Section 230.

So the threat of opening up massive legal liability should be used as “leverage to persuade platforms to accept a range of new responsibilities related to policing content” — to turn it into “a quid pro quo benefit.” What could those responsibilities be? The paper offers a few ideas (emphases mine).

One, which has been considered in the U.K. as part of that country’s debate over proposed online-harm legislation, would “require platform companies to ensure that their algorithms do not skew toward extreme and unreliable material to boost user engagement.”

Under a second, platforms would disclose data on what content is being promoted and to whom, on the process and policies of content moderation, and on advertising practices.

Platforms also could be obliged to devote a small percentage of their annual revenue to a fund supporting the struggling field of accountability journalism. This last notion would constitute a partial pay-back for the fortune in advertising dollars the social media industry has diverted from traditional news media.

I like the idea of the tech giants giving money to journalism as much as anyone. And I have no particular objection to items 1 and 3 on the paper’s to-do list. But I have to say No. 2 — making liability protection contingent on accepting other, sometimes only tangentially related policy proposals — bugs me. A few reasons:

  • Any of these ideas could become law without getting Section 230 involved. If Congress wants to tax Facebook and Google and use the proceeds to fund journalism, it can…just do that. If it believes requiring disclosure of algorithms and transparency in moderation policies are good ideas, it can pass laws to do so. If a company refuses, fine them or sue them to make them change. There’s no need to tie even the most sensible or well-intended regulations to the legal protection that has basically allowed the Internet to exist. Speaking of which…
  • Section 230 protects every website, not just Facebook and other giants. Every blog, every personal website, every online forum, every chat app, every app where people review restaurants or books or gadgets — they’re all able to function the way they do because of 230, which regulates “interactive computer services,” not just giant social media companies.Why can news sites publish reader comments? Because of Section 230. Imagine if your favorite news outlet was suddenly liable for potentially massive damages because some rando posted “John Q. Doe is a child molester!” under one of its stories. What would an outlet in that situation likely do? Kill off the comments or any other kind of public input that increases liability.Which is why…
  • Incentivizing regulation-via-lawsuit is a bad way to encourage good behavior. We’ve already seen, with cases like Peter Thiel killing Gawker and an Idaho billionaire targeting Mother Jones, that litigation is a powerful tool for the uber-rich to go after news sources they don’t like. Even if the suits don’t have merit, they can easily soak millions of dollars out of news companies with thin margins. Removing Section 230 protections would mean, for example, that a politician could sue a local gadfly blogger over a comment getting moderated or not.In other words…
  • A policy like this favors incumbents and the powerful. Facebook and Google have the profits to be able to deal with lawsuits. But if you’re a small upstart hoping to become the next big thing? Good luck paying lawyers the first time someone uploads a libelous twizzle or a faceplurp or a snapdat or whatever you call it. And frankly, different parts of the Internet should have different ideas about what content is allowable. What counts as “extreme” content on Facebook might not be what counts as “extreme” content on a niche forum.And finally…
  • Do you trust the government to get content moderation right — when the potential penalties for getting it wrong are so huge? Let’s say Congress passes a law saying that, in order to retain their legal protections, websites must “ensure that their algorithms do not skew toward extreme and unreliable material.” Okay — that would require a definition of “extreme and unreliable materials.” Whatever that definition is, it will mark a universe of acceptable speech that is smaller than what the First Amendment allows. And whatever that definition is, it will be up to the executive branch — whether via a new regulatory agency, the Department of Justice, or some other entity — to do rule-making around it and to enforce it.To be blunt: Would you trust the Trump administration to use that power well? This is a president who, just a few months ago, signed an executive order declaring it unacceptable that a Democratic congressman’s tweet “peddling the long-disproved Russian Collusion Hoax” was allowed. The order didn’t do much, practically speaking, because an executive order can’t cancel Section 230. But if whether or not Twitter had legal protections was based on an administration determination that it was not promoting “extreme and unreliable materials,” the scenario is very different. Literally just yesterday, Trump said Twitter should not be allowed to keep up an obviously photoshopped meme of Mitch McConnell and that “Mitch must fight back and repeal Section 230, immediately. Stop biased Big Tech before they stop you!”


Barrett’s paper, to its credit, acknowledges many of these problems, even sketching what a hypothetical world without Section 230 would look like.

The tech giants need greater regulation on a host of issues. But Section 230 has become a political football for all the wrong reasons. Don’t hold the legal heart of the open web hostage in the process.

Republicans are crying “censorship” to pressure Facebook, Twitter, and Google to let them spread misinformation. But if they really got rid of Section 230, they’d be dragging everyone else down with them.

Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google/Alphabet CEO Sundar Pichai appeared remotely before the Senate Commerce Committee on Wednesday for yet another congressional hearing on content moderation and supposed anti-conservative bias.

The title of the hearing was “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” Some conservatives say that Section 230 of the Communications Decency Act of 1996 is the reason why the Big Tech companies can get away with “censoring” their content without facing legal repercussions.

That’s literally the opposite of what’s going on.

What is Section 230? 

Basically, it protects tech companies from legal liability for what their users post on their platforms, with exceptions for things like intellectual property infringement, sex trafficking, and other violations of federal criminal law. It puts the legal liability on the user who posts the content, not the company that hosts it.

Without Section 230, Big Tech companies would be more cautious about what’s allowed on their platforms. Some might even abandon user-generated content. Why host it at all if the legal risk outweighs the benefit?

Big Tech companies aren’t legally liable for, say, harmful QAnon conspiracies and President Donald Trump’s coronavirus misinformation thanks to Section 230. 

When they do take down or limit the reach of content, it’s their choice — they’re not required by law to do so. Section 230 gives companies the ability to moderate as they please. That’s why complaints about them violating the First Amendment are meritless. They have a right to not host certain types of content.

And it’s not just Big Tech that would be affected. Do you have a blog? Without Section 230, you could be held liable for what your readers say in the comments section.

Getting rid of Section 230 could end up censoring everyone: platforms wary of liability might stop letting anyone share their ideas and opinions on social media at all.

Republicans on the committee have accused the digital platforms of liberal bias and have called for limits on Section 230’s liability shield, which has granted Big Tech almost blanket immunity in the past. Democrats scoff at the notion of there being any anti-conservative slant on the platforms, pointing to data that shows right-wing content garnering much of the traffic on Facebook and to reports that Facebook has modified its newsfeed algorithm to the detriment of left-leaning news sites. Instead, Democrats fault the tech platforms for failing to adequately moderate content to stop the spread of disinformation, harmful content, extremism and voter manipulation. Both critiques miss the heart of the matter.

First, the platforms’ business models — originally designed to manipulate and addict — are the very reason that disinformation goes viral in the first place. Second, whether the tech platforms are making fair, unbiased moderation decisions about what content reaches millions of people is the wrong question to ask. Instead, Congress and the American people should ask why we are allowing Big Tech that power in the first place. We must drastically deconcentrate control over speech so that any single company’s bias or business model cannot have a sizable impact on public discourse.

Too much control over the marketplace of ideas is antithetical to democracy. That’s why America has long had rules that limit such power, like media ownership limits that prohibit the same corporation from owning a TV station and newspaper in the same place. (Trump’s Federal Communications Commission, incidentally, is currently trying to roll back those rules.) The government failed to limit consolidation and allowed Facebook and Google to morph into monopoly monsters. Big Tech didn’t simply grow by being the best; the House Judiciary Antitrust Subcommittee’s recent 450-page report details just how extensive their anticompetitive behavior has been. About 68% of American adults get some news from social media, according to Pew. And Facebook is now the second-largest news provider when measured in terms of share of Americans’ attention, according to a University of Chicago study.

Meanwhile, the news industry has been in a state of crisis, with newsroom employment falling 51% between 2008 and 2019, and more than half of America’s counties now lacking a daily local paper. In the digital age, online advertising could largely support journalism just as print advertising did in the past. But Facebook and Google account for 85% to 90% of the growth of the more than $150 billion North American and European digital advertising market, according to Jason Kint, CEO of Digital Content Next, a major trade association for publishers. In 2019, Facebook made roughly $1.2 billion a week from ad revenue alone. Not only is a strong press critical to combat deceptive propaganda, but the very same business model that sucks away news publishers’ revenue also helps disinformation and polarization thrive.

The platforms have made some policy changes purporting to lessen disinformation and election interference. Facebook announced it is blocking new political and issue ads the week before the election, while allowing existing ads to continue to run, and it will take measures to combat voter suppression, like partnering with state election authorities to remove false claims about voting. Google-owned YouTube has promised to remove videos that include deceptive information about voting, and Google has said it launched two features in Search that offer information about voting. Twitter has rolled out changes like labeling misleading tweets and removing tweets meant to incite election interference. But a recent study by the German Marshall Fund found that “the level of engagement with articles from outlets that repeatedly publish verifiably false content has increased 102 percent since the run-up to the 2016 election.” The platforms’ problems are nowhere near solved.

As a result of their extensive data collection, the platforms have so much data on each individual user, and on millions of people who are like them, that their algorithms can target individuals based on their particular vulnerabilities — precisely identifying which people are most likely to be susceptible to, as one investigative report found, “pseudoscience,” like coronavirus misinformation. Facebook and Google have grown their data collection capabilities through a series of mergers (think Google’s purchases of DoubleClick, AdMob, YouTube and Android; and Facebook’s acquisitions of Instagram and WhatsApp). In turn, their data collection furthers their monopoly power and is the reason they dominate digital advertising. The platforms can track you and then allow other parties to hyper-target ads and content at you based on that surveillance.

Facebook and Google’s data collection infrastructures are vast indeed. The platforms have all-encompassing views of what people read, think, do, believe, buy, watch, where they go and who they are with. Google, for example, collects user information through its own services like Search, YouTube, Maps, Android, Chrome, Google Play, Gmail and Google Home.

Mark Zuckerberg often claims that Facebook is committed to free expression. But free expression is impossible when one company controls the flow of speech to close to 3 billion people, using individualized, targeted feeds of content intended to make people react. Instead of a public marketplace of ideas, each user sees a private, personalized set of facts, making it nearly impossible to counter harmful speech with more speech.

Advertising used to be done based on context (e.g., you see an ad for diapers because you’re reading an article about newborns), not identity (e.g., you see an ad for diapers because a social media platform identified you as pregnant after collecting your ovulation data from an app). An important solution is to return to context-based advertising. At the very least, digital platforms should be required to tell users who is funding the messages they see and why exactly they are being personally targeted, under basic truth-in-advertising rules already on the books. Facebook has provided an ad library application programming interface (API) for researchers, but critics say it is not transparent enough. Leaving it up to the platforms to self-regulate is not a solution: the platforms’ business models are too profitable for the corporations to give them up voluntarily.

The House Judiciary antitrust report provides Congress with a path forward that will enable it to deconcentrate the platforms’ control over speech: structurally break them up, reform the antitrust laws, pass non-discrimination rules that prohibit preferential treatment of certain types of content or of the platforms’ own products, and pass interoperability rules that require platforms to make their infrastructure compatible with upstart competitors. Strong privacy rules are also needed to curb the surveillance that fuels the platforms’ targeting. Starting today, we must dismantle the platforms’ business models that are tearing apart America — and countries around the globe. And we must enforce the antitrust laws we already have, while pushing for legislative reform. Our democracy cannot afford to wait.
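To make the transparency point concrete: the ad library API mentioned above is queryable over HTTP. Below is a minimal Python sketch of the kind of request a researcher might make, based on how Facebook documented the ads_archive endpoint around 2020; the API version, field names, and the access token are assumptions to verify against the current documentation, not a definitive recipe.

    import requests

    # Hypothetical query against Facebook's Ad Library API (ads_archive endpoint).
    # Endpoint shape and parameter names follow Facebook's circa-2020 docs;
    # check current documentation before relying on them.
    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; requires a verified account

    params = {
        "search_terms": "election",             # free-text search over ad content
        "ad_type": "POLITICAL_AND_ISSUE_ADS",   # the category the library covers
        "ad_reached_countries": "['US']",       # where the ads were shown
        "fields": "page_name,funding_entity,spend,impressions",
        "access_token": ACCESS_TOKEN,
    }

    resp = requests.get("https://graph.facebook.com/v8.0/ads_archive",
                        params=params, timeout=30)
    resp.raise_for_status()

    for ad in resp.json().get("data", []):
        # Each record discloses who paid for the ad plus rough reach/spend ranges.
        print(ad.get("page_name"), "| funded by:", ad.get("funding_entity"))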

Conclusion

I have covered all sides of this matter. I do believe that there is a need for some kind of regulation of the Big Tech companies. They have been guilty of censorship, and of blocking out important news that goes against their beliefs. If they want to do this, I think they should lose the protection of Section 230. I believe they were instrumental in the election debacle that we find ourselves in. They say that they are trying to prevent fake news from being posted on their sites. That is not their job. They know what is illegal and immoral, and that is what they should censor. It doesn’t take a genius to know what is right and what is wrong. Child pornography is bad and should be blocked. If someone tries to incite a riot or tries to coerce someone into committing a crime, that is bad and should be blocked. But a lively debate, or a discussion of news, fake or otherwise, is OK. They are trying to be “Big Brother” from the novel 1984. The creators of these sites are very intelligent people, so the idea that they don’t know what is or isn’t censorship is inconceivable. They have made billions of dollars from the protection provided by this law; so much money, in fact, that they are trying to subvert our government to protect their power base. It has nothing to do with freedom of speech or protection from the big bad wolf. It is all about greed and power. If you have a question about anything, just follow the money; the trail will always lead you to the correct answers.

Resources

en.wikipedia.org, “Telecommunications Act of 1996,” by Wikipedia editors; en.wikipedia.org, “Section 230,” by Wikipedia editors; niemanlab.org, “Section 230 to force the tech giants into paying for the news?” by Joshua Benton; mashable.com, “What really happens if Republicans get rid of Section 230,” by Matt Binder; cnn.com, “Forget bias, the real danger is Big Tech’s overwhelming control over speech,” by Sally Hubbard

Postings for Big Tech, Social Media and Corporations

https://common-sense-in-america.com/2020/09/19/what-is-woke/
https://common-sense-in-america.com/2020/08/06/much-to-do-about-tiktok/
https://common-sense-in-america.com/2020/08/05/did-the-mob-leave-las-vegas/
https://common-sense-in-america.com/2020/08/01/why-are-tech-companies-biased/
https://common-sense-in-america.com/2020/06/17/corporate-donations-to-the-blm-and-attempt-to-placate-the-left/
https://common-sense-in-america.com/2020/11/10/how-did-the-communications-decency-act-affect-social-media/
https://common-sense-in-america.com/2020/06/09/electric-cars-are-they-worth-the-hype/
https://common-sense-in-america.com/2020/06/12/5g-networking-who-will-win-the-race/
https://common-sense-in-america.com/2020/06/24/news-bias-what-is-the-media-afraid-of/
https://common-sense-in-america.com/2020/10/27/machine-learning-fairness/
https://common-sense-in-america.com/2020/07/12/is-social-distancing-destroying-our-moral-fiber-and-culture/
https://common-sense-in-america.com/2020/07/27/when-and-why-did-the-media-become-biased-is-it-a-tool-of-the-left/