Is Section 230 the Root of Censorship?

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Section 230(c)(1) is a provision of the Communications Decency Act (CDA) of 1996, a federal law in the United States that regulates the behavior of online platforms and internet service providers. Contained therein are what some authorities have called “the twenty-six words that created the internet.”

The obscure provision has become a cornerstone of modern American society in the Information Age, though most people have never even heard of it. 

To supporters of 230, it’s a touchstone: a foundational rock upon which to build a new frontier of justice and equity. To its detractors, Section 230 is a millstone, dragging Silicon Valley down a path of dystopian error.

What is Section 230?

Section 230 provides immunity to online platforms and internet service providers from liability for third-party content posted on their websites or services. This means that if a user posts something defamatory, illegal, or otherwise problematic on a website, the website itself is generally not held responsible for that content.

The law is sometimes credited with enabling the growth of the internet and online platforms by allowing them to host user-generated content without fear of being held liable for that content. However, it has also been the subject of controversy, particularly in recent years.

Supporters of Section 230 argue that it allows platforms to efficiently moderate content without being bogged down by frivolous lawsuits. They believe it enables platforms to quickly remove problematic content without fear of legal repercussions, which helps to create a safer and more productive online environment for all users.

Critics of Section 230 argue that the law allows online platforms to avoid responsibility for the spread of harmful or false information. They accuse some companies of using Section 230 as a shield against accountability, particularly in cases where a platform may have been aware of such content but failed to remove it promptly.

In this particular objection to Section 230, its opponents may be gaining ground. In August of 2021, a federal judge ruled that Twitter could be sued in a lawsuit accusing the platform of refusing to remove child pornography and block traffickers.

“Twitter refused to remove child porn because it ‘didn’t violate policies,’” according to the lawsuit.

“The federal suit, filed Wednesday by the victim and his mother in the Northern District of California, alleges Twitter made money off the clips, which showed a 13-year-old engaged in sex acts and are a form of child sexual abuse material, or child porn, the suit states,” reported the New York Post on January 21, 2021.

The charge of profiting from the images may prove particularly troublesome for the social media giant. Twitter advertisers threatened to abandon the platform in 2022 after major brand advertisements appeared side-by-side with accounts dedicated to child pornography.

“We were horrified,” said stunned Cole Haan president David Maddocks after Reuters found a Cole Haan ad juxtaposed with a tweet trading “teen/child” content. “Either Twitter is going to fix this, or we’ll fix it by any means we can, which includes not buying Twitter ads.”

While platforms like Twitter face legal trouble over the content they allow on their sites, social media companies are facing an equal amount of pressure over the content their moderators have removed.

Opponents of Section 230 argue that it enables online platforms to engage in censorship by allowing them to remove or limit access to content that they deem objectionable. The law, they argue, can be used to suppress dissenting views and stifle free speech.

Judging by the recently released “Twitter Files,” this may have been the case, if not the intention, of some content moderators who perhaps got a bit carried away, or were carried away on a current of government content moderation requests.

As the Twitter Files have revealed the pipeline of content moderation requests running from branches of the U.S. government to the offices of Twitter, another potential conflict of interest is coming into focus.

Lawmakers making frequent requests for the removal of content they find objectionable also hold a great deal of regulatory power over social media companies, including the power to alter Section 230 protections.

Is a request really a “request” if it comes from a government agency or representative with the power to repeal the very laws protecting social media companies from liability for illegal content?

It doesn’t help matters that many of the requests for content moderation came from Section 230-friendly Democrats. Nor does it help that some of the most vocal critics of Section 230, like Sen. Josh Hawley, were often singled out for content moderation and shadow banning.

On a social media platform where the pornographic images of exploited children go untouched by content moderators, a demonstrated capability for precise, immediate targeting of right-wing content and “misinformation” seems a bit incongruous. 

If social media sites don’t moderate in a certain way, in favor of powerful corporate interests for instance, what other repercussions might they face?

“Covid-19 Drugmakers Pressured Twitter to Censor Activists Pushing for Generic Vaccine,” reported investigative journalist Lee Fang for The Intercept on January 16, 2023. The Intercept is hardly known as a right-wing publication, and the outlet’s revelations were shocking.

In the report, Fang laid out his evidence that the drugmaker BioNTech pressured Twitter executives to censor an activist campaign to secure generic vaccines and low-cost therapeutics for developing nations.

“It is not clear to what extent Twitter took any action on BioNTech’s request,” wrote Fang, noting that Twitter executives debated in emails how to handle the requests.

Though Twitter execs combed through the activist social media accounts involved in the effort, they apparently found nothing that violated the company’s policies and asked BioNTech for further clarification to “get a better sense of the content that may violate our policies.”

“But it shows the extent to which pharmaceutical giants engaged in a global lobbying blitz to ensure corporate dominance over the medical products that became central to combatting the pandemic,” concluded Fang.

“Ultimately, the campaign to share Covid vaccine recipes around the world failed,” he noted.

Choosing what content you allow to be published makes you a publisher. And since social media giants like Twitter seem very much into choosing what, and whose, content users are allowed to see (the Taliban, yes; CDC data, no), they are behaving as publishers, not utility providers. They may soon be facing a reckoning as such.

“The Supreme Court is about to hear a case that could upend protections Big Tech has enjoyed for years, and the internet may never be the same,” reported Fortune on February 18, 2023.

For the first time, the Supreme Court will hear arguments on Section 230.

Will social media executives be able to explain why illegal content proliferates while content moderators engage in what appears to be a bespoke campaign of censorship made to order?

Or will the Supreme Court inspire new restrictions on Big Tech?

(Written by Brooke Bell)
