Gonzalez v. Google to be debated on the Supreme Court docket

By Kaden Silva

The Supreme Court of the United States (SCOTUS) is scheduled to hear Gonzalez v. Google on February 21, a case that questions Section 230(c)(1) of the Communications Decency Act of 1996.

Historically, Section 230 was drafted to shield platforms from legal liability for the content that users post. The case reached the Supreme Court after being dismissed by the U.S. District Court for the Northern District of California, a dismissal the Ninth Circuit Court of Appeals later affirmed, according to Oyez.

The results of this case will have enormous implications for the regulation of online speech facilitated by social media companies and may lead to tighter restrictions on content uploaded to social media platforms.

Gonzalez v. Google arose from a series of coordinated terrorist attacks in and around Paris on November 13, 2015, in which Nohemi Gonzalez, a U.S. national, was among 19 victims killed in a shooting at one of the attack sites. Her family sued Google, claiming its recommendation algorithms aided the attacks by recommending ISIS propaganda and recruitment videos to the perpetrators, thereby violating the Anti-Terrorism Act, according to SCOTUSblog.

This is not the only case in which internet platforms have come under fire for user content. For example, Taamneh v. Twitter, Inc. also asks whether the platforms Google, Twitter, and Facebook “knowingly” provided meaningful assistance to foreign terrorist organizations, violating the Anti-Terrorism Act.

In another case, NetChoice v. Paxton, social media companies challenged HB20, a Texas law enacted with the intention of preventing users in Texas from being “censored” solely based on the user or their viewpoint being expressed. The 5th Circuit Court of Appeals ruled in favor of Texas, claiming that social media companies were subject to non-discriminatory statutes regulating “common carrier” entities.

These cases may signal a change in course in how the judiciary determines liability for social media companies. Congress’ laissez-faire approach to internet discourse may have served the early web well, but with the advancement of AI and machine learning algorithms, critics are calling that approach into question.

In an April 2019 interview on Vox’s podcast series Recode Decode, then-Speaker of the House Nancy Pelosi said, “[Section 230] is a gift to [tech companies] and I don’t think that they are treating it with the respect that they should. For the privilege of 230, there has to be a bigger sense of responsibility on it, and it is not out of the question that that [privilege] could be removed.”

The U.S. Department of Justice also suggested areas for reform in legislative proposals transmitted to Congress on September 23, 2020. It recommended a clause addressing “Bad Samaritans,” platforms that purposely incentivize the publishing, use, and dissemination of inflammatory or illegal content. It also recommended amending certain clauses that protect internet platforms to define limits on the immunities the law provides, such as clarifying that Section 230 cannot be used by tech platforms to block antitrust lawsuits.

Defenders of Section 230 believe the benefits still outweigh the costs of the statute. The Electronic Frontier Foundation (EFF), a nonprofit organization promoting digital privacy and free speech, believes that Section 230 has created a flourishing environment for internet discourse.

“Without Section 230’s protections, many online intermediaries would intensively filter and censor user speech, while others may simply not host user content at all,” EFF said. “It allows users to share speech and opinions everywhere, from vast conversational forums like Twitter and Discord, to the comment sections of the smallest newspapers and blogs.”

This case is part of the wider debate over internet speech, and its outcome could fundamentally change how private entities moderate online forums and how public entities adjudicate cases related to internet speech.

