Trump and CDA Section 230: The End of an Internet Exception?

The debate on liability of social media providers and Section 230 of the US Communications Decency Act.

31/07/2020


 

Should anybody in the world be allowed to anonymously post anything they want on widely accessed social media platforms such as Facebook or Twitter, with no control at all? Probably not. In fact, social media platforms do monitor content and remove or block material that they consider objectionable. This might not have been an issue if there were many competing platforms: users could vote with their feet, choosing platforms whose content policies correspond to their own ideas of acceptable content.

 

This, however, is not the case. In reality, a very small number of service providers dominate the market, imposing their views of what is and is not acceptable content. This monopolistic hold of a few social media platforms over public discourse has led to increasing criticism, including calls to revisit a US law enacted some 25 years ago expressly to favor competition and innovation among providers of services that allow users to publish their views on the internet. Most recently, the matter escalated into a seeming conflict between US President Donald Trump and Twitter. This culminated in an executive order after the platform added a fact-check link to one of Donald Trump’s tweets for the first time, and then hid another of his tweets, citing a violation of Twitter’s terms of service. The executive order targets Section 230 of the Communications Decency Act (CDA 230), which confers immunity on internet companies for content that they host but that is generated by their users.

 

Given that this escalating tension is also taking place in a non-competitive market, it is appropriate and urgent to revisit the issue, not least because the law in question no longer seems fit for its intended purpose in the present environment. Since CDA 230 is also one of the very few internet-specific laws, questioning it could be seen as heralding the end of an internet exception created 25 years ago.

 

The Origins of an Internet Exception

 

Starting around 1990, internet service providers began offering new services to their growing user base. These included an electronic version of a bulletin board, where users could “post” (in effect, publish electronically) whatever they wanted on the provider’s platform. The services offered at the time were quite primitive compared to Facebook, Instagram, Twitter, etc. However, they raised an obvious legal issue: who would be responsible for removing illegal content, and who would be held liable if such content was not removed? Did that responsibility lie with the service provider, the user, or both?

 

One such service at the time, CompuServe, stated that it would not, in any way, monitor or regulate what users posted. It thus sought to be viewed as a “carrier” or a newsstand: a passive distributor of material or content which could not be held liable for what was published on its site. Another service, Prodigy, had a different approach: its staff reviewed content and prevented the publication of what Prodigy considered illegal or inappropriate.

 

Both were subject to legal challenges. In Cubby, Inc. v. CompuServe Inc., the court held that CompuServe was indeed a passive distributor and thus not liable for the content posted on its service. In Stratton Oakmont, Inc. v. Prodigy Services Co., the court found Prodigy liable because it exercised control over content, like a regular publisher, and therefore had the same liability as a newspaper.

 

At the time, some people thought that the internet should not be governed by offline laws, and internet service providers lobbied to have a special law passed that would allow them to monitor content without becoming fully liable for what they published. The lobbying was successful and led to the US Congress adopting Section 230 of the Communications Decency Act.

 

This law did not shield internet service providers from liability for the publication of material that infringes copyright, but exempted them from liability for any other material posted by their users. Service providers were also exempted from liability if they exercised editorial control, in particular by specifying “acceptable use conditions” and removing any material that they considered obscene or offensive, provided they acted in good faith.

 

As Roger Cochetti, one of the lobbyists involved at the time, recently stated:

 

“The world [in 1994] was a very different place than it is today: Around 25 million people used the internet (up from 14 million the year before); there were about 3,000 websites (up from a few hundred the year before); [Amazon, Facebook, etc. did not exist].

 

Against this background, a small number of Washington insiders began to formulate the ideas that soon led to major laws and regulations that have determined to this day that the internet is fundamentally different from broadcasting, publishing, cable TV, telephony and even private computer networks.”

 

The Current Context

 

As Cochetti correctly notes, many things have changed since the mid-1990s. Three things are of particular interest to us here:

 

  • The internet has been, and continues to be, used by both government and non-government actors to disrupt (or, at least, to attempt to disrupt) the security, politics, and economies of perceived enemies.
  • There is increasing concentration among key service providers on the internet, such as search engines, social networks, and online shopping services. This concentration arises from economies of scale and network effects. Arguably, it could have been foreseen; instead, it either was not foreseen, or was tolerated, or even desired.
  • The monetization of data, sometimes called “the internet’s original sin”, has weakened traditional media, eroded democracy, and led to “surveillance capitalism”. As senior internet engineer Geoff Huston puts it: “The internet has been changed irrevocably from being a tool that allows computers to communicate, to a tool that allows enterprises to deploy tools that are simply intended to monetize users”.

 

Since around 2016, social media companies, particularly Facebook and Twitter, have been criticized on the grounds that their content control is not politically neutral. This criticism has largely been voiced in the US, although arguably even stronger criticism could be voiced by non-US actors, since the “acceptable use” criteria for social media platforms are typically developed by white men in the US based on their perception of what is acceptable.

 

The criticism focuses on two issues. On the one hand, it is said that social media platforms must act quickly to remove material that promotes or facilitates terrorism, endangers children, promotes hate, etc. On the other hand, it is said that social media must not censor or infringe upon free speech. The latter criticism acquires particular significance because, under CDA 230, there is no effective judicial, political, or democratic control over the way in which platforms control content.

 

This ongoing debate and criticism came to a head recently when Twitter flagged some content posted by US President Trump and he issued an executive order calling, inter alia, for a review of the exemptions from liability granted under CDA 230. This order was criticized on a number of grounds, in particular, because it might violate the free speech provisions of the US Constitution (which apply to companies as well as individuals).

 

Possible Ways Forward

 

Following this executive order, the US Department of Justice (DoJ) initiated a consultation in which it put forward the following ideas for discussion:

 

  • Increase the incentives for online platforms to remove illicit content
  • Change CDA 230 to remove the immunity for civil enforcement actions brought by the federal government
  • Promote competition
  • Make the terms of CDA 230 more precise to promote free speech and increase the transparency of how content is moderated and removed

 

However, the DoJ also proposed that the law should clearly provide that a platform’s removal of content consistent with its terms of service does not, on its own, render the platform a publisher or speaker for all other content on its service.

 

This last proposal appears to strengthen CDA 230 and enshrine the current situation. However, US Presidential candidate Joe Biden and others have called for an outright repeal of CDA 230.

 

An outright repeal would mean that social media platforms would be subject to the same laws as any other media. That is, repealing CDA 230 would be consistent with the (now) generally accepted principle that offline laws apply equally online. In such a situation, roughly speaking, a media outlet is fully liable for published content if it cannot identify the author, and is co-responsible with the author if it can.

 

It must be stressed that CDA 230 is one of the few laws that are internet-specific, in the sense of creating a special regime for the internet that is different from the regime that applies offline.

 

With the benefit of hindsight, it appears that CDA 230 was a mistake. It was initiated in a rush to create a limited legislative solution to a broader issue before the implications had been properly understood. It would have been better to let the courts deliberate longer on applying existing laws to the internet, before attempting to frame a new law.

 

Be that as it may, the debate is now open: should CDA 230 remain unchanged? Should the US continue to attempt to include provisions equivalent to CDA 230 in free trade agreements, that is, to impose its regime on other countries?

 

What should be done to reduce the power of the existing social media giants and so increase diversity? Would better enforcement of existing antitrust law be sufficient? Or should there be ex ante regulation to encourage competition? If so, what sort of regulation would make sense? For example, could it encourage greater use of decentralized, federated, open source solutions such as Mastodon or Diaspora?

 

Would technology help? For instance, would it make sense to use strong authentication of content (e.g. adult entertainment), thus allowing end-users to filter it?
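By way of illustration only, here is a minimal sketch of one way such filtering could work, assuming a hypothetical scheme in which a labelling authority attaches an authenticated label (such as “adult”) to each item and the end-user’s client verifies the label before deciding whether to display it. The key, label names, and functions below are assumptions made for the sketch, not an existing standard, and a real deployment would use public-key signatures rather than a shared secret.

```python
import hashlib
import hmac

# Illustrative shared key of a hypothetical label-certification authority.
# A real scheme would use public-key signatures rather than a shared secret.
AUTHORITY_KEY = b"example-label-authority-key"

def sign_label(content_id: str, label: str) -> str:
    """Issue an authenticated label (e.g. 'adult', 'general') for a piece of content."""
    message = f"{content_id}:{label}".encode()
    return hmac.new(AUTHORITY_KEY, message, hashlib.sha256).hexdigest()

def label_is_authentic(content_id: str, label: str, signature: str) -> bool:
    """Check that the label was really issued for this content and not forged."""
    return hmac.compare_digest(sign_label(content_id, label), signature)

def should_display(post: dict, blocked_labels: set) -> bool:
    """Client-side filter: show the post only if its label is authentic and not blocked."""
    if not label_is_authentic(post["id"], post["label"], post["signature"]):
        return False  # unauthenticated label: treat as untrusted and hide it
    return post["label"] not in blocked_labels

# Example: a user who chooses to filter out adult content.
post = {"id": "123", "label": "adult", "signature": sign_label("123", "adult")}
print(should_display(post, blocked_labels={"adult"}))  # False: filtered out
```

The point of the sketch is simply that filtering decisions could be moved to the end-user, provided the labels attached to content can be trusted.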

 

Should social media be public services (provided perhaps using federated open source solutions)? If so, how could freedom of speech be guaranteed, given that all governments restrict freedom of speech (even if the US restricts it less than others)?

 

Regarding freedom of speech, it is important to recall that neither national laws nor international instruments (e.g. the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights) provide for a right to be published in the medium of one’s choice. Rather, citizens are entitled to say (freedom of speech) or print (freedom of the press) whatever they want (with some limitations), and they are entitled to organize a system for distributing what they print.

 

A publisher (for example, of a newspaper) is not obliged to publish content provided by citizens: it publishes what it wants (with some exceptions, for example, rectification of incorrect or libelous material).

 

The regime is a bit different for radio and television broadcasters: since radio spectrum is scarce, its use is regulated, and broadcasters may be obligated to carry certain content; for example, they may have to accord equal time to all political candidates during election periods.

 

Are social media modern broadcasting systems? Should some broadcasting system rules apply to social media? As the prescient scholar Eszter Hargittai put the matter back in 1998:

 

“Although the Web is heralded as a truly democratic medium, one wonders to what degree such corporate powers influence the nature of its equality. Yes, communication media such as radio, television, and print media also have gatekeepers that screen what information is available and noticed by the public. The question is, can the internet continue to live up to expectations considering its democratic and egalitarian treatment of information with an increase in gatekeeper activity on the network?”

 

“A sudden unprecedented negative event may push the public and the government toward a situation where quick regulatory solutions are urged without much deliberation of their consequences. The medium could see a tightening up through government regulation. However, a sudden disaster is not necessary to tighten up certain aspects of the medium. The market may do that through the emergence of a few censoring software giants. This would be nothing new in the media and technology industries. Media giants already exist, and so do near-monopolies in other realms of related technologies. The question remains: how much effect will this have on the Internet's current form? And how will users have shaped the Internet by then to possibly retain some of its deemed-to-be-essential characteristics?”

 

As noted above, it is generally accepted that offline laws apply equally online. As a corollary, national laws apply, or at least should apply, even to multinational internet platforms (as was found as early as 2000). Presently, social media providers can determine the geographic location of a user with reasonable reliability. Should they have acceptable use policies that are tailored to the user’s national jurisdiction?
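To make the question concrete, the following minimal sketch shows how a platform might select jurisdiction-tailored acceptable-use rules once it has mapped a user to a country. The country codes, rule names, and the assumed geo-IP mapping are hypothetical, chosen only to illustrate the mechanism.

```python
# Purely illustrative country codes and rule names; the mapping from a user to a
# country (e.g. via a geo-IP lookup) is assumed to exist and is not shown here.

DEFAULT_POLICY = {"terrorist_content", "child_abuse_material"}

NATIONAL_POLICIES = {
    "DE": DEFAULT_POLICY | {"nazi_symbols"},       # hypothetical extra national rule
    "FR": DEFAULT_POLICY | {"holocaust_denial"},   # hypothetical extra national rule
    "US": DEFAULT_POLICY,
}

def policy_for(country_code: str) -> set:
    """Return the acceptable-use rules that apply in the user's jurisdiction."""
    return NATIONAL_POLICIES.get(country_code, DEFAULT_POLICY)

def is_allowed(post_categories: set, country_code: str) -> bool:
    """Allow the post only if none of its categories are restricted locally."""
    return post_categories.isdisjoint(policy_for(country_code))

# The same post may then be acceptable in one jurisdiction and not in another.
print(is_allowed({"nazi_symbols"}, "DE"))  # False under this illustrative rule set
print(is_allowed({"nazi_symbols"}, "US"))  # True under this illustrative rule set
```

Such an approach would, of course, raise its own questions, since it would make a platform’s rules only as legitimate as the national laws they mirror.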

 

The time has come to debate ways to abolish CDA 230 and to replace it with a regime that will give us what we want: an internet that favors diverse and open communications and is not controlled by a few companies whose main interest is to increase the personal wealth of the few men who own them.

 

July 2, 2020

 

 

- Richard Hill is involved in discussions on internet governance both in Switzerland and at the international level. Previously, Richard was the Secretary for the ITU-T Study Groups dealing with numbering and tariffing issues, network operations, and economic and policy issues. Richard holds a Ph.D. in Statistics from Harvard University and a B.S. in Mathematics from M.I.T.

 

 

https://botpopuli.net/trump-and-cda-section-230-the-end-of-an-internet-exception

 
