It’s 1996—and you’re waiting for Netscape Navigator to load so you can read more about this newfangled MP3 format. You rummage around your desk looking for that Windows 95 disk 13 again. You’re a pioneer in the birth of a new era in human communication—and you barely even know it.
Meanwhile, deep in the bowels of the US Capitol building, something else was brewing that would shape the modern world in ways we couldn’t begin to imagine. Section 230 of the Communications Decency Act changed the way social platforms were held accountable for their content. Read on and learn how Section 230 came to be and what it means for today’s social media titans—in the first of our new blog series.
What is Section 230?
Welcome to the Internet’s formative years. It’s 1996, and less than 1% of the world’s population are ‘Internet users’ (according to Global Village)—around 36 million people, compared to 4.4 billion today. Back then, we ‘early adopters’ were running around with 56K modems, excitedly listening out for that familiar series of clicks and pings that connected us to Netscape and AOL.
At this time there was no Facebook. There wasn’t even Friends Reunited, let alone MySpace! The Internet was just about to happen, the dotcom bubble was a mere bauble, and we stood at the edge of a technological big bang.
US lawmakers would be the ones to give this newfangled technology an opportunity to grow. They saw the potential for the Internet to become a force for good, revolutionising access to information and education. They didn’t want fledgling Internet platforms to be burdened with frivolous lawsuits if they were treated under the law as ‘publishers’ of content. This is where Section 230 comes in.
To know why they felt this way, we have to look deeper into the media culture of 1996.
Setting the Scene in 1996
1996 was the height of the tabloid news industry. Practically every week a fresh libel case was brought against the News of the World by some member of the UK Royal Family. Sordid details of celebrity life were everywhere, libel actions and out-of-court settlements were commonplace, and the behaviour of the ‘paparazzi’ was itself daily news. This was never clearer than in the tragic circumstances of the death of Princess Diana in 1997. However, the purveyors of this content (the newsagents and newsstands) were never sued nor called into court.
In the world of print media, it’s easy to see the line of demarcation between the published newspaper and the cornershop. The newsagent is a local distributor, with no influence over what’s actually published. Without editorial control, it would be unreasonable for the distributor to be held responsible for the acts of the publisher and impossible for the distributor to moderate all newspapers in their shop. This line of demarcation between publisher and distributor is much more confusing online—there is no obvious physical separation between the website you’re viewing and the content it’s carrying.
Prior to 1996, there was a flurry of bulletin boards and online forums. The public had found its voice, and we were beginning to see the potential of the Internet as a platform for communication. However, there was a great deal of confusion about the legal standing of public posts in online forums—and two important cases emerged to highlight the dilemma. This was the question of ‘Intermediary Liability’ that Section 230 was intended to address.
Contrasting Cases: CompuServe & Prodigy
The need for a common standard in treatment was illustrated by two contrasting legal cases that embroiled two of the early mass Internet service providers, CompuServe and Prodigy. Both of these organisations were early online platforms providing what was then referred to as ‘home information services’. They offered a combination of public information streams, news content and ‘forumesque’ functionality for what were considered at the time ‘mass users’—essentially precursors to what we recognise as social media today. A few years apart, each found itself in court as a defendant in a lawsuit with similar characteristics but a contrasting outcome, as a result of its content moderation policies.
CompuServe’s approach to content was laissez-faire, providing no moderation whatsoever. Prodigy, on the other hand, saw itself as a ‘family-orientated’ service and naturally moderated content in the interest of protecting minors. Both organisations were sued over content posted by third parties. In CompuServe’s case (Cubby v. CompuServe, 1991), the judge ruled that it was not liable for content on its site. By contrast, Prodigy (in Stratton Oakmont v. Prodigy, 1995), which had moderated content in the interest of providing a family-friendly environment, was held to have taken ‘editorial control’ by doing so and was considered a ‘de facto publisher’. Its commitment to user safety had made it liable.
Before Section 230, platform operators’ best approach to avoiding litigation was to decline to moderate anything at all. CompuServe was on the right side of the court ruling, but its platform was being referred to as ‘The Wild West’.
The problem was that the law was encouraging platform providers’ indifference to the nature of third-party content, and had created a surge of undesirable material. Could the Internet still become a force for good in society, or was it now impossible to create safe places for children and education?
The Compromise Agreement
At the point of drafting in 1996, US lawmakers were searching for solutions to two problems. On the one hand, they wanted to shelter online platforms from the costs of legal defence; on the other, they needed the Internet to become a safe place for public communication and education—which would require platforms to moderate content.
So, Section 230 is essentially a compromise between US legislators and the online platforms. In exchange for legal protection, operators were expected to moderate content in good faith for the protection of their users and to tackle illegal speech and criminal activity.
Section 230 and Social Media
Since 1996, the legal shelter of Section 230 has provided online platforms with a fertile environment for growth, and this is evident in their almost preposterous scale. Facebook, essentially a website with no product of its own other than our desire to connect with each other, now has a market capitalisation of $1,004Bn—which would place it in the top 10% of countries if compared to GDP. Facebook is hardly a startup in need of protection anymore. Twitter, although with a smaller market cap, is still bigger than 50% of the world’s countries by GDP. This is to say nothing of the now-evident power of these platforms to control access to information and to influence public opinion through the practice of ‘moderation’.
In conclusion, Section 230 appears to have achieved its objective—but at what cost? Is Facebook’s market cap a true reflection of its real value? Does a transport mechanism for ‘other people’s content’ really embody greater value than the national product of entire nation states? Has the promise of the Internet to democratise information been ‘realised’ or ‘monopolised’? Has Section 230 really stimulated the growth of a wholesome and innovative online environment—or has it smothered innovation and created new monsters on a scale never seen before, capable of distorting everything from global capital markets to public opinion and everything in between?
What will happen if Section 230 is reformed? Is this an opportunity or a threat? These are the questions we’ll be investigating over the coming weeks, so stay tuned.