
The New Governors

Wow, it’s been a while, huh? I started this blog ages ago, but when the pandemic hit I just… stopped. And that’s okay; holding myself to an arbitrary posting schedule isn’t great for my brain anyway.

So anyway! I recently read a legal paper called The New Governors: The People, Rules, and Processes Governing Online Speech (free PDF!) by Kate Klonick, published in the Harvard Law Review (vol. 131, p. 1598).

It’s a fascinating look at social media moderation through the lens of legal analysis, with lots of information and history on how the current status quo of online content moderation came to be. This hits close to home, as I’ve been watching trends in online censorship for probably a decade now.

Part One: Legal Stuff

So, historically speaking, most of the platforms we’ve dubbed social media originated in the United States. That means most of them fall under a 1996 law called the Communications Decency Act (CDA), specifically Section 230, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. This essentially shields services from liability for UGC (user-generated content). That’s pretty cool, as getting sued for some fashy shit you didn’t do is not fun.

It also provides a Good Samaritan clause (thanks, Christian norms) that allows platforms to exercise judgment in removing offensive material. This is really important because, despite billionaire statements to the contrary, running a full First Amendment analysis of every bit of hate speech and spam on a platform simply isn’t feasible.

Now, one might think that private entities have zero obligations under the First Amendment, but that’s not technically true. Marsh v. Alabama was a Supreme Court decision about a privately owned company town ejecting a Jehovah’s Witness for distributing religious literature. The court ruled that if an organization fulfills the role of government, it takes on at least some First Amendment obligations.

Does this apply to social media? Well… probably not. In a case about a union protest at a shopping mall, the court ruled that just because a space is open to the public does not mean it is inherently functioning as the government. That said, that decision happened decades ago, and social media today increasingly fulfills the role of a sort of government. Packingham v. North Carolina is a case where North Carolina tried to bar registered sex offenders from social media, and the Supreme Court ruled that “[barring] access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights.”

Wild, huh? This essentially means the US government cannot categorically bar people from accessing privately owned social media sites. The companies themselves also cannot be held liable for the actions of their users, and are encouraged to come up with their own moderation strategies.

So… how do the norms of social media moderation come about?

Part Two: Cultural Norms

Let’s be real: most of these companies arose in California, and have ideas around free speech baked into their conception. These are USA-centric norms, based on European Enlightenment ideals with a healthy dose of capitalism heaped on top:

Facebook’s mission is to “[g]ive people the power to build community and bring the world closer together.” But even this, Willner acknowledged, is “not a cultural-neutral mission. . . . The idea that the world should be more open and connected is not something that, for example, North Korea agrees with.”

pp. 1621–22

In the beginning, a lot of this stuff wasn’t really thought about. YouTube, back when it first started to take off, didn’t really have policies per se. A lot of takedowns went by gut instinct, and the inclination from the start was that, by default, things should stay up. This ideal is especially noticeable on Twitter, which remains one of the few platforms that still allows nudity and sexual material.

Another early development was the idea that these platforms would protect their users’ speech from governmental policing. That does not mean they won’t give your info to the cops (they do), but they don’t like taking stuff down without a court order compelling them to.

This led to conflict with other cultures, since the idea that governments shouldn’t dictate what people yell about on the Internet isn’t a universal one. A good example the paper gives is Thailand, where disrespecting the monarchy is illegal and punishable by up to 15 years in prison. When the Thai government blocked YouTube, company representatives traveled to Thailand to try to resolve the issue:

Every Monday literally eighty-five percent of the people show up to work in a gold or yellow shirt and dress and there’s a historical reason for it: the only source of stability in this country is this King . . . They absolutely revere their King. . . . Someone at the U.S. Embassy described him as a “blend of George Washington, Jesus, and Elvis.” Some people . . . tears came to their eyes as they talked about the insults to the King and how much it offended them. That’s the part that set me back. Who am I, a U.S. attorney sitting in California to tell them: “No, we’re not taking that down. You’re going to have to live with that.”

Interview with Nicole Wong, p. 1623

This led to the development of geoblocking, where content that would infringe a given country’s laws is blocked in that country but left up everywhere else. It also led to standoffs, like Turkey banning YouTube in 2007 after the company refused to remove content the Turkish government deemed offensive.
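To make the mechanics concrete, here’s a minimal sketch of what per-country blocking amounts to. The video IDs, country codes, and function name are hypothetical, not any platform’s real implementation; actual systems resolve a viewer’s country from things like GeoIP databases and carry far richer policy metadata.

```python
# Minimal geoblocking sketch: hypothetical IDs and names, not a real platform's
# system. The idea is simply that a restriction is keyed to (content, country)
# rather than applied worldwide.

BLOCKED_REGIONS = {
    "video-123": {"TH"},        # e.g. blocked in Thailand, visible elsewhere
    "video-456": {"TR", "DE"},  # blocked in Turkey and Germany
}

def is_viewable(video_id: str, viewer_country: str) -> bool:
    """Return False only when this video is blocked in the viewer's country."""
    return viewer_country not in BLOCKED_REGIONS.get(video_id, set())

assert is_viewable("video-123", "US")      # same video stays up in the US
assert not is_viewable("video-123", "TH")  # but is hidden from Thai viewers
```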

So. These companies were not really held liable for content, and generally pushed back when governments told them to take stuff down. Why is anything moderated at all, then? What motivation do companies even have to censor things? Why the fuck does shit get zucced all the time?!

Answer: money! The paper points out that the incentive for these platforms is to make the space appealing to as many people as possible. If a platform were overwhelmed by spam or nazi propaganda or whatever, a lot of people would not like that. They would leave, and with them the sweet ad dollars.

Because of that, the incentive is for companies to allow a wide range of discourse as long as it doesn’t stray outside cultural norms. This idea is known as the Overton window: the range of discourse the wider culture finds acceptable.

Another motivation is the tone the company decides it wants for its platform. Twitter, as mentioned, tends to err on the side of its conception of free speech. Facebook focuses on the nebulous ideal of “family friendly”, which somehow often ends up toward the racist-uncle side of the family, but I digress.

These ideals combine to form what the paper calls Standards vs Rules.

Part Three: Rules & Standards

As stated earlier, a lot of these platforms started out with nebulous ideas shaped by US-centric ideals. Moderation went by gut, which works okay when you’re small but falls apart quickly once things need to scale. What counts as offensive varies a lot depending on who you ask, which doesn’t make for a great moderation policy.

In practice, this need for specificity lends itself to a set of public-facing generic ideals, or standards: don’t be a jerk, don’t be racist, don’t spam people.

Internally, however, policies tend to be a lot more specific and rapidly changing. What does being a jerk entail? It could be, for instance, “don’t threaten people or groups with violence”. But that sort of blanket idea quickly leads to platforms persecuting marginalized communities who are (understandably) lashing out at their oppressors. Okay then, maybe only specific threats of violence? How specific is specific, though? And what even counts as violence anyway? Bodily harm? Doxxing? Slurs?

In the early drafts we had a lot of policies that were like: “Take down all the bad things. Take down things that are mean, or racist, or bullying.” Those are all important concepts, but they’re value judgments. You have to be more granular and less abstract than that. Because if you say to forty college students [content moderators], “delete all racist speech,” they are not going to agree with each other about what’s racist.

Dave Willner, former head of policy at Facebook (p. 1633)

These things are determined internally by a set of rules, which are derived from the aforementioned standards but spelled out as a very specific set of factors. This matters because you need to train moderators in these rules rather than instructing them to “take down bad stuff”, and it has proven effective at reducing human bias. It also gives platforms a way to write code that performs moderation automatically, which is important when you’re dealing with millions of posts a day.
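As a toy illustration of that standards-to-rules translation (my own invention, not Facebook’s confidential rulebook), a vague standard like “no credible threats of violence” gets broken down into concrete factors that a contractor, or a program, can check the same way every time:

```python
# Toy illustration of turning a standard into a rule; the factors and phrase
# list are made up, not any platform's actual policy.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    names_specific_target: bool  # a named person or group, not vague venting
    states_time_or_place: bool   # "at the rally tomorrow" vs. blowing off steam

VIOLENT_PHRASES = ("i will kill", "i will hurt", "we should attack")

def violates_threat_rule(post: Post) -> bool:
    """Rule: violent language AND a specific target AND operational detail."""
    has_violent_language = any(p in post.text.lower() for p in VIOLENT_PHRASES)
    return (has_violent_language
            and post.names_specific_target
            and post.states_time_or_place)
```

Notice what the rule buys and what it costs: two moderators (or a script) will now agree on the outcome, but the context a human would weigh, like sarcasm or who holds power over whom, disappears once the judgment is reduced to checkable boxes.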

These moderation standards and rules end up reinforcing dominant cultural norms, which has the side effect of perpetuating marginalization. Think about it: why are people allowed to be racist or transphobic or anti-Semitic on these platforms, even subtly? Because USA-centric society tends to be tolerant of those views for the most part. Yes, they are frowned upon, but how many times have we heard “oh well, it’s only politics”?

The paper goes into how Facebook moderates content that people flag. I won’t repeat it all here, but the overall idea is that flagged content goes to specifically trained first-level moderators. These people tend to work in conditions not unlike a call center: poorly paid contractors, strictly monitored, and given little discretion or trust. Things can then escalate to second-level moderators, who resolve thornier issues within existing policies. Sometimes, when something is really uncertain, it gets escalated to the third and final level: these folks tend to be direct employees, often with backgrounds in law, and can set policy for the platform as a whole. If you appeal something, it tends to go to the level-two folks and sometimes higher.
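Sketched roughly in code (the tier names and routing are my shorthand for the flow the paper describes, not Facebook’s internal tooling), the escalation path looks something like this:

```python
# Rough sketch of the three-tier escalation flow; names and structure are my
# shorthand, not Facebook's actual systems.

from enum import Enum

class Decision(Enum):
    KEEP = "keep"
    REMOVE = "remove"
    ESCALATE = "escalate"

def route_flagged_content(report, tier1, tier2, tier3) -> Decision:
    """Walk a flagged post up the tiers until someone makes a final call."""
    decision = tier1(report)          # contractors applying the rulebook
    if decision is Decision.ESCALATE:
        decision = tier2(report)      # resolves edge cases within existing policy
    if decision is Decision.ESCALATE:
        decision = tier3(report)      # employees who can change the policy itself
    return decision
```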

Now, this pattern doesn’t necessarily hold for every platform out there, but it’s a good picture of how moderation functions at a global scale. Your needs as an individual are deeply unimportant to the platform, and the nuances of whether content actually deserves to be taken down get lost. As with much of our society, things tend to change only when the government or the media takes up your cause; individuals like us have little effect on this monolithic system.

Is this system a good one? Not really, but we don’t have many alternatives. Corporations like to operate in secrecy around their policies, and the idea that your own content belongs to you isn’t a common one in a society that hands ownership over to private interests the way ours does.

I feel that radical alternatives exist out there: how Mastodon, or the Fediverse as a whole, handles moderation is really interesting, for instance. But until we regain power from the oligopolies who rule our digital lives, we can’t really escape this monolithic model. Legislation may help, but it’s far more likely to do harm given the intricacies at play. Law is in some ways even more rigid than code, after all.

Until that happens, take backups, and save the nuanced stuff for platforms you have more control over. Your content can be deleted from the universe at any time, as many sex workers can tell you. From an anarchist perspective, try to create communities and norms that are accepting of you; bubbles are not that bad.
