News

How Facebook is Reconfiguring Freedom of Speech in Situations of Mass Atrocity: Lessons from Myanmar and the Philippines

Jenny Domino is a Satter Human Rights Fellow (funded through the Human Rights Program) working with ARTICLE 19 to counter hate speech. In an article for OpinioJuris, she argues that Facebook’s secrecy around its community standards and its intermediary status as a hosting “platform” detract from international law’s ability to hold the corporation accountable for its role in encouraging harmful rhetoric that fuels mass atrocity. Find the full text of the article below and at OpinioJuris.org. The views expressed in this article are the author’s own and not the views of ARTICLE 19.

Facebook has been described as a service to democracy. This perception arguably peaked during the Arab Spring uprisings, touted as Facebook’s crowning glory in its mission to connect people. The past two years have effectively undermined that rhetoric, as serious lapses such as the Cambridge Analytica scandal and Russian interference in the 2016 US presidential election have shown.

In Southeast Asia, we don’t need to look far to see how Facebook has been used to oppress. The OHCHR Fact-Finding Mission on Myanmar recently concluded that Facebook was instrumental in the dissemination of hate speech against the Rohingya. In the Philippines, disinformation on Facebook has enabled the triumph and reign of Duterte, whose war on drugs has reportedly claimed thousands of civilian lives. Notably, both situations are under preliminary examination at the International Criminal Court. If Facebook has failed in a mature democracy such as the United States, it has all the more failed in struggling democracies. Rather than bringing the world closer together, Facebook has facilitated the spread of divisive rhetoric even within borders.

Selective transparency

This year, Facebook finally published its Community Standards in an effort to be more transparent. It has also started to publish a report on its Community Standards enforcement. These were announced during the first Asia-Pacific Facebook Community Standards Forum held last month in Singapore, which I attended.

Conspicuously, relevant information on how these rules operate remains shrouded in secrecy. We have the applicable rules and the results of their implementation, but we are left in the dark as to what happens in between. Facebook has disclosed the type of people it hires as content moderators (ranging from counter-terrorism experts to former law enforcement officers) and the fact that it uses both human labor and algorithms to review content, but when pressed for details on the process, it invokes its content reviewers’ safety in refusing to disclose any information on this aspect.

This seems odd. If Facebook has chosen to disclose the type of people it is hiring, why can’t it disclose its procedure for content moderation, which is arguably less likely to reveal the identity of its content reviewers and thus expose them to physical risk?

This is crucial in monitoring Facebook’s efforts to improve its operations in situations of mass atrocity. Information on procedure would help civil society monitor social media companies’ timely detection and moderation of hate speech posted on their platforms, which could prevent further escalation of violence or abuse towards a victim group. This, in turn, could strengthen the normative force of the Genocide Convention’s preventive provisions, including the crime of direct and public incitement to commit genocide.

Information on procedure can also shed light on how a certain situation will be prioritized over others. During the forum, Facebook admitted that the company prioritizes certain content over others, but there is no information on how these priorities are decided. The situation in Myanmar is unique in that the UN itself made a finding on Facebook’s enabling role; what metric does Facebook plan to use moving forward? This will be crucial in the work of the ICC, whose recent statements on potential situation countries include references to incitement to violence, where the battle is increasingly being fought on Facebook.

The kind and depth of information that Facebook chooses to disclose regarding its Community Standards reflect the ease with which corporations can evade accountability through the use of the “platform” nomenclature. Tarleton Gillespie has written about how intermediaries manipulate the ambivalent, multi-layered meanings of the term “platform” to serve different constituencies – ordinary citizens, businesses, policymakers, and so forth. Though platforms seem to only “facilitate” expression, there is nothing neutral about the curating, filtering, and “orchestrating” of posted content that they take on. As mediators of content, platforms also crucially mediate relations among users, between users and the public, between users and sellers, and even between users and governments.

Facebook’s earlier reference to itself as a neutral platform could explain its previous indifference to the impact of its operations in Southeast Asia, but it also continues to frame corporate policy on public engagement. Because the term “platform” does not carry with it a clear, corresponding set of obligations for accountability on Community Standards enforcement, Facebook can conveniently choose what to disclose depending on what its interests dictate at a given point in time – whether improving public image, expanding operations, or fulfilling the demands of the UN.

This is worrying. While our demand for transparency from our public officers is guaranteed by law, our expectation of Facebook is not. We can only know what Facebook lets us know, despite the impact of its Community Standards enforcement on situations of mass atrocity.

Community standards in lieu of human rights standards

The Community Standards are Facebook’s own version of “human rights” in its online marketplace. These standards regulate Facebook users’ behavior in much the same way that the international human rights regime dictates what we can or cannot do as members of a socio-political space.

Some of the rules in the Community Standards are either vague or over-inclusive under human rights standards. Examples include the definitions of “terrorism,” “bullying,” and “harassment,” and of what constitutes terrorist or hate organizations. However, the Community Standards remain “legal”; that is, it is within Facebook’s right to impose them as a private company regulating its “private” community. As a product policy, the Community Standards serve Facebook’s own agenda to protect users. This, in effect, superimposes Facebook’s standards on human rights standards.

The conflation of both standards proves tricky when activities of the “online community” spill out of the confines of the platform and into the real world. In Myanmar and the Philippines, Facebook’s pervasive presence is compounded not only by digital illiteracy and democratic struggle; as the ICC’s preliminary examination activities illustrate, public discourse there is also set against the backdrop of the most serious crimes of international concern. This means that the landscape that enables public discourse to happen, famously embodied by the metaphor of the “marketplace of ideas” in the United States, may even be compromised.

The problem with state regulation

Content-based regulation has been the knee-jerk response of Southeast Asian states. In the Philippines, legislators have proposed a law that would regulate fake news. Meanwhile, draft versions of a hate speech bill are underway in Myanmar.

These responses prove inadequate when the regulator is also a speaker and has much to gain from Facebook itself. During the Holocaust and the Rwandan genocide, dissent was silenced through censorship and state monopoly of media access. In contrast, today, the state is only one of many actors that benefit from Facebook’s services. In Myanmar and the Philippines, Facebook is used by the state to disseminate official information, ranging from propaganda to holiday announcements. Facebook itself took down the official pages of senior Tatmadaw officials shortly after the OHCHR released its findings. This reality has caused a shift in the strategy to suppress dissent. Silencing has now given way to loud, ubiquitous noise, thanks to the ephemeral, hyperactive “newsfeed”.

As the state itself benefits from social media platforms, it will likely not join a treaty – such as the draft treaty on business and human rights – that would undermine its own interests through international regulation. At the same time, when the Facebook user (i.e., the state) becomes the regulator of content, a conflict of interest arises and may even compromise the integrity of the intermediary.

Human rights in the Facebook era

The Facebook problem is the latest manifestation of an old debate – the dominant focus on human rights as claims against the state. Global governance mechanisms have forced corporations to be accountable on issues of slavery, labor relations, and climate change. It is time for the international community to demand the same, with more rigor, from social media companies whose business model has facilitated the incitement of serious international crimes. Otherwise, Facebook will continue to hold hostage our ability to sell our ideas in the marketplace according to the profit-driven demands of its economic monopoly, where the currency may be human life.