In a digitalized and increasingly polarised world, questions about how to protect freedom of expression online while curbing hate speech and online abuse are at the centre of discussions about human rights protection.
Freedom of expression is a cornerstone human right that allows individuals to seek, receive and impart information and to participate in civic space. It is necessary for democratic societies, as it ensures accountability by enabling people to freely debate and raise concerns with governments. This right is protected in Article 19 of the Universal Declaration of Human Rights (UDHR) and given legal force through all major international and regional human rights treaties.
The scope of the right to freedom of expression is broad: it also includes expression that others may find deeply offensive. At the same time, the right to freedom of expression is not absolute, and states may, under certain exceptional circumstances, restrict it. However, all restrictions must meet the so-called three-part test. This means that restrictions must: (1) be provided by law; (2) pursue a legitimate aim, as exhaustively listed under Article 19(3) of the International Covenant on Civil and Political Rights (ICCPR); and (3) be necessary and proportionate to the aim pursued. Additionally, states must restrict speech that amounts to advocacy of hatred constituting incitement to discrimination, hostility or violence on the basis of protected characteristics.
Applying this test in the online setting is an increasingly difficult task.
Digital technologies and the use of social media platforms (SMPs) have had an overwhelmingly positive effect on freedom of expression and information, facilitating public debate, enabling the sharing of information and strengthening civic space. In countries where traditional media is restricted, whether by governments or by corporate interests, social media provides a unique space for expression. However, hate speech and online abuse on SMPs have become a pervasive problem, reflective of underlying societal inequalities and divisions.
At its most extreme, online hate speech and incitement to violence have played a part in recent atrocities, as explicitly noted by UN officials investigating atrocities against the Rohingya ethnic group in Myanmar. Similarly, the live streaming of extreme violence and terrorism on social media was a prominent aspect of the Christchurch mosque attacks in New Zealand in March 2019. These and other events have resulted in growing pressure on social media companies to control the content posted and shared by their users. Online censorship is therefore increasingly privatised, and SMPs have de facto become adjudicators of what is acceptable expression online. This raises serious questions for the protection of freedom of expression online.
First, SMPs should not be in charge of regulating expression online. It is also problematic to make them liable for content they have not been involved in modifying.
Second, the content moderation practices of SMPs – carried out under their own community guidelines or terms of service – are also problematic. Often, these guidelines are not sufficiently clear and accessible for users to know what is and is not permitted on the platform. They are also implemented in an inconsistent and non-transparent manner, and there is only very limited redress for users whose content is removed.
Tackling hate speech online
Addressing hate speech and content moderation online is complex. In general, and consistent with the UN Guiding Principles on Business and Human Rights, dominant SMPs have a responsibility to respect international standards on freedom of expression and privacy, to conduct human rights impact assessments of their work and to provide redress to their users in cases of violations. They must ensure that their content moderation policies are responsive to the needs of victims of hate by adopting a radically new approach to transparency and establishing real accountability mechanisms.
There are a number of initiatives to improve the content moderation practices of SMPs. One is a proposal to create so-called Social Media Councils, a new, broad multi-stakeholder accountability mechanism for content moderation. There are also company-based initiatives exploring internal accountability, such as the Facebook Oversight Board. Other current and emerging issue-based initiatives include the Christchurch Call to eliminate violent and extremist content online and the Global Internet Forum to Counter Terrorism (GIFCT). However, these initiatives need to be transparent and include diverse groups of stakeholders, including civil society, in order to develop sustainable solutions to the challenges of the online sphere whilst respecting international human rights.
Beyond self-regulatory models for SMPs, several states have changed, or are proposing to change, existing immunity-from-liability regimes for SMPs (for example, Germany and France). The risk is a proliferation of inadequate, vague and overbroad laws that seek to address hate speech online through the regulation of companies, creating a chilling effect on freedom of expression while failing to address the underlying causes of hate speech.
Importantly, in order to respond to hate speech online, states need to look beyond online content and focus also on addressing the root causes of discrimination, taking positive steps to tackle the societal problems of which online hatred and abuse are symptomatic. Hatred is built on fear and ignorance; hence, greater tolerance and effective counter-speech are the best responses to hate speech. Tackling hate speech requires support for, and access to redress by, the individuals and groups on the receiving end of hate speech. Governments need to actively promote tolerance and equality through government institutions, such as equality commissions, in collaboration with other actors, including civil society organisations, the media, faith leaders and other societal actors.
Existing international human rights standards, including the Rabat Plan of Action, provide guidance on how to address hate speech in a manner compliant with freedom of expression. This includes calls on political and religious leaders, public officials and the media not only to refrain from hate speech, but to actively reject such expression and speak out against it. Political leaders need to recognise and act on their responsibilities, as stated in the recent call for action from UN human rights experts. In addition, the UN Secretary-General’s Strategy and Plan of Action on Hate Speech, launched earlier this year, makes clear that effective efforts to tackle hate speech must both seek to address the root causes driving hate and meet the needs of victims of hate. Tackling hate speech requires the protection of freedom of expression, as only through the exercise of this right can we respond to, and support, those most impacted by hate speech and violence.
- Camden Principles on Freedom of Expression and Equality (2009)
- Hate Speech Explained: A Toolkit (2015)
- Manila Principles on Intermediary Liability (2015)
- Tackling Hate: Action on UN standards to promote inclusion, diversity and pluralism – Protecting free speech and freedom of religion or belief for all (2018)
- Side-stepping rights: Regulating Speech by Contract (2018)
- Social Media Council (2018)
- Facebook Community Standards: Analysis against international standards on freedom of expression (2018)
- Video: Action on UN standards to tackle hate (2019)
 See Article 19 of the International Covenant on Civil and Political Rights (ICCPR), Article 9 of the African (Banjul) Charter on Human and Peoples’ Rights (ACHPR); Article 13 of the American Convention on Human Rights (AmCHR), and Article 10 of the European Convention on Human Rights (ECHR).
 See ARTICLE 19, Hate Speech Explained: A Toolkit, 2015, for further information on typologies of hate speech.