ONLINE HATE SPEECH AND INTERMEDIARY LIABILITY IN THE AGE OF ALGORITHMIC MODERATION
in cotutela con University of Luxembourg - Université du Luxembourg
DOTTORATO DI RICERCA IN LAW, SCIENCE AND TECHNOLOGY
Ciclo 36
Settore Concorsuale: 12/C1 - DIRITTO COSTITUZIONALE
Settore Scientifico Disciplinare: IUS/08 - DIRITTO COSTITUZIONALE
ONLINE HATE SPEECH AND INTERMEDIARY LIABILITY IN THE AGE OF ALGORITHMIC MODERATION
Presentata da: Xxxxxx Xxxx
Coordinatore Dottorato
Xxxxxx Xxxxxxxxx
Supervisore
Xxxxxx Xxxxxxxxx
Supervisore
Xxxx Xxxx
Co-supervisore
Xxxxxxxx Xxxxxx
Esame finale anno 2024
PhD-FDEF-x
The Faculty of Law, Economics and Finance
Department of Legal Studies
DISSERTATION
Defence held on 04/07/2024 in Bologna to obtain the degree of
DOCTEUR DE L’UNIVERSITÉ DU LUXEMBOURG EN DROIT
and DOTTORE DI RICERCA IN LAW, SCIENCE AND
TECHNOLOGY
by
Xxxxxx XXXX
Born on 15 November 1995 in Mondovì (Italy)
Online Hate Speech and Intermediary Liability in the Age of Algorithmic Moderation
Dissertation defence committee
Prof. Dr. Xxxx X. Xxxx, dissertation supervisor
Full Professor, Université du Luxembourg
Prof. Dr. Xxxxxx Xxxxxxxxx, co-supervisor
Full Professor, Università Commerciale “L. Bocconi”
Prof. Dr. Xxxxxxxx Xxxxxxxx, chair
Full Professor, Università degli Studi di Milano
Prof. Dr. Xxxxxx Xxxxxxxxxxxx, vice-chair
Full Professor, Università degli Studi di Bari “A. Moro”
Prof. Dr. Xxxxxx Xxxx Xxxxxxxx
Full Professor, Università degli Studi di Milano Bicocca
Prof. Xxxxx Xxxxxx Xxxxxxx
Associate Professor, Sapienza Università di Roma
Online Hate Speech and Intermediary Liability in the Age of Algorithmic Moderation
Table of Contents
1.1. Objectives of the research 1
1.1.1. Background of the research: old and new challenges in the fight against hate speech 1
1.1.2. Objectives and research questions 4
1.2.1. Material scope of the research 6
1.2.2. Territorial scope of analysis 7
1.2.3. Aspects of interdisciplinarity 8
1.3.1. Chapter 2: Setting the framework on hate speech governance 8
1.3.2. Chapter 3: Intermediary liability and hate speech in Europe 9
1.3.3. Chapter 4: Comparative perspectives 10
1.3.4. Chapter 5: Platform standards and automated moderation 11
2. Hate Speech and Substantive Equality: A Theoretical Framework 13
2.2. The concept of hate speech in the global and European context 14
2.2.1. Origins of the term and constitutional approach to hate speech in the United States 15
2.2.2. Lessons from international human rights law 18
2.2.3. Hate speech in Europe 23
2.2.3.1. The Council of Europe 23
2.2.3.2. The European Union 28
2.3. The transatlantic debate on hate speech regulation 33
2.3.1. The liberal approach: the US model of the free marketplace of ideas 33
2.3.2. The militant approach: the case of Europe 36
2.4. Hate speech and the Internet 39
2.4.1. Free speech and information in the digital age 39
2.4.2. Main characters of online hate speech 43
2.4.2.4. Cross-jurisdictional nature of online content 47
2.4.3. The role of algorithmic content moderation and curation 48
2.5. Anti-discrimination perspectives on hate speech: a substantive equality approach 51
2.5.1. Hate speech as domination: some takeaways from speech act theory 51
2.5.2. Substantive equality as a lodestar for hate speech governance 54
2.5.2.1. The concept of substantive equality 54
2.5.2.2. Substantive equality and hate speech in the European multi-level human rights protection system 57
2.5.3. Hate speech governance and substantive equality in the world of bits 61
3. Hate Speech and Intermediary Liability: The European Framework 64
3.2. Internet intermediaries and the triangular model of online speech regulation 66
3.2.1. Internet intermediaries 66
3.2.2. New-school speech regulation and constitutional challenges 68
3.3. Intermediary liability and hate speech: case law from the ECtHR 70
3.3.1. The case of Delfi AS v Estonia 70
3.3.2.1. MTE and Xxxxx.xx v Hungary 72
3.3.2.2. Subsequent developments 74
3.4. Intermediary liability and hate speech: the framework of the EU 80
3.4.1. Intermediary (non)liability at the turn of the millennium: the e-Commerce Directive 80
3.4.2. Judicial activism of the Luxembourg Court 82
3.4.3. A new phase for the EU 87
3.4.3.1. The “new season” of content moderation regulation 87
3.4.3.2. The new sectoral framework on illegal content 90
3.4.3.3. The Code of Conduct on Illegal Hate Speech 95
3.5. The Digital Services Act 100
3.5.1. The Digital Services Act package 100
3.5.2. The rules on the liability of providers of intermediary services 102
3.5.3. The new due diligence obligations for a transparent and safe online environment 103
3.5.3.1. Provisions applicable to all providers of intermediary services 106
3.5.3.2. Provisions applicable to providers of hosting services 108
3.5.3.3. Provisions applicable to providers of online platforms 110
3.5.3.4. Obligations for providers of very large online platforms and of very large online search engines to manage systemic risks 113
3.5.3.5. Standards, codes of conduct, and crisis protocols 115
3.5.4. DSA and hate speech moderation 118
3.5.4.1. Applicability of the DSA to hate speech moderation 118
3.5.4.2. Hate speech moderation and equality in the DSA 120
3.6. Conclusions 122
4. Hate Speech and Intermediary Liability: A Comparative Overview 124
4.1. Introduction 124
4.2. Domestic legislation of EU Member States 126
4.2.1. Germany and the NetzDG: a controversial model? 126
4.2.1.1. Content of the NetzDG 126
4.2.1.2. Controversial aspects: NetzDG and freedom of expression 128
4.2.1.3. Controversial aspects: NetzDG and EU law 131
4.2.2. Beyond the NetzDG: intermediary liability for third-party hate speech across other European experiences 133
4.2.2.1. France: the laws against the manipulation of information and the (maimed) Avia Law 133
4.2.2.2. Italy: of failed legislative attempts and an inconsistent case law 135
4.2.2.3. Spain: the Protocolo para combatir el discurso de odio en línea 140
4.2.3. Democratic backsliding and speech governance in Eastern Europe: the case of “memory laws” in Poland and Hungary 141
4.3. The United Kingdom’s Online Safety Act 147
4.3.1. Scope of the Act 147
4.3.1.1. Material scope of the Act: the debate over the “legal but harmful” provisions and the new “triple shield” 147
4.3.1.2. Subjective scope of the Act: regulated services 149
4.3.1.3. Territorial scope of the Act 150
4.3.2. The new duties for Internet service providers 150
4.3.2.1. Main duties of care 151
4.3.2.2. Codes of practice for duties of care 153
4.3.2.3. Enforcement of Category 1 providers’ terms of service 154
4.3.3. Online Safety Act and hate speech 154
4.3.3.1. Hate speech constituting a criminal offence 154
4.3.3.2. “Legal but harmful” hate speech 156
4.4. The United States 157
4.4.1. United States’ tolerance towards the “thought we hate” 157
4.4.2. Intermediary liability in the US and the rise of Section 230 158
4.4.3. Private moderation and the state action doctrine 161
4.4.4. The Untouchables? Critics and recent developments on the interplay between Section 230, state action doctrine, and the First Amendment 163
4.4.4.1. The strange case of Texas’ HB 20 and Florida’s SB 7072 165
4.4.4.2. Questioning platforms’ immunity for harmful content: Xxxxxxxx v Google, Twitter v Taamneh, and Xxxxxx v Xxxxx 168
4.4.5. Digital Services Act and the United States 172
4.5. A global overview on hate speech and intermediary liability 174
4.5.1. Asia 174
4.5.2. Africa 177
4.5.3. Latin America 178
4.5.4. Australia 179
4.6. Conclusions 181
5. Platform Moderation and Hate Speech in the Algorithmic Age: Preserving Substantive Equality 183
5.1. Introduction 183
5.2. Hate speech and providers: an overview of very large online platforms’ terms and conditions 184
5.2.1. Meta Platforms and the Oversight Board 185
5.2.1.1. The definition of hate speech under Meta’s standards 185
5.2.1.2. Hate speech in the “case law” of the Oversight Board 186
5.2.1.3. Promoting equality and counternarratives 190
5.2.2. Other platforms 193
5.2.2.1. X’s policies 193
5.2.2.2. YouTube’s policies 194
5.2.2.3. TikTok’s policies 195
5.2.3. Observations and conclusions 195
5.3. Artificial Intelligence and hate speech moderation 197
5.3.1. The many forms of content moderation 197
5.3.2. The rise of automated hate speech moderation 199
5.3.3. An introduction to automated hate speech detection systems 202
5.3.3.1. Classification systems: machine-learning, deep-learning, and natural language processing 202
5.3.3.2. Training datasets 203
5.3.3.3. Feature extraction techniques 204
5.3.3.4. Recent developments: large language models 206
5.3.4. Challenges and limitations 207
5.3.4.1. The challenges of multi-modality and context 207
5.3.4.2. Automated moderation and biases 209
5.4. Algorithmic errors and fundamental rights 211
5.4.1. The inevitability of error 211
5.4.2. Acceptable errors and substantive equality 213
5.4.3. Mitigating the impact of errors: areas of action 215
5.5. Algorithmic hate speech moderation in Europe: constitutional challenges and substantive equality 218
5.5.1. Constitutional aspirations of the Digital Services Act 218
5.5.2. A renovated Code of Conduct on Hate Speech? 219
5.5.2.1. DSA, co-regulation, and hate speech 220
5.5.2.2. Renovating the scope of applicability of the Code of Conduct 222
5.5.2.3. Renovating the content of the Code of Conduct through the lens of substantive equality 222
5.5.3. AI regulation beyond the Digital Services Act 225
5.6. Conclusions 227
6. Concluding Remarks 229
6.1. Main findings of the research: an overview 229
6.2. The challenges ahead 234
References 236
Bibliography and online resources 236
Institutional sources 264
Case Law 269
Table of Legislation 275
Abstract
This research aims to investigate the impact of liability-enhancing legal strategies in the context of the governance of online hate speech. Indeed, the increased reliance of the law on the role of private platforms for the purposes of moderating and removing hate speech deeply affects constitutional principles and individual fundamental rights. For instance, the enhancement of intermediary liability and responsibilities can contribute to the phenomenon of the over-removal of user content, with little regard to basic constitutional guarantees. Furthermore, research has shown that the ever-increasing use of automated systems for hate speech moderation gives rise to a whole new set of challenges and issues related to the concrete risk of errors and biased results, leading to a disproportionate removal of content produced by minority, vulnerable, or discriminated groups of people. After dealing with the question concerning the rationale(s) of hate speech regulation and arguing for an increased role for the principle of substantive equality in this regard, this work investigates the developing trends concerning the imposition of forms of intermediary liability with respect to the spread of hate speech content across the Internet, keeping a close eye on the evolving European framework. In doing so, this work also explores the relationship between platforms’ content moderation practices and the promotion of fundamental rights and values – including the principle of substantive equality – especially in the light of the ever-increasing use of artificial intelligence systems for the detection and removal of hate speech. In the context of the European Union, it is held that such reflections are of utmost importance particularly following the adoption of the Digital Services Act. In this respect, the work argues for the need for a renewed code of conduct on hate speech, with a view to further protecting constitutional values and the rights of users.
Keywords: Hate Speech; Intermediary Liability; Non-Discrimination; EU; Content Moderation; Artificial Intelligence; Platform Governance; Freedom of Expression; Substantive Equality; Internet.
List of Acronyms
ACHR American Convention on Human Rights
ACLU American Civil Liberties Union
AG Advocate General
AGCOM Italian Communications Regulatory Authority
AI Artificial Intelligence
AOL America Online
ARCOM Autorité de Régulation de la Communication Audiovisuelle et Numérique
AVMSD Audiovisual Media Services Directive
BVerfG Bundesverfassungsgericht
CAI Committee on Artificial Intelligence (of the Council of Europe)
CDA Communications Decency Act
CFREU Charter of Fundamental Rights of the European Union
CJEU Court of Justice of the European Union
CoC Code of Conduct
CoE Council of Europe
Cons. const. Conseil Constitutionnel
CSA Conseil Supérieur de l’Audiovisuel
CSAM Child Sexual Abuse Material
DMA Digital Markets Act
DMCA Digital Millennium Copyright Act
DSA Digital Services Act
DSC Digital Services Coordinator
DSM Digital Single Market
EBDS European Board for Digital Services
ECHR Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights)
ECRI European Commission against Racism and Intolerance
ECtHR European Court of Human Rights
ECD Directive 2000/31/EC (e-Commerce Directive)
EDSM European Digital Single Market Strategy
EU European Union
FRA European Union Agency for Fundamental Rights
GDPR General Data Protection Regulation
ICCPR International Covenant on Civil and Political Rights
ICERD International Convention on the Elimination of All Forms of Racial Discrimination
INRA (Polish) Institute of National Remembrance Act
ISIS Islamic State of Iraq and Syria
ISP Internet service provider
LGBTQIA+ Lesbian, gay, bisexual, transgender, queer, intersex, asexual, etc.
LLM Large Language Model
LSSI Ley de Servicios de la Sociedad de Información y Comercio Electrónico
NetzDG Netzwerkdurchsetzungsgesetz
NGO Non-Governmental Organization
NLP Natural Language Processing
OB Oversight Board
OECD Organisation for Economic Co-operation and Development
Ofcom Office of Communications (UK)
OHWP Online Harms White Paper
OSA Online Safety Act
OTT Over-the-top
POCs People of colour
PragerU Xxxxxx University
SCOTUS Supreme Court of the United States
TERREG Terrorist Content Online Regulation
TEU Treaty on European Union
TFEU Treaty on the Functioning of the European Union
TGI Tribunal de Grande Instance
TUSMA Testo Unico dei Servizi di Media Audiovisivi
UK United Kingdom of Great Britain and Northern Ireland
UNESCO United Nations Educational, Scientific and Cultural Organization
UNGPs United Nations Guiding Principles on Business and Human Rights
URL Uniform Resource Locator
US(A) United States (of America)
VLOP Very large online platform
VLOSE Very large online search engine
VSP Video-sharing platform
1.
Introduction
Summary: 1.1. Objectives of the research. – 1.1.1. Background of the research: old and new challenges in the fight against hate speech. – 1.1.2. Objectives and research questions. – 1.2. Notes on methodology. – 1.2.1. Material scope of the research. – 1.2.2. Territorial scope of analysis. – 1.2.3. Aspects of interdisciplinarity. – 1.3. Structure of the work. – 1.3.1. Chapter 2: Setting the framework on hate speech governance. – 1.3.2. Chapter 3: Intermediary liability and hate speech in Europe. – 1.3.3. Chapter 4: Comparative perspectives. – 1.3.4. Chapter 5: Platform standards and automated moderation.
1.1. Objectives of the research
1.1.1. Background of the research: old and new challenges in the fight against hate speech
Hate speech regulation has long been a controversial topic for discussion, due to the inevitable repercussions that a legal response aimed at curbing the phenomenon has on freedom of expression. The debate, both in academia and politics, has been particularly prolific during the second half of the twentieth century – notably because of the many legislative reactions (domestic and international) enacted against hate speech in the wake of World Wars I and II – and has reemerged in recent years as a result of the birth of the Internet and of online platforms which, while representing extraordinary tools for the expansion of the right to freedom of expression and information, have also proven to be an avenue for the dissemination of hateful and discriminatory content.1
The act of defining what hate speech actually is from a legal perspective and of identifying which utterances fall within the scope of the term itself raises important and significant challenges, not only because different jurisdictions may choose to adopt their own definitions of the conduct subject to sanction, but also because the expression has often been used in general public debate as well as in philosophical, linguistic, sociological, and psychological discussion.
1 In this sense see, for example, European Commission, ‘Communication from the Commission to the European Parliament and the Council, A More Inclusive and Protective Europe: Extending the List of EU Crimes to Hate Speech and Hate Crime’ COM(2021) 777 final.
Therefore, a variety of interpretations and connotations of “hate speech” are nowadays available and the challenge that the law faces, vis-à-vis the plethora of possible meanings, is that of identifying the appropriate boundaries between permissible and impermissible speech and, consequently, the appropriate boundaries beyond which the imposition of legal sanctions or restrictions ceases to be an acceptable political choice and starts representing an unconstitutional impingement on freedom of expression.
Traditionally, the debate has indeed been mainly focused on addressing these questions, that is, whether (and to what extent) regulation on hate speech is compatible with the democratic order of the state. Different responses have been given by different jurisdictions. Thus, against the backdrop of a constitutional framework where the protection of free speech under the First Amendment is treated as an almost absolute value, the US has generally limited the admissible scope for legal intervention to those rare cases where “hate speech” takes the form of a true threat, of incitement to imminent lawless action, or of low-value “fighting words”,2 provided that such interventions are not motivated by the goal of punishing the expression of a certain – albeit disparaging and discriminatory – viewpoint.3
Conversely, within the European context, hate speech has generally been found to represent a phenomenon directly infringing the dignity and right to equality of those individuals or groups it targets and, as such, to warrant restriction with a view to balancing the protection of freedom of expression with the promotion of other equally important constitutional values and principles. In many cases, the European Court of Human Rights (ECtHR) has held that the utterance of certain, particularly egregious, forms of hate speech amounts in fact to an “abuse of right” under the European Convention on Human Rights (ECHR)4 and, as such, is removed from the guarantees Article 10 sets for freedom of expression and information.5
The main challenge that jurisdictions have traditionally had to face has thus been that of establishing what the boundaries and limits to free speech are, based on their own constitutional value framework,6 and of identifying when, conversely, a certain expression leaves the domain of admissible speech, becoming something else – a “fighting word”, a true threat, an abuse of right, or, more generally, an utterance constituting illegal speech. Far from being solved, the debate around what the contours of legal and illegal hate speech should be is still ongoing and has, even recently, been at the centre of highly polarized narratives in certain jurisdictions. Think, for instance, of the highly debated Zan
2 That is, those words “which, by their very utterance, inflict injury or tend to incite an immediate breach of the peace”. Xxxxxxxxxx v New Hampshire 315 US 568 (1942) 582. See infra, §2.2.1.
3 Xxxxxxxxxxx v Ohio 395 US 444 (1969); RAV v City of St Xxxx 505 US 377 (1992).
4 Convention for the Protection of Human Rights and Fundamental Freedoms 1950 art 17.
5 See, ex multis, Xxxxxxx v France (dec) [2003] ECtHR 65831/01, ECHR 2003-IX; Witzsch v Germany
(2) (dec) [2005] ECtHR 7485/03; Xxxxxxx v the United Kingdom (dec) [2004] ECtHR 23131/03, ECHR 2004-XI; Xxxxx Xxxxxx v Russia (dec) [2007] ECtHR 35222/04; M’bala M’bala v France (dec) [2015] ECtHR 25239/13, ECHR 2015-VIII. See more infra, §2.2.3.1.
6 On the role of the value framework of a country in the creation and application of law, with specific regard to the governance of the digital sphere, see Xxxxxx Xxxxxxxxx, ‘The Quadrangular Shape of the Geometry of Digital Power(s) and the Move towards a Procedural Digital Constitutionalism’ (2023) 29 European Law Journal 10.
Draft Law,7 a project – ultimately rejected by the Senate – for a legislative reform of the Italian framework on hate speech that aimed to broaden the scope of the relevant provisions of the Criminal Code, so as to include sexual orientation, gender identity, gender, sex, and disability among the protected grounds of discrimination.8
Following the creation of the Internet and the increasing spread of online digital platforms for freedom of expression, regulators have had to deal with a whole new set of issues and have had to redefine their strategies. Indeed, the fight against harmful or illegal content must contend, today, with the specific characteristics of the contemporary algorithmic age, described by Xxxxxx in the following terms:
The Algorithmic Society features the collection of vast amounts of data about individuals and facilitates new forms of surveillance, control, discrimination and manipulation, both by governments and by private companies. Call this the problem of Big Data. The Algorithmic Society also changes the practical conditions of speech as well as the entities that control, limit, and censor speech. First, digital speech flows through an elaborate privately-owned infrastructure of communication. Today our practical ability to speak is subject to the decisions of private infrastructure owners, who govern the digital spaces in which people communicate with each other. This is the problem of private governance of speech.9
Against this backdrop, lawmakers across the world have progressively turned towards forms of speech governance attempting to harness the computational power10 of private owners of digital infrastructures with a view, in particular, to increasing their spheres of liability and accountability with respect to the online presence of illegal or harmful content.11 Through the implementation of such strategies, the goal is to push providers of intermediary services, especially those offering hosting or online platform services, to take the necessary actions to reduce as much as possible the presence of content that is illegal or at least considered to be at odds with the interests of the public at large.
This developing trend in the overall governance of online speech has recently become increasingly relevant – and will likely become even more important – also in the context of the fight against hate speech. For instance, the new Regulation (EU) 2022/2065,12 commonly known as the Digital Services Act, has set the basis for a new era for the European regulation of content moderation practices.13 Similarly, legislative attempts in the same direction have been made at the level of domestic state law, as showcased, for instance, by the examples of Germany and France.14
7 AS 2005 (XVIII), Misure di prevenzione e contrasto della discriminazione e della violenza per motivi fondati sul sesso, sul genere, sull’orientamento sessuale, sull’identità di genere e sulla disabilità.
8 See infra, §4.2.2.2.
9 Xxxx X Xxxxxx, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2018) 51 U.C. Xxxxx Law Review 1149, 1153.
10 Xxxxxxx Xxxxxxx, Computational Power: The Impact of ICT on Law, Society and Knowledge
(Routledge 2021).
11 On the rise of such forms of regulation at the European level, see infra, §3. With regard to other jurisdictions, see infra, §4.
12 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), OJ L 277/1.
13 See infra, §3.5.4.
14 See infra, §§4.2.1., 4.2.2.1.
However, in the light of the increased reliance on the private owners of digital infrastructures for the attainment of the goal of protecting public debate from unwarranted pollution of the informational ecosystem, it is necessary to investigate and reflect upon the potential impacts this might have on the constitutional framework and the fundamental rights of the individual users affected. In particular, it has correctly been noted from multiple quarters that the enhancement of intermediary liability and responsibilities will likely have the effect of causing the over-removal of users’ content with little regard to the necessary guarantees for the protection of their freedom of expression rights.15
Furthermore, addressing these matters is particularly important in light of the rise in the use of automated systems for content moderation and content curation. Artificial intelligence (AI) is today an essential and inescapable resource for platforms to detect, remove, and filter out unwarranted items from the Internet.16 The deployment of those tools, nevertheless, gives rise to a whole new set of challenges and issues, not only due to the limited transparency characterizing the functioning of implemented algorithms, but also due to the concrete risk of errors and biased results leading to a disproportionate removal of content produced by minority, vulnerable, or discriminated groups of people. Such an issue is particularly problematic when it comes to the governance of the phenomenon of hate speech.17
1.1.2. Objectives and research questions
The purpose of the present work is to investigate the ways in which “new-school”18 speech regulation strategies have been developing in recent years – both inside and outside Europe – and how those strategies actually relate to the governance of online hate speech, with a view to mapping out what challenges lie ahead and to suggesting possible courses of action to address those challenges at the European level.
To this end, Chapter 2 first focuses on the preliminary and inescapable set of questions concerning the rationale(s) behind the legislative choice to intervene to restrict and limit the scope of freedom of expression with a view to reducing the spread of hate speech. What is, in particular, the constitutional stance of such a choice? What interests does the proscription of hate speech aim to protect? How are those interests balanced with the fundamental right to freedom of expression? Clearly, the answer to such questions is not univocal, both because the responses will vary from jurisdiction to jurisdiction and because jurisdictions proscribing hate speech generally offer a plurality of reasons justifying their choice. Nevertheless, the present work argues for a regulatory approach towards hate
15 In this sense, with specific regard to the Digital Services Act, see, among others, Xxxx Xxxxxx, ‘The Digital Services Act and Its Impact on the Right to Freedom of Expression: Special Focus on Risk Mitigation Obligations’ (DSA Observatory, 27 July 2021) <xxxxx://xxx-xxxxxxxxxxx.xx/0000/00/00/xxx-xxxxxxx-xxxvices-act-and-its-impact-on-the-right-to-freedom-of-expression-special-focus-on-risk-mitigation-obligations/> accessed 3 December 2021. See more infra, §3.5.3.4.
16 See infra, §5.3.
17 See infra, §5.4.
18 Xxxx X Xxxxxx, ‘Old-School/New-School Speech Regulation’ (2014) 127 Harvard Law Review 2296.
See infra, §3.2.2.
speech governance that – cognizant of the fact that hate speech inherently perpetuates dynamics of dominance between speaker and targets – aims to serve as a remedy precisely against such dynamics. In this sense, this work suggests using the lens of substantive equality when dealing with the regulation of hate speech, which should ultimately be oriented towards the active promotion of an equal standing of all demographics in society.
The second set of questions, addressed within Chapter 3, concerns the ways in which the law has evolved in Europe – both at the level of the Council of Europe and at the level of the European Union (EU) – with regard to the area of intermediary liability in general and of liability for third-party hate speech in particular. This analysis is carried out with due regard to critically investigating how the resulting framework relates to constitutional values and fundamental rights and whether the principle of (substantive) equality enters into such a framework. How has the ECtHR case law evolved with respect to intermediary duties to remove illegal content and, specifically, hate speech? Is intermediary liability for third-party hate speech consistent, in the context of the ECHR framework, with the right to freedom of expression? In parallel to the development of that case law, which trends have been followed on these matters by the EU? What novelties, in particular, has the 2022 Digital Services Act introduced in the context of the regulation of online content moderation and to what extent do these novelties apply to the case of hate speech moderation? What are the main limitations of the Digital Services Act and how will such novelties affect freedom of expression and the right to equality?
These questions are, furthermore, closely related to another set of questions addressed within Chapter 4. Indeed, the evolving EU legal framework on hate speech moderation is not set within a void. Its implementation will necessarily have to take into account the domestic legislation of the various EU Member States but may also, given the transnational character of online content, clash with the legal systems of foreign jurisdictions. To what extent, therefore, is EU law consistent with the law of Member States? How are different legal systems outside of Europe addressing hate speech and the presence of illegal content on the Internet? What challenges may arise with respect, in particular, to the relationship between the Digital Services Act and the constitutional framework of the United States (US)? Are other jurisdictions following a regulatory model similar to the European one?
Finally, precisely because developing legal trends – both inside and outside of Europe – are progressively shifting towards increasing the liability and responsibility of providers of intermediary services to remove unwarranted content, a fourth set of research questions, dealt with in Chapter 5, will focus specifically on the ways in which private platforms enforce their own duties and content moderation practices. Indeed, the manner in which these private actors govern online speech, and hate speech in particular, has highly significant consequences in terms of how users’ fundamental rights are affected and in terms of whether such practices actually enable the inherent goals of hate speech governance to be achieved. In this context, close consideration is given to the use of AI for the purposes of detecting and removing hateful content. More specifically, how is hate speech defined and treated under private platforms’ terms and conditions and what is the system of values
underpinning those terms and conditions? How are these private rules enforced from a technical point of view? What are the main characteristics, capabilities, and limitations of automated systems of content moderation? What is the overall impact of private content moderation practices on freedom of expression and the right to equality? In light of such impact, what are the challenges that lie ahead for the implementation of EU law, notably the Digital Services Act? Can the principle of substantive equality represent a valid focal point to orient future legislative and policy choices?
1.2. Notes on methodology
1.2.1. Material scope of the research
As described above, the present research focuses on the analysis of legislative responses against the dissemination of online illegal or harmful content, with a close eye on the phenomenon of hate speech. In this respect, the very notion of hate speech is not always clear, not least because of the significant increase in the use of the term in everyday language and in the context of non-legal debates and discussions. As better clarified within Chapter 2, the present research mainly refers to a concept of hate speech that is, in its essence, comparable to that adopted by the European Commission against Racism and Intolerance inside its General Policy Recommendation No. 15.19
Admittedly, the definition contained within that Recommendation is rather broad in its scope of application, because it treats a wide variety of expressive conduct as relevant. Nevertheless, that definition is highly relevant inasmuch as it clarifies that the specific feature distinguishing hate speech is that it is “based on a non-exhaustive list of personal characteristics or status that includes ‘race’, colour, language, religion or belief, nationality or national or ethnic origin, as well as descent, age, disability, sex, gender, gender identity and sexual orientation”.20 In other words, hate speech is inherently rooted in and is a direct expression of discrimination.21
It is for this reason that, within the present work, specific attention is given to the analysis of the phenomenon of hate speech under the lens of its strict interrelation with the typical categories of anti-discrimination law. In particular, the work borrows from that field the concept of substantive equality, intended as the – constitutionally relevant – aspiration towards the active elimination of the barriers to the pursuit of true equality between societal demographics. More specifically, the work refers to the concept of substantive equality as theorized by Xxxxxx Xxxxxxx, who, rather than considering it as a unitary principle, identifies it as a complex one, composed of a variety of dimensions.22 Particular attention will be given to the “participative” dimension of substantive equality,
19 European Commission against Racism and Intolerance, ‘General Policy Recommendation No. 15 on Combating Hate Speech’ (Council of Europe 2015) CRI(2016)5. See infra, §2.2.3.1.
20 ibid.
21 See more infra, §2.2.4.
22 Xxxxxx Xxxxxxx, ‘Substantive Equality Revisited’ (2016) 14 International Journal of Constitutional Law 712. See infra, §2.5.2.1.
which, according to the present work, provides a lens of utmost importance for the definition of responses to the upcoming challenges of hate speech governance.
Furthermore, the analysis contained within the present work shall focus specifically on the interrelations between liability-enhancing regulation, private governance of online speech, and related impacts on fundamental rights of users.
In this respect, it is first of all important to stress that, from a terminological point of view, terms relating to “Internet intermediaries”, “Internet service providers”, “online platforms”, etc., shall generally be used interchangeably as umbrella expressions to refer to the composite and extremely wide category of private actors that are active in the market of digital services. Nevertheless, when discussing the content or application of specific legislative acts, the work shall rely on the technical terms and definitions contained within those sources. Thus, for instance, when referring to the framework established by the Digital Services Act, the term “online platform” shall be intended as referring specifically to “a hosting service that, at the request of a recipient of the service, stores and disseminates information to the public”.23
Second, because the purpose of the research is to investigate how the law can influence the hate speech moderation practices of private actors and how those practices affect, in turn, the liberties of users – and, consequently, the governance of hate speech from an anti-discriminatory perspective –, this work, while acknowledging the rise of new non-human purveyors of hate speech, shall not deal specifically with that aspect. In particular, the spread of more and more advanced generative AI systems, including large language models, has raised the challenge of the emergence of new forms of hate speech originating from those technologies. While representing a critical challenge for the future, such an issue falls outside the scope of this research. AI will, instead, be considered inasmuch as it is increasingly used by platforms for the purposes of detecting and removing hate speech and may thus affect, in particular, users’ ability to enjoy their right to freedom of expression online in conditions of equality.24
1.2.2. Territorial scope of analysis
The research mainly considers the European legal framework on hate speech moderation, taking into consideration the developments occurring both within the case law of the ECtHR and within the body of legislation of the EU. Within the Old Continent, indeed, digital policies, including policies concerning the governance of online speech, are increasingly addressed at a supranational – rather than merely national – level, with significant regulatory interventions especially from EU institutions.
Thus, Chapter 2 mainly considers the debate on hate speech regulation by focusing on the way European Courts and European legal and policy documents have addressed the matter. Chapter 3, similarly, contains an extensive review of the European framework on intermediary liability regulation. Chapter 5, dealing with platform governance practices
23 DSA art 3, lett (i). See infra, §3.5.3.
24 See infra, §5.
and with the use of automated systems for hate speech moderation, investigates how EU law can face the human rights challenges raised by these practices and tools, with a view to embedding within them principles and values that are an expression of the European constitutional framework.
At the same time, the research, acknowledging in particular the transnational nature of the phenomenon of hate speech, also considers other legal frameworks from a comparative perspective. In Chapter 2, for example, specific regard is given to the approach of the US towards hate speech governance against the background of the evolution of First Amendment jurisprudence across the last two centuries. Additionally, Chapter 4 considers the topic of intermediary liability legislation and hate speech governance precisely by taking a comparative overview of jurisdictions both within and outside the EU.
1.2.3. Aspects of interdisciplinarity
The research mainly addresses the topic of hate speech governance from a legal perspective. Thus, in this respect, the work is based on an extensive review of relevant literature on this topic and related issues, as well as upon landmark case law and legislation. As already mentioned, the analysis mainly addresses the European landscape, but comparative elements are also present throughout the work. Through this analysis, the work aims to identify the rationale justifying the adoption of measures against hate speech, the novel challenges brought about in this respect by the Internet, and the issues in terms of fundamental rights related to the development of new legislative responses. The goal is, ultimately, to suggest a key for the interpretation of the phenomenon as a whole and, thus, to suggest preliminary tools to address the challenges still lying ahead.
Nevertheless, the full understanding of the phenomenon of online hate speech, as well as of the role and impact of contemporary practices of (private) content moderation, also requires considering relevant technological aspects. In this respect, the legal and policy analysis is complemented by a review of relevant technical literature. Specifically, the analysis contained in Chapter 5 aims to give an overview of the technical aspects of the AI systems deployed to remove hate speech content from the Internet, with a view to highlighting those systems’ limitations and the consequent effects on fundamental rights and public speech governance policies.
1.3. Structure of the work
While the previous sections have already highlighted the key aspects of the present work, the following subsections will give a more detailed overview of the content of the dissertation’s Chapters.
1.3.1. Chapter 2: Setting the framework on hate speech governance
Chapter 2 introduces the many issues and challenges relating to the development of an adequate hate speech governance system, both within the online and offline environment.
First, it aims to give the reader the necessary background information concerning the origins of the notion of “hate speech” in the US system and to give an overview of how the international framework on hate speech has evolved throughout the twentieth century.25 It also introduces the European regional framework on hate speech, considering both the case law of the ECtHR and the legislation of the EU. In this way, the concept itself of hate speech is better investigated from a legal point of view, serving as a baseline for the remainder of the work.
The Chapter then moves on to address the debate concerning the main rationales behind the possible legal options vis-à-vis the phenomenon of hate speech. The US framework is, in particular, taken as a model of a “liberal” and “tolerant” approach towards the “thought that we hate”.26 Conversely, the European perspective, especially that enshrined within the judgments of the ECtHR, is taken as a model of a more “militant” approach, oriented towards the protection of the rights, dignity, and equality of groups traditionally targeted by hate speech.
In this respect, the peculiar aspects characterizing specifically the online dimension of hate speech are also investigated, with a view to highlighting the emerging challenges set by the Internet and to showcasing how digital technologies have themselves been described in different terms across the two sides of the Atlantic. In the US, the narrative has in fact generally been optimistic, with the recognition of the Internet as an extraordinary avenue for free speech, whereas on the Eastern side of the Atlantic more attention has been given to the new risks and threats posed by it.
The Chapter, finally, argues for an interpretation of the hate speech phenomenon as inherently grounded in its relationship to the perpetuation of dynamics of power and dominance within the social fabric, starting from some basic notions and concepts taken from speech act theory. As a result, the Chapter suggests that the purpose of hate speech governance should be precisely to combat the dominance dynamics entailed by it and argues that, in order to do so, legal strategies in this area should be guided by the principle of substantive equality as their target.
1.3.2. Chapter 3: Intermediary liability and hate speech in Europe
Chapter 2 having explored the main features characterizing the phenomenon of hate speech both offline and online, Chapter 3 delves into the developments undergone by ECtHR case law and EU legislation in terms of intermediary liability for third-party content.
25 See, in particular, International Covenant on Civil and Political Rights 1966 arts 19–20; International Convention on the Elimination of All Forms of Racial Discrimination 1965 art 4.
26 Matal v Tam 582 US (2017) 25.
With respect to the ECtHR, specific attention is given to the landmark judgments of Delfi27 and MTE,28 as well as to the subsequent legacy of those decisions.29 In this sense, the Chapter discusses, in particular, how the ECtHR case law has established a rather exceptional approach towards intermediary liability for third-party hate speech content, as opposed to other types of unlawful material. Indeed, whereas from MTE onwards the Strasbourg Court has adopted a narrow approach towards the governmental enforcement of forms of intermediary liability for the dissemination of illegal content, due to concerns related to Article 10 ECHR, hate speech, representing itself an abuse of freedom of expression, has generally been considered to be deserving of more invasive state intervention.
As regards the EU, Chapter 3 stresses the shift from an inherently liberal original phase towards an increasingly interventionist approach. In this respect, the Chapter first investigates the active role of the Court of Justice of the EU (CJEU) in adapting the interpretation of the e-Commerce Directive30 in the light of the evolving technological paradigm. Then, the work gives an overview of the most recent (from the end of the 2010s onwards) legislative trends characterizing the Union’s policy strategies on content moderation, critically assessing the characteristics of the developing framework and the challenges arising from a constitutional and human rights law perspective.
Finally, the Chapter moves on to analyse the significant development in EU law represented by the enactment of the already mentioned Regulation (EU) 2022/2065, that is, the Digital Services Act. The new Regulation, indeed, carries out a general and horizontally applicable reform of the system established in 2000 by the e-Commerce Directive. In particular, the Chapter aims to give an overview of the new legislation, focusing on the new set of rules on providers’ due diligence obligations “for a transparent and safe online environment”, while also investigating the relationship between the Act and the challenge of hate speech moderation.
1.3.3. Chapter 4: Comparative perspectives
Chapter 4 gives a broad overview, from a comparative perspective, of how the challenges raised by online hate speech have – or have not – been addressed by different jurisdictions.
First, the Chapter explores the relationship between the EU framework and the domestic legislation of some notable Member States. Among these, specific consideration is
27 Delfi AS v Estonia [2015] ECtHR [GC] 64569/09, ECHR 2015.
28 Magyar Tartalomszolgáltatók Egyesülete and Xxxxx.xx Zrt v Hungary [2016] ECtHR 22947/13.
29 See, in particular, Xxxx v Sweden (dec) [2017] ECtHR 74742/14; Xxxxxxx v Norway [2019] ECtHR 43624/14; Standard Verlagsgesellschaft Mbh v Austria (no 3) [2021] ECtHR 39378/15; Xxxxxxx v France [2023] ECtHR [GC] 45581/15, ECHR 2023.
30 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce’), OJ L 178/1.
given to the German Network Enforcement Act31 which, enacted in 2017, has since then served as an internationally relevant blueprint for the regulation of intermediary liability with respect to user-generated hate speech. Subsequently, the experiences of three major EU countries – France, Italy, and Spain – are described. All three jurisdictions, indeed, have addressed the phenomenon of online hate speech differently – with more or less successful outcomes – and thus showcase the variety of domestic legal tools that the application of the Digital Services Act will have to take into consideration. Additionally, the Chapter critically discusses the online speech governance approaches of two Eastern European countries, Poland and Hungary, that have suffered in recent years from forms of democratic backsliding. Once again, the relationship of those national approaches to the EU’s Digital Services Act is the main focus, especially in the light of the adoption in those countries of much debated “memory laws”.
Second, Chapter 4 describes the recently adopted UK Online Safety Act, with a view to outlining its material, subjective, and territorial scope of application, the new set of duties imposed upon providers of Internet services, and the role the Act shall play in the fight against online hate speech across the UK. The Online Safety Act, indeed, offers many interesting terms of comparison with the Digital Services Act, the two pieces of legislation having aims and goals that largely coincide.
Third, the Chapter takes once again a look at the legal framework of the US concerning intermediary liability, a framework that is, in fact, radically different from the one characterizing the EU. In this respect, Chapter 4 addresses in particular the rise, at the end of the 1990s, of the famous Section 230 of the Communications Decency Act,32 outlining the fundamental role played by the provision in the development of the US case law on intermediary liability. The interplay between Section 230, the state action doctrine, and the First Amendment is also dealt with, as well as the increasing criticism levelled by both conservatives and liberals at the current system and the attempts that have thus been made on both sides to amend the provision. Indeed, the success or failure of such attempts may well support or hamper a positive relationship between the Digital Services Act and US constitutional law.
Finally, the Chapter briefly outlines some other legislative approaches worldwide. It is indeed important to highlight the plurality of techniques that can be and have been adopted with respect to online speech governance and to bear in mind, specifically, that the regulatory strategies of Western democracies may have to deal with other regulatory frameworks.
1.3.4. Chapter 5: Platform standards and automated moderation
The goal of Chapter 5 is mainly that of investigating how providers of intermediary services themselves have addressed the phenomenon of hate speech, both in terms of the
31 Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken (Netzwerkdurchsetzungsgesetz - NetzDG) 2017 (BGBl I S 3352).
32 Communications Decency Act 1996.
policies adopted and in terms of the practical means of enforcement of those policies. This analysis is indeed considered to be essential for the purposes of identifying what further challenges still lie ahead in the governance of online hate speech and of defining the future strategies to be implemented by the EU in this respect.
The Chapter is, in its essence, structured into two parts. The first part focuses on the private anti-hate speech strategies applied by major providers of intermediary services. In particular, the work deals with the policies, standards, and terms and conditions formulated by those actors, with a close eye on the case of Meta Platforms – whose terms and conditions are analysed in the light of the decisions rendered in recent years by the Meta Oversight Board – and also considering the cases of X, YouTube, and TikTok. The goal is, in this respect, to search for common patterns and features, as well as to compare those platforms’ policy instruments with the European legal framework and, importantly, to assess their consistency with the principle of substantive equality. From a more technical perspective, the Chapter also considers the technical means through which hate speech is actually moderated by private platforms, focusing specifically on the rise of AI detection systems and giving an overview of their main features, their functioning and limitations.
The second part of the Chapter contemplates the challenges that the ways in which platforms moderate hate speech pose to the law and, specifically, to European hate speech governance and the protection of constitutional values and fundamental rights. In this respect, the work highlights how the resort to AI systems for content moderation and content curation necessarily entails the presence of certain margins of error – thus requiring policymakers and lawmakers to define the limits of “acceptability” of error – and suggests substantive equality as a proxy to determine the borders of acceptable errors in the context of hate speech moderation in Europe. The Chapter also indicates some areas of action to be addressed – namely, the areas of transparency, rule of law, and due process
– and underlines how the Digital Services Act may indeed serve as the baseline for such mitigating interventions within the European context.
Most notably, the Chapter argues that the adoption of more specific guidelines with regard to the moderation of hate speech could represent a noteworthy asset. In this respect, the Chapter calls for a renovation of the current EU Code of Conduct on Illegal Hate Speech.33
33 Code of Conduct on Countering Illegal Hate Speech Online 2016.
2.
Hate Speech and Substantive Equality: A Theoretical Framework
Summary: 2.1. Introduction. – 2.2. The concept of hate speech in the global and European context. – 2.2.1. Origins of the term and constitutional approach to hate speech in the United States. – 2.2.2. Lessons from international human rights law. – 2.2.2.1. Article 20 ICCPR. – 2.2.2.2. Article 4 ICERD. – 2.2.3. Hate speech in Europe. – 2.2.3.1. The Council of Europe. – 2.2.3.2. The European Union. – 2.2.4. Interim conclusions. – 2.3. The transatlantic debate on hate speech regulation. – 2.3.1. The liberal approach: the US model of the free marketplace of ideas. – 2.3.2. The militant approach: the case of Europe. – 2.4. Hate speech and the Internet. – 2.4.1. Free speech and information in the digital age. – 2.4.2. Main characters of online hate speech. – 2.4.2.1. Permanence. – 2.4.2.2. Itinerancy. – 2.4.2.3. Anonymity. – 2.4.2.4. Cross-jurisdictional nature of online content. – 2.4.3. The role of algorithmic content moderation and curation. – 2.5. Anti-discrimination perspectives on hate speech: a substantive equality approach. – 2.5.1. Hate speech as domination: some takeaways from speech act theory. – 2.5.2. Substantive equality as a lodestar for hate speech governance. – 2.5.2.1. The concept of substantive equality. – 2.5.2.2. Substantive equality and hate speech in the European multi-level human rights protection system. – 2.5.3. Hate speech governance and substantive equality in the world of bits. – 2.6. Conclusions.
2.1. Introduction
The purpose of the present Chapter is to introduce the many issues and challenges relating to the development of an adequate hate speech governance system both within the online and offline environment. The concept of “hate speech”, indeed, is not univocal, nor are the legal approaches to such a phenomenon within the global context. The rationale, itself, behind a legislative reaction against hate speech has long been the topic of a doctrinal and political debate which is far from being solved. The perspective adopted within the present work is, nonetheless, that hate speech governance, at least within the European context, should be driven, primarily, by the goal of fostering and promoting the substantive equality of the individuals and groups of individuals that are more commonly vulnerable to hate speech victimization.
The Chapter is structured as follows. Section 2.2 aims to give the reader the necessary background information concerning the origins of the notion of “hate speech” (§2.2.1),
as well as about the international (§2.2.2) and regional – namely, European (§2.2.3) – human rights framework on hate speech, so as to identify common patterns and/or inconsistencies (§2.2.4).
Section 2.3 underscores the different, and often opposing, approaches that the law can take with respect to the discussed phenomenon: for this purpose, the United States (§2.3.1) and European (§2.3.2) perspectives are considered, as they represent key models for “liberal” versus “militant” approaches to hate speech regulation.
Section 2.4 focuses on the context of the Internet, highlighting, in particular, how the specific characters of online communication and information (§2.4.1) can influence the way hate speech is disseminated and distributed and the way this may affect its targets (§2.4.2). Due regard is also given to the increasingly important role played, in the context of expression and information rights across the Internet, by the resort to algorithmic practices of content moderation and content curation (§2.4.3).
Section 2.5, moving from some preliminary notions grounded in speech act theory, investigates the links and connections between the European approach to hate speech governance, as described in the previous sections, and the principle of substantive equality. In particular, after having stressed the capability of hate speech to produce illocutionary effects consisting of the perpetuation of dynamics of power and dominance within the social fabric (§2.5.1), the Section moves on to argue that the principle of substantive equality could (and should) be invoked as a lens to interpret the goals of hate speech governance, the purpose of which could be understood precisely as providing a remedy against those dynamics of power and dominance (§2.5.2). It is also noted that, that being the case, governing the phenomenon of hate speech in the digital sphere raises specific challenges related, in particular, to the (more and more automated) private moderation systems deployed by platforms (§2.5.3).
Finally, Section 2.6 briefly provides some interim conclusions which shall represent the steppingstone for Chapter 3.
2.2. The concept of hate speech in the global and European context
When addressing the phenomenon of “hate speech”, one of the most significant challenges is that of identifying what the expression actually means. Admittedly, there is in fact no universally accepted definition of the term. On the one hand, if one considers the phenomenon of hate speech from a legal perspective, one is confronted with an extraordinary variety of legal frameworks across the globe, which may vary not only with respect to the solutions adopted but also with respect to the actual scope of the notion of “hate speech” itself. On the other hand, “hate speech” is not only a legal concept, as it is also relevant for other fields of knowledge such as philosophy, linguistics, psychology, and sociology. Additionally, the expression has increasingly entered the ordinary and everyday language of people who are not professionals of the law.1
The purpose of the present section is not that of offering a solution to the interpretive challenges set by the term but, rather, that of presenting an overview of its content under landmark international human rights law, as well as under the European human rights framework (hereby including both the Council of Europe and the European Union systems). In other words, the purpose is to highlight what forms of speech and what types of hate may be included within the umbrella expression “hate speech”, at least within the Old Continent, and thus to identify the fundamental features characterizing the phenomenon of “hate speech” as intended for the purposes of the present research.
2.2.1. Origins of the term and constitutional approach to hate speech in the United States
The Oxford English Dictionary – in defining “hate speech” as speech, address or written material capable of inciting hatred or intolerance, especially against a particular social group on the basis of its members’ ethnicity, religious beliefs, sexuality, etc. – clarifies that the origins of the term can be traced back to the United States.2 Indeed, the debate concerning hate speech and free speech in the US constitutional system dates back to the 1920s, when historical victims of prejudice and discrimination launched a concerted effort to react against the forms of oppression they had traditionally been subjected to. In so doing, these groups entered into disagreement with the newly founded American Civil Liberties Union (ACLU), dedicated primarily to the promotion and defence of the values of free speech.3
Subsequently, throughout the twentieth century, US constitutional jurisprudence on hate speech, under the guidance of the Supreme Court (SCOTUS), underwent a significant evolution. Most notably, after a brief period in which the phenomenon was categorized as a form of “group libel” and was considered to be legitimately subjectable to punishment in the aftermath of Beauharnais v Illinois,4 the SCOTUS inaugurated with the 1969 decision of Xxxxxxxxxxx v Ohio5 a consistent strand of case law, still applicable today, significantly curtailing the possibility for the government to impose limitations and restrictions upon the utterance of hate speech. Indeed, the inherent rejection of any form of content- or viewpoint-based regulation, characterizing US First Amendment
1 Xxxxxxxxx Xxxxx, Hate Speech Law: A Philosophical Examination (Routledge 2015); Xxxxxxxxx Xxxxx, ‘What Is Hate Speech? Part 1: The Myth of Hate’ (2017) 36 Law and Philosophy 419; Xxxxxxxxx Xxxxx and Xxxxxxx Xxxxxxxx, The Politics of Hate Speech Laws (Routledge 2019); Xxxxx Xxxxxx, Discorsi d’odio. Modelli Costituzionali a Confronto (Xxxxxxx 2018).
2 ‘Hate, n.’ <xxxxx://xxx.xxx.xxx/xxxx/Xxxxx/00000> accessed 28 December 2022.
3 Xxxxxx Xxxxxx, Hate Speech: The History of an American Controversy (University of Nebraska Press 1994) 9–10.
4 Beauharnais v Illinois 343 US 250 (1952).
5 Xxxxxxxxxxx v Ohio 395 US 444 (1969).
jurisprudence,6 generally rules out the constitutional legitimacy of hate speech bans, which can only be adopted in specific cases, such as when those expressions amount to “true threats”7 or, even more importantly, when they constitute “fighting words” – that is, when “by their very utterance” they “inflict injury or tend to incite an immediate breach of the peace”.8
With regard to the latter category, it is worth mentioning the case of Xxxxxxxxxx v New Hampshire, where, while defining for the first time the concept of fighting words in the context of US free speech jurisprudence, the SCOTUS held that this category, representing a form of low-value speech, should not be considered worthy of full First Amendment protection, so that the adoption of legal reactions against it should generally be considered as allowed by the US Constitution. Indeed, “such utterances are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality”.9 Quite evidently, the Court’s approach to fighting words in Xxxxxxxxxx could have opened the doors to the constitutional legitimacy of many forms of hate speech bans in the US. Nonetheless, subsequent case law from the SCOTUS went on to reduce the scope of applicability of the category. First, the 1971 decision of Xxxxx v California redefined that class of speech, concluding that it included only those words that, “when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction”.10 Through this judgment, the SCOTUS thus significantly heightened the standards required for the adoption of measures against fighting words, as showcased, namely, by the famous Skokie judicial saga.11
6 Xxxxxxxx X Xxxxx, American Constitutional Law (2nd edn, Foundation Press 1988) 789–792; Xxxxxx X Xxxxxx, ‘The Content Distinction in First Amendment Analysis’ (1981) 34 Stanford Law Review 113; Xxxxxxxx X Xxxxx, ‘Content Regulation and the First Amendment’ (1983) 25 Xxxxxxx & Xxxx Xxx Review 189; Xxxxx X Xxxxxxxx, ‘Content Discrimination and the First Amendment’ (1991) 139 University of Xxxxsylvania Law Review 615; Xxxxxx Xxxxxxxx, ‘Content Discrimination Revisited’ (2012) 98 Xxxxxxxx Xxx Review 231. See also, ex multis, Police Department of the City of Chicago v Xxxxxx 508 US 92 (1972).
7 “‘True threats’ encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals
… a prohibition on true threats ‘protect[s] individuals from the fear of violence’ and ‘from the disruption that fear engenders,’ in addition to protecting people ‘from the possibility that the threatened violence will occur.’ … Intimidation in the constitutionally proscribable sense of the word is a type of true threat, where a speaker directs a threat to a person or group of persons with the intent of placing the victim in fear of bodily harm or death”. Virginia v Black 538 US 343 (2003) 359–360.
8 Xxxxxxxxxx v Xxx Xxxxxxxxx 000 XX 000 (1942) 582.
9 ibid 572. See, on Xxxxxxxxxx and on the concept of “low-value speech”, Xxxxxxxxx Xxxxxx, ‘The Invention of Low-Value Speech’ (2015) 128 Harvard Law Review 2166.
10 Xxxxx v California 403 US 15 (1971) 20.
11 Indeed, in light of Xxxxx’x redefinition of “fighting words”, the Illinois Supreme Court held that the Village of Skokie’s refusal to allow a neo-Nazi parade was unconstitutional because the wearing of the Swastika and of Nazi regalia could not be considered to constitute a case of “fighting words”: “The display of the swastika, as offensive to the principles of a free nation as the memories it recalls may be, is symbolic political speech intended to convey to the public the beliefs of those who display it … It does not, in our opinion, fall within the definition of “fighting words,” and that doctrine cannot be used here to overcome the heavy presumption against the constitutional validity of a prior restraint”. Village of Skokie v Nat’l Socialist Party of America 373 NE2d 21 (Ill 1978) 24.
Additionally, the 1992 decision of RAV v City of St. Paul12 added further important limitations to the possibility for local, state, and federal authorities to impose governmental restrictions on the phenomenon of hate speech. In this case, the applicant – a juvenile at the time of the facts – had been prosecuted, together with other people, for having burnt a cross in front of the house of an African American who had recently moved into their neighbourhood. Such conduct had been punished under a local statute passed by the City of St. Xxxx, Minnesota, which made it a misdemeanour to place on private or public property symbols, objects, appellations, characterizations or graffiti with the knowledge or reasonable expectation that such action would arouse anger, alarm or resentment in others on the basis of race, colour, creed, religion or gender. The SCOTUS unanimously held that the ordinance represented an inadmissible restriction on freedom of speech. Most notably, the majority, although accepting the view that the statute only specifically dealt with the class of fighting words, concluded nonetheless that its purpose was precisely that of prohibiting otherwise permitted speech solely on the basis of the subjects the speech addressed.13 In other words, the Court’s majority argued that the choice of the statute to address only those fighting words that were based on the categories of race, colour, creed, religion, and gender inherently indicated the actual goal not of proscribing fighting words as such but, rather, of opposing the utterance of a specific point of view. As a result, the ordinance, precisely because of its “underbreadth”,14 was considered to be vitiated by viewpoint discrimination and, therefore, unconstitutional under the First Amendment.15
As a result, the US constitutional framework has repeatedly proven to be, in general terms, opposed to the adoption of forms of hate speech bans as such, precisely because the expression of discriminatory and dehumanizing opinions cannot, per se, be subjected to governmental constraints without these translating into unwarranted limitations on specific viewpoints and, thus, upon the free marketplace of ideas protected by the First Amendment. Quite curiously, the global approach towards hate speech regulation has
12 RAV v City of St Xxxx 505 US 377 (1992).
13 ibid 381.
14 The expression “underbreadth”, in fact, was adopted in critical terms within Justice Xxxxx’x concurring opinion. See ibid 402.
15 “Although the phrase in the ordinance, ‘arouses anger, alarm or resentment in others,’ has been limited by the Minnesota Supreme Court’s construction to reach only those symbols or displays that amount to ‘fighting words,’ the remaining, unmodified terms make clear that the ordinance applies only to ‘fighting words’ that insult, or provoke violence, ‘on the basis of race, color, creed, religion or gender.’ Displays containing abusive invective, no matter how vicious or severe, are permissible unless they are addressed to one of the specified disfavored topics. Those who wish to use ‘fighting words’ in connection with other ideas – to express hostility, for example, on the basis of political affiliation, union membership, or homosexuality – are not covered. The First Amendment does permit St. Xxxx to impose special prohibitions on those speakers who express views on disfavored subjects … moreover, the ordinance goes even beyond mere content discrimination, to actual viewpoint discrimination … ‘fighting words’ that do not themselves invoke race, color, creed, religion, or gender – aspersions upon a person’s mother, for example – would seemingly be usable ad libitum in the placards of those arguing in favor of racial, color, etc., tolerance and equality, but could not be used by those speakers’ opponents”. ibid 391. In this respect see, among others, Xxxxx Xxxx Xxxx, ‘The Case of the Missing Amendments: R.A.V. v. City of St. Xxxx’ (1992) 106 Harvard Law Review 124; Xxxxxx Xxxxxxxxx, ‘Hate Speech in Constitutional Jurisprudence: A Comparative Analysis’ (2002) 24 Xxxxxxx Law Review 1523.
evolved in a manner which is rather different from that of the jurisdiction where the term originated. Not only has the body of international human rights protection laws provided significantly for the introduction of legal measures and responses against the discussed phenomenon, but many regional, as well as national, frameworks have increasingly moved towards the imposition of limitations on the utterance and spread of forms of hate speech. In this respect, the following subsections will focus, specifically, on the UN and European landscapes.
2.2.2. Lessons from international human rights law
International human rights documents represent the necessary starting point of any discussion concerning the imposition of legal limitations and restrictions on hate speech. Most notably, the International Covenant on Civil and Political Rights (ICCPR)16 and the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD)17 have been paramount in shaping the subsequent development of hate speech regulations on a global scale.18
Consistent with the Universal Declaration of Human Rights,19 Article 19 of the ICCPR explicitly recognizes individuals’ right to freedom of expression, which includes the freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers and through any means chosen. Nonetheless, paragraph 3 of the Article also recognizes that freedom of expression, since it “carries with it special duties and responsibilities”, may be subjected to certain restrictions when these are provided by the law and are necessary in order to guarantee the rights and reputation of others or to protect publicly relevant goods such as national security, public order, or public health or morals.
Additionally, Article 20, paragraph 2, notably affirms that “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law”,20 a provision which is quite unique within the Covenant itself as it is the only one requiring (and not prohibiting) an active intervention by states parties.21 Although it does not employ the term “hate speech”, the ICCPR is thus considered to be one of the first and most significant documents introducing its notion and concept at an international level, as it recognized as legally relevant a set of conducts which pertain specifically to the sphere of what hate speech is: that is, incitement to discrimination,
16 International Covenant on Civil and Political Rights 1966.
17 International Convention on the Elimination of All Forms of Racial Discrimination 1965.
18 Xxxxxxxxx Xxxxxxx, ‘Molding The Matrix: The Historical and Theoretical Foundations of International Law Concerning Hate Speech’ (1996) 14 Berkeley Journal of International Law 1.
19 Universal Declaration of Human Rights 1948 art 19.
20 International Covenant on Civil and Political Rights art 20.
21 Xxxx Xxxx, ‘Extreme Speech Under International and Regional Human Rights Standards’ in Xxxx Xxxx and Xxxxx Xxxxxxxxx (eds), Extreme Speech and Democracy (Oxford University Press 2009) 70.
incitement to hostility, and incitement to violence through the advocacy of hatred based on national, racial, or religious grounds.22
The relationship between Article 19 and Article 20 has raised doubts as to their mutual coherence. Such doubts, however, were first rejected by the Human Rights Committee in its General Comment No. 11 (1983), according to which “these required prohibitions are fully compatible with the right of freedom of expression”.23 Subsequently, the Committee confirmed its position once again in its General Comment No. 34 (2011), where it clarified that Article 20, paragraph 2, is to be considered as a lex specialis of Article 19, paragraph 3: this meant, according to the Committee, that states parties, when implementing the hate speech prohibition, must comply with the threefold requirement set therein (i.e., prior provision by the law; legitimate aim; and proportionality).24
Furthermore, Article 20 does not require states to prohibit any type of advocacy of hatred, but only those forms of advocacy that constitute “incitement”, that is, those that aim at provoking specific reactions and are in fact capable of producing contingent harm.25 As a result, the threshold set by the provision is rather high and “does not ban hate speech outright but only requires the prohibition of certain qualified types of hate speech”.26 As underlined by Xxxxxxxxx, the act of incitement under Article 20 implies a triangular scheme in which the advocate produces an “imminent risk” or “likelihood” that the audience will be stirred to discrimination, hostility and violence against the target group.27
22 In this respect, the Committee on the Elimination of Racial Discrimination clarified that, although “the term hate speech is not explicitly used in the Convention, this lack of explicit reference has not impeded the Committee from identifying and naming hate speech phenomena and exploring the relationship between speech practices and the standards of the Convention”. Committee on the Elimination of Racial Discrimination, ‘General Recommendation No. 35. Combating Racist Hate Speech’ (United Nations 2013) CERD/C/GC/35 para 5.
23 Human Rights Committee, ‘General Comment No. 11. Prohibition of Propaganda for War and Inciting National, Racial or Religious Hatred (Art. 20)’ (United Nations 1983) para 2.
24 Human Rights Committee, ‘General Comment No. 34. Article 19: Freedom of Opinion and Expression’ (United Nations 2011) CCPR/C/GC/34 paras 51–52. Similarly, in Xxxx v. Canada, the Human Rights Committee had declared that “restrictions on expression which may fall within the scope of article 20 must also be permissible under article 19, paragraph 3”. Xxxxxxx Xxxx v Canada [2000] Human Rights Committee CCPR/C/70/D/736/1997 [10.6]. Prior to such clarifications, in fact, international law experts disagreed on whether art 20, para 2, was to be recognized as a mere elaboration of art 19, para 3, or whether it was to be interpreted as a different and additional basis for the imposition of restrictions on freedom of expression: see Xxxxx Xxxxxxxxx and Xxxxxx Xxxxxxxx, ‘Article 20 of the International Covenant on Civil and Political Rights’ in Xxxxxx Xxxxxxx (ed), Striking a Balance. Hate Speech, Freedom of Expression and Non-Discrimination (Article 19 1992) 30.
25 Xxxxx Xxxxxxx, ‘Contribution to OHCHR Initiative on Incitement to National, Racial, or Religious Hatred’ (UN OHCHR 2011 Expert workshop on the prohibition of incitement to national, racial or religious hatred, Vienna, February 2011) <xxxxx://xxx0.xxxxx.xxx/xxxxxxx/xxxxxx/xxxxxxx/xxxxxxxx0000_xxcpr/docs/ContributionsOthers/X.Xxxxxxx.doc> accessed 26 December 2022.
26 Xxxxxx Xxxxxxxxx, ‘Blasphemy versus Incitement: An International Law Perspective’ in Xxxxxxxxxxx X Xxxxxx, Xxxxx Xxxxxx and Xxxxx Xxxx (eds), Profane: Sacrilegious Expression in a Multicultural Age (University of California Press 2014) 285.
27 ibid 297–303. According to the Human Rights Committee, “the action advocated through incitement speech does not have to be committed for said speech to amount to a crime. Nevertheless, some degree of risk of harm must be identified”. Human Rights Committee, ‘Rabat Plan of Action on the Prohibition of Advocacy of National, Racial or Religious Hatred That Constitutes Incitement to Discrimination, Hostility
A problematic aspect of the provision at hand is, nonetheless, represented by the definition of the objects themselves of the inflammatory conduct: that is, the definition of what “discrimination”, “hostility”, and “violence” are, as well as their relationship with hatred itself. In this respect, the Human Rights Committee has not formally provided any further clarifications. According to an influential study prepared by the NGO Article 19 for the UN, nevertheless, “discrimination” should be understood as “any distinction, exclusion, restriction or preference” based on the membership of a certain category or group of persons, “which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of human rights and fundamental freedoms”;28 “violence” is defined as “the intentional use of physical force or power … that either results in or has a high likelihood of resulting in injury, death, psychological harm, maldevelopment, or deprivation”;29 finally, “hostility” is to be distinguished from “hatred” in that, where the latter is a “state of mind” characterized by intense and irrational emotions of opprobrium, enmity and detestation, the former rather implies a “manifested action” which is, therefore, an outward and material projection of hatred itself.30
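Although neither the Covenant nor the Human Rights Committee prescribes any operational formula, the cumulative structure emerging from the above (advocacy of hatred on a covered ground, incitement to one of the three qualified harms, and a concrete risk that the audience will act upon it) can be rendered schematically. The following Python sketch is purely illustrative and hypothetical: the field names, the numeric risk threshold, and the overall design are editorial assumptions introduced solely to visualize the structure of Article 20, paragraph 2, and do not reproduce any existing legal test or moderation system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Harm(Enum):
    """The three qualified harms listed in Article 20(2) ICCPR."""
    DISCRIMINATION = auto()  # distinction, exclusion or restriction impairing equal rights
    HOSTILITY = auto()       # a "manifested action", i.e. the outward projection of hatred
    VIOLENCE = auto()        # intentional use of physical force or power


@dataclass
class Utterance:
    """Hypothetical description of a statement under assessment (editorial assumption)."""
    advocates_hatred: bool        # advocacy of national, racial or religious hatred
    protected_ground: str         # e.g. "national", "racial", "religious"
    incited_harm: Harm | None     # which of the three qualified harms is incited, if any
    likelihood_of_harm: float     # assessed probability that the audience will act (0..1)


def falls_under_article_20(u: Utterance, risk_threshold: float = 0.5) -> bool:
    """Illustrative only: Article 20(2) applies when advocacy of hatred on a covered
    ground amounts to incitement to one of the three harms AND there is a concrete
    risk that the audience will act on it. The numeric threshold is a placeholder
    for the contextual 'imminent risk or likelihood' assessment, which in practice
    is a qualitative legal judgment, not a probability score."""
    return (
        u.advocates_hatred
        and u.protected_ground in {"national", "racial", "religious"}
        and u.incited_harm is not None
        and u.likelihood_of_harm >= risk_threshold
    )


if __name__ == "__main__":
    example = Utterance(
        advocates_hatred=True,
        protected_ground="religious",
        incited_harm=Harm.HOSTILITY,
        likelihood_of_harm=0.7,
    )
    print(falls_under_article_20(example))  # True under these assumed facts
```

The sketch is deliberately reductive: as the remainder of this Chapter emphasizes, each of these elements calls for a contextual, qualitative assessment that resists mechanical encoding, a tension that becomes central once moderation is delegated to automated systems.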
The second essential provision concerning hate speech regulation within the international human rights framework is represented by Article 4 ICERD,31 which presents at least two fundamental differences from Article 20, paragraph 2. The first difference concerns the protected grounds of discrimination: whereas the ICCPR addresses advocacy of hatred based on “national, racial or religious” grounds, the ICERD ignores the phenomenon of religious hate speech, focusing rather on “race”, “colour”, and “ethnic origin”. The second difference concerns the reaction against hate speech required by the Convention: indeed, whereas the ICCPR simply obliges states to “prohibit” the conducts described above, leaving to them the choice to resort to civil, administrative, or criminal sanctions,32 the ICERD compels them to adopt the latter.
or Violence’ (United Nations 2013) A/HRC/22/17/Add.4 para 29. Similarly, the “Camden Principles” state that “incitement” refers to “statements about national, racial or religious groups which create an imminent risk of discrimination, hostility or violence against persons belonging to those groups” (emphasis added). Article 19, ‘The Camden Principles on Freedom of Expression and Equality’ (April 2009) <xxxxx://xxx.xxxxxxx00.xxx/xxxx/xxxxx/xxxx/xxxxxxxxx/xxx-xxxxxx-xxxxxxxxxx-xx-xxxxxxx-xx-xxxxxxxxxx-xxx-xxxxxxxx.xxx> accessed 27 December 2022, principle 12.
28 Article 19, ‘Towards an Interpretation of Article 20 of the ICCPR: Thresholds for the Prohibition of Incitement to Hatred’ (Regional expert meeting on article 20, Vienna, 9/02 2010) 7
<xxxxx://xxx0.xxxxx.xxx/xxxxxxx/xxxxxx/xxxxxxx/xxxxxxxx0000_xxxxx/xxxx/XXX0Xxxxxxxxx.xxx> accessed 27 December 2022.
29 ibid.
30 ibid.
31 With regard to Article 4 ICERD, see Hare (n 21); Xxxxxxx Xxxxxxxxxx, The International Convention on the Elimination of All Forms of Racial Discrimination: A Commentary (Oxford University Press 2016); Xxxx Xxxxx Xxxxxxx, ‘Racial Speech and Human Rights: Article 4 of the Convention on the Elimination of All Forms of Racial Discrimination’ in Xxxxxx Xxxxxxx (ed), Hate Speech, Freedom of Expression and Non-Discrimination (Article 19 1992).
32 As a matter of fact, the Rabat Plan of Action explicitly states: “Criminal sanctions related to unlawful forms of expression should be seen as last resort measures to be applied only in strictly justifiable situations”. Human Rights Committee, ‘Rabat Plan of Action’ (n 27) 34.
Article 4 ICERD opens with a general condemnation of all propaganda and organizations “which are based on ideas or theories of superiority of one race or group of persons of one colour or ethnic origin, or which attempt to justify or promote racial hatred and discrimination in any form” and requires states parties to adopt immediate and positive measures to eradicate not only acts of discrimination but also all incitement to such discrimination. To reach this end, the provision orders that, with “due regard” to the principles embodied in the Universal Declaration of Human Rights and within Article 5 ICERD,33 states parties declare as offences punishable by law “all dissemination of ideas based on racial superiority or hatred” and “incitement to racial discrimination, as well as acts of violence or incitement to such acts”.34
In its General Recommendation No. 35, nonetheless, the Committee on the Elimination of Racial Discrimination specifically addressed the interpretation and scope of application of Article 4.35 Most notably, the Committee underlined that criminalization should only be resorted to in the most severe cases of racist expressions and should be enacted granting due respect to the principles of legality, proportionality and necessity.36 Additionally, whereas the Committee had previously attached to Article 4 a strict or absolute liability regime,37 General Recommendation No. 35 adopted a much more careful approach. Indeed, at least with respect to the conduct of incitement, it explicitly required that states parties take into account the intention of the speaker, as well as “the imminent risk or likelihood that the conduct desired or intended by the speaker will result from the speech in question”.38
33 “The phrase due regard implies that, in the creation and application of offences, as well as fulfilling the other requirements of article 4, the principles of the Universal Declaration of Human Rights and the rights in article 5 must be given appropriate weight in decision-making processes. The due regard clause has been interpreted by the Committee to apply to human rights and freedoms as a whole, and not simply to freedom of opinion and expression, which should however be borne in mind as the most pertinent reference principle when calibrating the legitimacy of speech restrictions”. Committee on the Elimination of Racial Discrimination (n 22) para 19.
34 Unsurprisingly, many states parties have adopted reservations to the ICERD or have chosen approaches diverging from that of the Committee on the Elimination of Racial Discrimination because of concerns and/or constitutional incompatibilities with art 4 as interpreted by the Committee itself: see Xxxxxxx (n 18) 53–60. With respect to the relationship between art 4 ICERD and the United States constitutional system, see Xxxx X Xxxxxxx, ‘Public Response to Racist Speech: Considering the Victim’s Story’ (1989) 87 Michigan Law Review 2320.
35 Committee on the Elimination of Racial Discrimination (n 22). With respect to General Recommendation No. 35, see Xxxxxxx XxXxxxxxx, ‘General Recommendation 35 on Combating Racist Hate Speech’ in Xxxxx Xxxxx and Xxxxxxxxx Xxxxxxxx (eds), Fifty Years of the International Convention on the Elimination of all Forms of Racial Discrimination: A Living Instrument (Manchester University Press 2017). One of the paramount goals of the Recommendation was to reconcile the ICERD with Articles 19 and 20 ICCPR. In fact, the Convention had previously been regarded as an outlier within the field, due to its reliance on the tools of criminal law as a means to fight racism.
36 Committee on the Elimination of Racial Discrimination (n 22) para 12.
37 According to a 1983 study of the Committee, “what is penalized … is the mere act of incitement, without any reference to any intention on the part of the offender or the result of such incitement, if any”. Committee on the Elimination of Racial Discrimination, ‘Positive Measures Designed to Eradicate All Incitement to, or Acts of, Racial Discrimination: Implementation of the International Convention on the Elimination of All Forms of Racial Discrimination, Article 4’ (United Nations 1986) CERD/2 para 96.
38 Committee on the Elimination of Racial Discrimination (n 22) para 16.
However, General Recommendation No. 35 has not fully resolved the debate concerning the criminalization of the conduct of dissemination of ideas based on racial superiority or hatred, with respect to which the threshold is arguably lower pursuant to the text of Article 4. Indeed, the General Recommendation, although recognizing the need for a range of contextual factors to be taken into account in order to avoid an excessive restriction of freedom of expression, including the objectives of the speech and thus the intention of the speaker,39 seemingly does not extend to this conduct the requirement of likelihood or existence of a high risk of impact. This seems to be implicitly confirmed by the General Recommendation where it distinguishes the two conducts, declaring that whereas the provisions of Article 4 on dissemination of ideas “attempt to discourage the flow of racist ideas upstream”, those on incitement “address their downstream effects”.40
The framework resulting from the ICCPR and the ICERD has provided a fundamental impulse, on a global scale, both to the definition of the phenomenon of hate speech and to the adoption of legal responses to it. First, the two instruments offer an insight into the variety of conducts pertaining to the umbrella term “hate speech”, most notably by addressing both the case of “incitement” (to discrimination, violence, or hostility) and that of “dissemination of ideas”. Second, the two provisions suggest what the response of the law can (and should) be, by requiring states to scale the measures adopted based on the seriousness of the conduct and on a range of contextual features and conditions. Third, both Article 20 ICCPR and Article 4 ICERD attach to the notion of “hate” a core content by identifying, in particular, who the targets of hatred must be in order for hate speech to be relevant under international law: that is, those individuals and groups that are subjected to victimization and discrimination due to a particular identifying feature.
In fact, the international human rights regime on hate speech is rather sectoral and narrow in scope if compared to the legal regimes actually developed, in the following years, across regional and domestic frameworks. Many jurisdictions adopting hate speech regulations have most notably extended the scope of the grounds of discrimination addressed, providing, for example, for measures also encompassing sexist, homophobic, transphobic, or ableist speech.
Nonetheless, the historical role of the ICCPR and ICERD in setting the standards and in propelling state action in this field has been remarkable. Moreover, one of the most relevant merits of the treaties has possibly been the establishment of a direct link between hate speech and the violation of human rights and of the paramount principle of non-
39 ibid 15. In this respect, however, the UN High Commissioner for Human Rights had expressed a few years before a very different view, arguing that under art 4 ICERD “the dissemination of the idea itself is what attracts sanction without any further requirement about its intent or impact”. United Nations High Commissioner for Human Rights, ‘Incitement to Racial and Religious Hatred and the Promotion of Xxxxxance’ (United Nations 2006) A/HRC/2/6.
40 Committee on the Elimination of Racial Discrimination (n 22) para 30. Be that as it may, the question regarding the requirement of the element of likelihood still represents quite an open debate. See, on this point, Article 19, ‘Prohibiting Incitement to Discrimination, Hostility or Violence’ (2012) <xxxxx://xxx.xxxxxxx00.xxx/xxxx/xxxxx/xxxxxxxxxxxx/0000/XXXXXXX-00-xxxxxx-xx-xxxxxxxxxxx-xx-xxxxxxxxxx.xxx> accessed 28 December 2022; Xxxxxxxx Xxxxxxxx, Online Political Hate Speech in Europe: The Rise of New Extremisms (Xxxxxx Xxxxx Publishing 2020) 36.
discrimination,41 a connection which has become more and more explicit throughout the subsequent years. Thus, for instance, the UN Special Rapporteur on minority issues highlighted in 2015 that hate speech and incitement to hatred and violence are capable of damaging the “entire social fabric, unity and stability of societies” and that tolerance of and inaction against them “reinforce the subordination of targeted minorities, making them more vulnerable to attacks”.42
2.2.3. Hate speech in Europe
Although the international human rights framework has played a paramount role in the worldwide development of hate speech regulation, regional international and supranational frameworks have also been fundamental – perhaps even more so – in orienting state policies at a more decentralized level. In the context of European countries, both the Council of Europe and the European Union have indeed been extremely influential in this field, as will be underlined throughout the following subsections.
2.2.3.1. The Council of Europe
A variety of sources pertaining to the system of the Council of Europe (CoE) address the issue of hate speech from different perspectives and angles. The most relevant source is inevitably represented by the ECHR,43 whose provisions have stimulated the ECtHR to take an active role in shaping the way hate speech is dealt with in the Old Continent.
At least two provisions represent the backbone of the development of the Court’s case law in this field, that is, Article 10 on freedom of expression and Article 17 on the abuse of rights. To a certain extent, especially in recent years, Article 14 on the right to non-discrimination has also garnered increasing importance.44 Although recognizing that freedom of expression is applicable also to ideas and information “that offend, shock or disturb the State or any sector of the population”,45 the ECtHR has in fact progressively recognized the possibility for states to impose restrictions and limitations upon such freedom when it comes to confronting the phenomenon of hate speech. In this respect, the Court of Strasbourg has developed a two-tiered approach46 by which, while it
41 Besides, both the ICCPR and the ICERD address the principle of the right to non-discrimination, respectively at art 26 and art 2. On the relationship between hate speech regulation and the right to (substantive) equality, see infra, §2.5.
42 Xxxx Xxxxx, ‘Report of the Special Rapporteur on Minority Issues’ (United Nations 2015) A/HRC/28/64 para 25.
43 Convention for the Protection of Human Rights and Fundamental Freedoms 1950.
44 The role of non-discrimination under the ECHR human rights framework was significantly extended following the adoption of Additional Protocol No. 12 in 2000, prohibiting contracting states from any form of discrimination with respect to the enjoyment of any right recognized by the state (and not only with respect to the fundamental rights and freedoms set directly within the ECHR): see Protocol No. 12 to the Convention for the Protection of Human Rights and Fundamental Freedoms 2000 (ETS No 177) art 1.
45 Xxxxxxxxx v the United Kingdom [1976] ECtHR 5493/72, Series A 24 [49].
46 Xxxxx Xxxxx, ‘Attacking Hate Speech under Article 17 of the European Convention on Human Rights’ (2007) 25 Netherlands Quarterly of Human Rights 641; Xxxxx Xxxxxxxxx, ‘Protecting Freedom of Expression: The Challenge of Hate Speech in the European Court of Human Rights Case Law Symposium:
generally addresses the matter of the consistency of such measures by applying the three-part test set by Article 10, paragraph 2, of the Convention, it nevertheless classifies the most egregious forms of hate speech as altogether amounting to forms of abuse of freedom of expression under Article 17.
When applying Article 10, the ECtHR, in order to evaluate the consistency of the imposition of formalities, conditions, restrictions or penalties (civil, administrative, and/or criminal) on freedom of expression to combat hate speech, must assess the existence of a prior legislation setting that measure, the pursuit of one of the legitimate aims indicated by the ECHR itself,47 and the necessity of such a measure in a democratic society (i.e., the respect of the principle of proportionality). In this respect, the ECtHR takes into account a variety of factors, including the purpose of the speaker, the content of the utterance, the context where the utterance is expressed, the identity of the speaker, the composition of the audience, the medium employed, as well as the nature and seriousness of the measure adopted and, therefore, of the state interference upon freedom of expression.48
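For illustrative purposes only, the cumulative structure of this assessment can be rendered as a simple checklist. The following sketch is hypothetical: the factor labels and the way they are combined are editorial assumptions meant to visualize the reasoning described above, not a reproduction of the Court’s actual methodology or of any platform’s moderation logic.

```python
from dataclasses import dataclass, field


ECHR_CONTEXTUAL_FACTORS = {
    "purpose of the speaker",
    "content of the utterance",
    "context of the utterance",
    "identity of the speaker",
    "composition of the audience",
    "medium employed",
    "nature and severity of the interference",
}


@dataclass
class Article10Assessment:
    """Hypothetical checklist mirroring the three cumulative conditions of
    Article 10(2) ECHR as described above (editorial rendering, not the Court's)."""
    prescribed_by_law: bool
    legitimate_aim: str | None   # e.g. "protection of the rights of others"
    contextual_factors: dict[str, str] = field(default_factory=dict)

    def necessary_in_a_democratic_society(self) -> bool:
        # Placeholder: in the Court's practice this is a holistic proportionality
        # judgment weighing the factors below, not a box-ticking exercise. The
        # sketch merely checks that each factor has been addressed at all.
        return ECHR_CONTEXTUAL_FACTORS.issubset(self.contextual_factors.keys())

    def interference_is_compatible(self) -> bool:
        # The three conditions of Article 10(2) are cumulative.
        return (
            self.prescribed_by_law
            and self.legitimate_aim is not None
            and self.necessary_in_a_democratic_society()
        )


if __name__ == "__main__":
    assessment = Article10Assessment(
        prescribed_by_law=True,
        legitimate_aim="protection of the rights of others",
        contextual_factors={factor: "…" for factor in ECHR_CONTEXTUAL_FACTORS},
    )
    print(assessment.interference_is_compatible())  # True under these assumed inputs
```

The point of the sketch lies precisely in its inadequacy: the “necessity in a democratic society” prong is a holistic proportionality judgment over all the listed factors, which cannot be reduced to verifying that each factor has been considered, a limit that will prove relevant when discussing automated content moderation.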
Article 17, conversely, establishes that nothing in the Convention “may be interpreted as implying … any right to engage in any activity or perform any act aimed at the destruction of any of the rights and freedoms set forth [t]herein or at their limitation to a greater extent than is provided for in the Convention”. In other words, the provision prohibits “the harmful exercise of a right by its holder in a manner that is manifestly inconsistent with or contrary to the purpose for which such right is granted/designed”.49 In the context of hate speech, this means that there are some cases where utterances are of such a nature as to constitute, in themselves, a violation of other interests protected by the Convention.
The origins of such an approach can be traced back to 1979, when the then European Commission of Human Rights (ECommHR) delivered a decision of inadmissibility for the case of Glimmerveen and Xxxxxxxxx v the Netherlands.50 In that case, the ECommHR had to assess whether the conviction of the applicant, president of a far-right political
Comparative Law of Hate Speech’ (2009) 17 Xxxxxxx Journal of International and Comparative Law 427; Xxxxxx Xxxxxx and Xxxx Xxxxxxxx, ‘The Abuse Clause and Freedom of Expression in the European Human Rights Convention: An Added Value for Democracy and Human Rights Protection?’ (2011) 29 Netherlands Quarterly of Human Rights 54; Xxxxxxx Xxxxx, ‘Dangerous Expressions: The ECHR, Violence and Free Speech’ (2014) 63 International & Comparative Law Quarterly 491; Xxxxxxx Xxxxxx, ‘L’Hate Speech a Strasburgo: Il Pluralismo Militante Del Sistema Convenzionale’ (2017) 4 Quaderni costituzionali 963; Marina Castellaneta, ‘La Corte Europea Dei Diritti Umani e l’applicazione Del Principio Dell’abuso Del Diritto Nei Casi Di Hate Speech’ (2017) 11 Diritti umani e diritto internazionale 745.
47 I.e., national security; territorial integrity; public safety; prevention of disorder or crime; protection of health or morals; protection of the reputation or rights of others; prevention of the disclosure of information received in confidence; maintenance of the authority and impartiality of the judiciary. Convention for the Protection of Human Rights and Fundamental Freedoms art 10, para 2.
48 See, among others, Xxxx Xxxxx, Manual on Hate Speech (Council of Europe Publishing 2009).
49 European Court of Human Rights, ‘Guide on Article 17 of the European Convention on Human Rights
– Prohibition of Abuse of Rights’ (Council of Europe 2022) <xxxxx://xxx.xxxx.xxx.xxx/Xxxxments/Guide_Art_17_ENG.pdf> accessed 6 April 2023.
50 Xxxxxxxxxxx and Hagenbeek v the Netherlands [1979] ECommHR 8348/78, 8406/78, 18 Decisions and Reports 187.
party, for the possession – with a view to distribution – of leaflets inciting to racial discrimination was consistent with the ECHR. The Commission concluded that the ideas expressed within those leaflets were not at all compatible with a number of conventional values, namely those enshrined in Article 14 on the prohibition of discrimination, so that the expression of such views amounted to activity prohibited within the meaning of Article 17.51 Subsequent case law by the ECtHR often referred to the relation between hate speech and abuse of rights, sometimes applying Article 17 as a “guillotine” provision and sometimes using it as a parameter to interpret Article 10 itself.52
The application of Article 17, moreover, often depends directly on the practical circumstances of the single case at issue. Nonetheless, some common patterns have emerged. Indeed, as noted by the ECtHR itself in the case of Xxxxx Xxxxxx v Russia, Article 17 has been found to be applicable notably to “statements denying the Holocaust, justifying a pro-Nazi policy, alleging the prosecution of Poles by the Jewish minority and the existence of inequality between them, or linking Muslims with a grave act of terrorism”.53 Thus, for instance, the cases of Xxxxxx and X’xxxx M’bala54 concerned precisely the application of Article 17 to antisemitic propaganda (and satire), by confirming the conviction, respectively, of the author of a series of articles calling for the exclusion of Jewish people from social life and of a French comedian who had staged a sketch which, in the opinion of the Court, had taken on the nature of an antisemitic rally rather than of an entertainment show. Similarly, in Norwood,55 the Strasbourg judges held that the applicant’s display of a poster associating the image of the Twin Towers in flames with the symbol of a crescent and star in a prohibition sign represented, especially in the immediate wake of 9/11, an abuse of rights.
As for the subject of Holocaust denial, the ECtHR has repeatedly held that the utterance of such ideas is not covered under Article 10 ECHR not only because it inherently represents an attack on the Jewish community but also because it goes against historically
51 ibid 195–196. As a matter of fact, subsequent case law on antisemitic speech and on Holocaust denial initially took a detour from the reasoning expressed in Glimmerveen. For instance, in X v the Federal Republic of Germany [1982] ECommHR 9235/81, 29 Decisions and Reports 194, concerning a civil lawsuit against a person who had displayed a noticeboard describing the Holocaust as a “Zionistic swindle”, the ECommHR chose art 10 as the relevant parameter. Besides, in the two subsequent decisions of Xxxxxx v the Federal Republic of Germany [1988] ECommHR 12194/86, 56 Decisions and Reports 205 and Xxxxx v Germany [1995] ECommHR 25096/94, the Commission adopted a hybrid approach, as it found the petitions manifestly ill-founded under art 10, para 2, while nonetheless interpreting that provision in the light of art 17. Thus, although art 17 was taken into account not as a principle capable on its own of determining the inadmissibility of the request, the remark that the condemned acts had in fact breached the duties enshrined within that provision was employed as an argument to uphold the satisfaction of the proportionality test. Xxxxxx and Xxxxx thus seemingly foreshadowed a subsequent return of the ECommHR and, subsequently, of the ECtHR, towards the original model set in Glimmerveen.
52 In this respect, see the already mentioned decisions of Xxxxxx v the Federal Republic of Germany (n 51); Xxxxx v Germany (n 51). See also, ex multis, Xxxxxx v Romania (dec) [2012] ECtHR 16637/06 [23]; Xxxxx and Xxxxxx v Bulgaria [2021] ECtHR 29335/13 [105]; Bonnet v France (dec) [2022] ECtHR 35364/19.
53 Xxxxx Xxxxxx v Russia (dec) [2007] ECtHR 35222/04 [4].
54 M’bala M’bala v France (dec) [2015] ECtHR 25239/13, ECHR 2015-VIII.
55 Xxxxxxx v the United Kingdom (dec) [2004] ECtHR 23131/03, ECHR 2004-XI. However, see, contra, Zemmour v France [2022] ECtHR 63539/19, where the Court chose to address the case based on art 10 rather than based on art 17.
ascertained facts.56 In line with this strand of case law, in its 2020 judgment in the case of Xxxxx and others v France,57 the ECtHR held that the dissolution of some political movements and associations expressing intensely (and aggressively) xenophobic, antisemitic, and revisionist ideas was consistent with the Convention pursuant to Article 17. In that case, indeed, the Strasbourg judges concluded that those groups, because their conducts amounted to abuse of rights, were not covered by Article 11 on the right of association as interpreted in the light of Article 10.58
Besides, the reference to the concept of “abuse of rights” is, in fact, a characteristic feature of the European multi-level human rights framework, being also acknowledged and recognized by Article 54 of the Charter of Fundamental Rights of the European Union (CFREU), and thus represents a further element distinguishing the approach to hate speech – and, in general, to fundamental rights and liberties – taken on the Eastern side of the Atlantic from that of the US. Indeed, the liberal perspective on constitutional freedoms characterizing the US is not compatible with the very notion of abuse of rights.59
In addition to the ECHR, the CoE framework has addressed the matter of hate speech also through the drafting of other policy documents and treaty-based instruments,60 often suggesting that contracting states take positive actions against it. Thus, for instance, the Additional Protocol of 2003 to the 2001 Budapest Convention on Cybercrime61 obliges contracting states to punish the conduct of distributing or making available through a computer system racist and xenophobic material, defined as “any written material, any image or any other representation of ideas or theories, which advocates, promotes or incites hatred, discrimination or violence” based on the grounds of “race”, colour, descent, national or ethnic origin, and religion.62
Furthermore, it is important to note that, whereas the Convention neither defines nor mentions the term and thus leaves to the Strasbourg Court the complex task of identifying
56 Xxxxxxx v France (dec) [2003] ECtHR 65831/01, ECHR 2003-IX; Witzsch v Germany (2) (dec) [2005] ECtHR 7485/03. See, in this respect, Xxxxx Xxxxx, ‘Holocaust Denial before the European Court of Human Rights: Evolution of an Exceptional Regime’ (2015) 26 European Journal of International Law 237. See, contra, Perinçek v. Switzerland, where the denial of the Armenian genocide was not considered to be able to trigger per se art 17 ECHR: Peri̇nçek v Switzerland [2015] ECtHR [GC] 27510/08, ECHR 2015. See, in this regard, Xxxxx Xxxxxxx, ‘Disputing the Indisputable: Genocide Denial and Freedom of Expression in Perincek v. Switzerland’ (2016) 25 Nottingham Law Journal 141.
57 Xxxxx and others v France [2020] ECtHR 77400/14, 34532/15, 34550/15.
58 “La Cour conclut que l’État a pu considérer que les associations requérantes et leurs dirigeants poursuivaient des buts prohibés par l’article 17 et qu’ils avaient abusé de leur liberté d’association, en tant qu’organisation radicale menaçant le processus politique démocratique, en contradiction avec les valeurs de tolérance, de paix sociale et de non-discrimination qui sous-tendent la Convention. Dans leur dissolution, la Cour voit l’expression de décisions prise au regard d’une connaissance approfondie de la situation politique interne et en faveur d’une ‘démocratie apte à se défendre’ … dans un contexte de persistance et de renforcement du racisme et de l’intolérance en France et en Europe”. ibid 138.
59 See, in this respect, Xxxxxxxx Xxxxxxxxxxx and Xxxxxx Xxxxxxxxx, Disinformation and Hate Speech (Bocconi University Press 2020).
60 Xxxxxxx XxXxxxxxx, ‘The Council of Europe against Online Hate Speech: Conundrums and Challenges’ (Council of Europe 2013) MCM(2013)005.
61 Convention on Cybercrime 2001 (ETS No 185).
62 Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems 2003 (ETS No 189) art 2, para 1.
what does and does not constitute hate speech,63 other CoE policy documents are rather relevant in that they offer a clearer insight into this aspect. First, on 30 October 1997, the Committee of Ministers delivered its Recommendation No. R (97) 20 on “Hate Speech”,64 the Appendix of which contained a series of principles meant to guide the action of CoE states.
According to the document, hate speech encompasses
all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including: intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin.65
In this respect, the Recommendation presents some significant features if compared with the ICCPR and the ICERD. First, as to the types of conduct considered, it includes within the notion of hate speech not only those expressions that incite hatred, but also those that simply spread, promote or justify such hatred. Second, as regards the grounds of discrimination to be addressed, the Recommendation, though focusing specifically on racism, seemingly leaves the door open to an expansion of the scope of the term “hate speech” by featuring an open clause. Indeed, the following years saw a progressive expansion of the set of protected categories.
Thus, Recommendation No. R (2010) 5 on Measures to Combat Discrimination on Grounds of Sexual Orientation or Gender Identity, adopted on 31 March 2010,66 included amongst the suggestions to contracting states the adoption of “appropriate measures” against all forms of expression “which may be reasonably understood as likely to produce the effect of inciting, spreading or promoting hatred or other forms of discrimination against lesbian, gay, bisexual and transgender persons” and, most notably, the prohibition of such forms of hate speech.67
Furthermore, General Policy Recommendation No. 15 on Combating Hate Speech,68 adopted in December 2015 by the European Commission against Racism and Intolerance (ECRI) of the Council of Europe, contains an even broader notion of “hate speech”, stating that the term
entails the use of one or more particular forms of expression – namely, the advocacy, promotion or incitement of the denigration, hatred or vilification of a person or group of persons, as well as any harassment, insult, negative stereotyping, stigmatization or threat of such person or persons and any justification of all these forms of expression – that is based on a non-exhaustive list of personal characteristics or status that includes “race”,
63 Xxxxxxxxx Xxxxxxx, ‘When To Say Is To Do: Freedom of Expression and Hate Speech in the Case- Law of the European Court of Human Rights’ (Seminar on Human Rights for European Judicial Trainers, Strasbourg, 7 July 2015). In fact, the notion of hate speech within the case law of the ECtHR is not always well-defined.
64 Committee of Ministers of the Council of Europe, ‘Recommendation No. R (97) 20 of the Committee of Ministers to Member States on “Hate Speech”’ (Council of Europe 1997) CM/Rec(97)20.
65 ibid, Appendix, Scope.
66 Committee of Ministers of the Council of Europe, ‘Recommendation No. R (2010) 5 of the Committee of Ministers to Member States on Measures to Combat Discrimination on Grounds of Sexual Orientation or Gender Identity’ (Council of Europe 2010) CM/Rec(2010)5.
67 ibid Appendix, I.B.6.
68 European Commission against Racism and Intolerance, ‘General Policy Recommendation No. 15 on Combating Hate Speech’ (Council of Europe 2015) CRI(2016)5.
colour, language, religion or belief, nationality or national or ethnic origin, as well as descent, age, disability, sex, gender, gender identity and sexual orientation.69
ECRI’s General Policy Recommendation No. 15 thus extends the notion of hate speech to a wide range of forms of expression, including harassment, insult, negative stereotyping, stigmatization and threats, and also expands significantly the list of grounds of discrimination to be considered. Additionally, it also clarifies that such a list is “non-exhaustive”. ECRI’s definition has acquired a paramount importance within the framework of the Council of Europe (and, generally, within the European landscape), thus becoming a fundamental standard for the legal and academic debate on hate speech in the Old Continent.70
Thus, coherently, the recent Recommendation No. R (2022) 16 on Combating Hate Speech of the Committee of Ministers71 declaredly built upon ECRI’s General Policy Recommendation No. 15 and adopted a similar definition of hate speech encompassing “all types of expression that incite, promote, spread or justify violence, hatred or discrimination … or that denigrates” persons “by reason of their real or attributed personal characteristics such as ‘race’, colour, language, religion, nationality, national or ethnic origin, age, disability, sex, gender identity and sexual orientation”. Although admittedly, in this case, the list of protected grounds of discrimination is not declared to be “non-exhaustive”, the enlargement of the scope of the term “hate speech”, especially when compared to Recommendation No. R (97) 20, is quite remarkable.
2.2.3.2. The European Union
Within the European Union, Council Framework Decision 2008/913/JHA72 represents the most significant piece of legislation concerning hate speech, as it requires Member States of the EU to ensure the criminalization of a range of conducts pertaining to the phenomenon. In this respect, the text of the Framework Decision is in great part inspired by the international standards set by the ICCPR and the ICERD, as it obliges Member States to punish the public incitement to violence or hatred against persons or groups “defined by reference to race, colour, religion, descent or national or ethnic origin”.73
69 ibid 9.
70 See, for example, Xxxxxxxx (n 40) 39; Xxxx Xxxxxx and Xxxxxx Xxxxxx, ‘Hate Speech in the Public Online Debate’ (The Danish Institute for Human Rights 2017) 17.
71 Committee of Ministers of the Council of Europe, ‘Recommendation No. R (2022) 16 of the Committee of Ministers to Member States on Combating Hate Speech’ (Council of Europe 2022) CM/Rec(2022)16.
72 Council Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law 2008 (OJ L 328/5).
73 ibid 1, para 1, lett (a). However, the Framework Decision also clarifies that Member States may decide to subject the possibility of punishing such instances of hate speech to the condition that such conducts are “carried out in a manner likely to disturb public order or which is threatening, abusive or insulting”, thus leaving to national jurisdictions quite a relevant margin of discretion as regards the limits of criminalization of the phenomenon. Quite interestingly, the 2016 Code of Conduct on Illegal Hate Speech, drafted by the European Commission together with a range of IT companies, refers directly to Framework Decision 2008/913/JHA, recognizing that “hate speech” should be defined as “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference
Additionally, the Framework Decision also makes it mandatory to define as criminal offences the conduct of publicly condoning, denying or grossly trivializing the crimes of genocide, crimes against humanity, and war crimes as defined within the Statute of the International Criminal Court74 as well as the crimes defined in Article 6 of the Charter of the International Military Tribunal appended to the London Agreement of 1945.75 The document thus formally aims at introducing within the legal framework of all EU Member States the crime of denialism,76 quite in line, altogether, with the case law of the ECtHR on Holocaust denial. In such cases, however, the conduct should only be punishable under the condition that the condoning, denial, or gross trivialization concerns crimes that have been established by a final decision of a domestic or international court.77
A striking aspect of the Framework Decision is that, evidently, it only encompasses forms of hate speech on grounds of racial and religious discrimination, thus leaving behind many other potential targets. The reason behind this is connected to the rules on EU competences, which generally exclude the field of criminal law. In order to be able to enact the Framework Decision, and thus in order to be able to impose upon Member States the duty to make hate speech conducts punishable, its drafters built upon the old Article 29 of the Treaty on the European Union (TEU),78 conflated today within Article 67 of the Treaty on the Functioning of the European Union (TFEU), pursuant to which the EU “shall endeavour to ensure a high level of security through measures to prevent and combat crime, racism and xenophobia … if necessary, through the approximation of criminal laws”.79 The specific reference to racism and xenophobia thus made it impossible for the EU lawmakers to include within the Framework Decision other forms of discrimination as well.80
to race, colour, religion, descent or national or ethnic origin”. Code of Conduct on Countering Illegal Hate Speech Online 2016.
74 Rome Statute of the International Criminal Court 1998 arts 6–8.
75 Charter of the International Military Tribunal appended to the Agreement by the government of the United Kingdom of Great Britain and Northern Ireland, the government of the United States of America, the provisional government of the French Republic and the government of the Union of Soviet Socialist Republics for the prosecution and punishment of the major war criminals of the European Axis (UN Treaty Series No 251) 284, art 6.
76 Xxxxx Xxxxx, ‘From Introduction to Implementation: First Steps of the EU Framework Decision 2008/913/JHA against Racism and Xenophobia’ in Xxxx Xxxxxxx, Xxxxxxxx Xxxxx and Xxxx Xxxxxx (eds), Holocaust and Genocide Denial (Routledge 2017).
77 Also with regard to this point, the Framework Decision is arguably consistent with the general approach of the ECtHR in this matter. Indeed, the case law of the Strasbourg Court clearly indicates that the abuse clause of art 17 ECHR only applies to those cases of denialism where the genocide or war crime or crime against humanity represents an historically ascertained fact. See, in particular, Xxxxxxxx and Isorni v France [1998] ECtHR [GC] 24662/94, Reports 1998-VII; Peri̇nçek v Switzerland (n 56).
78 Treaty on the European Union (consolidated version of 2006).
79 Treaty on the Functioning of the European Union art 67, para 3.
80 Indeed, as clarified within the document’s text itself, also the reference to “religion” should be interpreted restrictively, as it is “intended to cover, at least, conduct which is a pretext for directing acts against a group of persons or a member of such a group defined by reference to race, colour, descent, or national or ethnic origin” (emphasis added), meaning that Member States, although they may decide to extend the criminal protection required by the Framework Decision also to all cases of religious discrimination, are only required to do so inasmuch as religion constitutes in the case at hand a proxy for racial discrimination. Framework Decision 2008/913/JHA art 1, para 3.
To address such limitations, as well as to develop a more efficient and unitary action against the spread of the phenomenon, especially via the Internet, the European Commission adopted in December 2021 a Communication81 prompting a Council decision to extend the current list of “EU crimes” under Article 83, paragraph 1, TFEU82 to include hate crimes and hate speech. The extension of the scope of that provision would allow the harmonization of Member States’ criminal regulation of hate speech, namely through the establishment of minimum rules on its definition and the sanctions attached to it, and would thus open the door to the possibility, stressed by the Commission, of also protecting people targeted on other grounds of discrimination, including, in particular, “sex, sexual orientation, age and disability”.83 However, although a majority expressed its favour towards the proposal in March 2022, the Council has so far failed to adopt the suggested decision unanimously, as lamented by the Parliament’s Committee on Civil Liberties, Justice and Home Affairs in the report adopted at the end of November 2023: on this occasion, the Committee suggested inter alia activating the so-called “passerelle clause”, with a view to making Article 83 “subject to reinforced qualified majority rather than the current required unanimity”.84
Besides, the scope of the notion of “hate speech” under EU law differs considerably when moving from the field of criminal law to other fields of the law. For instance, Xxxxxxxxx has highlighted how the CJEU has delivered a range of decisions under labour law recognizing as unlawful, pursuant to the equality directives, employers’ public statements disparaging protected categories and declaring the
81 European Commission, ‘Communication from the Commission to the European Parliament and the Council, A More Inclusive and Protective Europe: Extending the List of EU Crimes to Hate Speech and Hate Crime’ COM(2021) 777 final. The proposal builds notably on the following documents: European Commission, ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. A Union of Equality: Gender Equality Strategy 2020-2025’ COM(2020) 152 final; European Commission, ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Union of Equality: LGBTIQ Equality Strategy 2020-2025’ COM(2020) 698 final; European Commission, ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Union of Equality: Strategy for the Rights of Persons with Disabilities 2021-2030’ COM(2021) 101 final.
82 Xxxx Xxxxxx, ‘Criminalising Hate Crime and Hate Speech at EU Level: Extending the List of Eurocrimes Under Article 83(1) TFEU’ (2022) 33 Criminal Law Forum 85.
83 European Commission, ‘Communication on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (n 81) Annex, recital 6. Quite regrettably, the text of the proposal mentions neither “gender” nor “gender identity”, as it refers directly to the grounds of discrimination mentioned within art 19, para 1, TFEU (i.e., “sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation”). Although this would not prevent the EU from taking actions against transphobic and gender-based hate speech, the lack of inclusion of such grounds within recital 6 of the proposal is seemingly an inadequate response to the calls for protection of the LGBTQIA+ community. As highlighted by Xxxxxx, “in line with societal developments and the EU objective of social inclusiveness or fighting social exclusion … the inclusion of gender or gender identity – already employed, for example, by ECRI – in addition to (the more biological category of) sex would be appropriate”. Peršak (n 82) 98. Besides, such an approach would be more in line with European Commission, ‘LGBTIQ Equality Strategy 2020-2025’ (n 81).
84 European Parliament, ‘Report on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (2023) 2023/2068(INI) point 7.
intention not to employ members of such categories.85 Such an approach, according to Xxxxxxxxx, represents an important tool to fight hate speech also through private, rather than criminal, EU law.86 With respect to media law, the Audiovisual Media Services Directive (AVMSD),87 as subsequently amended by Directive (EU) 2018/1808 (AVMSD Refit Directive),88 requires that both providers of audiovisual media services and providers of online video-sharing platforms put in place measures to reduce the presence and dissemination of content amounting to “incitement to violence or hatred” based on any of the grounds referred to in Article 21 CFREU,89 the latter expressly prohibiting all discrimination based on “sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation”.90
2.2.4. Interim conclusions
The normative frameworks offer an insight into the inherent issues connected to the definition of the phenomenon itself and, consequently, to the building of regulatory responses at the international and regional level. “Hate speech” is, inherently, an umbrella term encompassing multiple and multi-faceted utterances, which jurisdictions may address in different and often conflicting ways.91 However, although the drafting of a universally accepted definition of “hate speech” may thus amount to an insurmountable challenge, some patterns can be identified.
The term “speech” can include a wide range of different types of utterances, to which different forms of regulatory response may correspond. Xxxxxxxxx Xxxxx, amongst others, identifies at least ten clusters of regulatory approaches worldwide, including the adoption of measures against incitement to hatred, against the denial of genocide or other crimes against humanity or war crimes, and measures against simple negative stereotyping or stigmatization.92 Speech, moreover, does not simply include verbal language, but
85 Case C-54/07, Centrum voor gelijkheid van kansen en voor racismebestrijding v Firma Feryn NV [2008] ECLI:EU:C:2008:397; Case C-81/12, Asociaţia Accept v Consiliul Naţional pentru Combaterea Discriminării [2013] ECLI:EU:C:2013:275; Case C-507/18, NH v Associazione Avvocatura per i diritti LGBTI - Rete Lenford [2020] ECLI:EU:C:2020:289.
86 Xxxxxxxxxx Belavusau, ‘Fighting Hate Speech through EU Law’ (2012) 4 Amsterdam Law Forum 20; Xxxxxxxxxx Belavusau, ‘The NH Case: On the “Wings of Words” in EU Anti-Discrimination Law’ (2020) 5 European Papers 1001.
87 Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive), OJ L 95/1.
88 Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities, OJ L 303/69.
89 AVMSD arts 6, 28b. See, in this respect, Xxxxxxxx Xxxxxxxx, Facebook and the (EU) Law: How the Social Network Reshaped the Legal Framework (Springer 2022) 198–201; Xxxxxx Xxxxxxxxx, Xxxxx Xxxxxxx and Xxxxxxxx De Xxxxxxxx, Internet Law and Protection of Fundamental Rights (Bocconi University Press 2022) 147–166.
90 Charter of Fundamental Rights of the European Union OJ C 364/1 2000 art 21.
91 As clearly shown by the landmark judicial saga of LICRA v Yahoo!. See infra, §2.4.2.3.
92 Brown, Hate Speech Law (n 1).
may also include non-verbal forms of expression such as, for instance, the burning of a cross.93 In this respect, the ample and multi-faceted definition contained within ECRI’s General Policy Recommendation No. 15 is, arguably, comprehensive and valid, as it identifies an extremely wide range of speech forms and utterances: it is no coincidence that, in presenting its Communication for extending the list of EU crimes, the European Commission still referred to it to describe the phenomenon the proposal aims to confront.94
Besides, all these forms of speech are similar in that their ultimate goal or effect consists of conveying, disseminating, and perpetrating the systemic discrimination of people or groups of people defined by specific common features. The intensity of such an intent, as well as the likelihood of that goal being achieved, represent the variables identifying the specific form of hate speech to be addressed and, as such, should be taken into account when developing any regulatory strategy to face the phenomenon. Thus, for instance, serious cases of direct incitement to violence may warrant severe action, including the use of criminal sanctions; whereas simpler cases of negative stereotyping may require more limited (if any) intervention of the law. The catchphrase “hate speech”, in this sense, has a sociological, rather than strictly legal, validity, as it includes phenomena which the law must inevitably treat differently.
Be that as it may, the expression remains useful for the literature and for policymakers precisely because it captures the essence of all the different forms of speech mentioned above, that is, their role in the perpetration of traditional dynamics of power between categories and “classes” of people.95 It is precisely in this sense that the term “hate speech” will be understood in the course of the present work, thus focusing on the common character of “hate” rather than upon the multiple possible meanings of “speech”.
In this respect, however, a further caveat is essential, as the word “hate” can itself be subject to a multiplicity of different interpretations.96 In the present context, moreover, the concept of hate is strictly interconnected with that of discrimination. In this sense, the OSCE practical guide to hate crime laws highlighted how in many cases hate crimes and hate speech can be performed by agents who do not, in fact, necessarily feel the sentiment of “hate” and, for this reason, the guide suggests referring to a “bias motive”, rather than a “hate” motive, in order to stress the nature of these phenomena as intrinsically
93 Xxxxxxxxx Xxxxxx, ‘Dignity and Speech: The Regulation of Hate Speech in a Democracy Articles & Essays’ (2009) 44 Wake Forest Law Review 497, 501. See, for example, the notable case of RAV v City of St. Xxxx (n 12).
94 European Commission, ‘Communication on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (n 81) 6. See also Xxxxxxxx Xxxx and others, Study to Support the Preparation of the European Commission’s Initiative to Extend the List of EU Crimes in Article 83 of the Treaty on the Functioning of the EU to Hate Speech and Hate Crime: Final Report (Publications Office of the European Union 2021) 38 <xxxxx://xxxx.xxxxxx.xx/xxx/00.0000/00000> accessed 9 April 2024.
95 Thus, with specific respect to racist speech, Xxxx X Xxxxxxx argues that it “is best treated as a sui generis category, presenting an idea so historically untenable, so dangerous, and so tied to perpetuation of violence and degradation of the very classes of human beings who are least equipped to respond that it is properly treated as outside the realm of protected discourse” (emphasis added). Xxxxxxx (n 34) 2357.
96 See, among others, Xxxxx, ‘What Is Hate Speech?’ (n 1).
discriminatory against very specific protected grounds.97 This perspective is also apparently welcomed by the European Commission, according to which, for both hate speech and hate crime, “it is the bias motivation that triggers the perpetrator’s action”.98
2.3. The transatlantic debate on hate speech regulation
The phenomenon of hate speech as described in the previous section, i.e., a wide range of speech utterances commonly characterized by their inherent goal of perpetrating forms of discrimination based on certain grounds, has triggered strikingly different legal reactions across the globe. Indeed, the choice to adopt measures restricting and/or punishing hate speech touches directly on the constitutional nerve of any jurisdiction, as it necessarily entails a curtailment of that fundamental pillar of democracy represented by freedom of expression: a dramatic choice which echoes the paradox of tolerance famously described in 1945 by philosopher Xxxx Xxxxxx.99 In this respect, the clearest dichotomy, at least among Western democracies, is the one between the model of the “tolerant democracy”, symbolized by the United States, and that of the “militant democracy”, promoted notably by most European countries as well as by the already described EU and ECHR frameworks.100
2.3.1. The liberal approach: the US model of the free marketplace of ideas
Building on Xxxxxx’x paradox of tolerance, Xxxxxxxxx famously described the American constitutional landscape as representing a model of “tolerant society”,101 characterized by
97 “Taken literally, the phrases ‘hate crimes’ or ‘hate motive’ can be misleading. Many crimes which are motivated by hatred are not categorized as hate crimes. Murders, for instance, are often motivated by hatred, but these are not ‘hate crimes’ unless the victim’s protected characteristics were targeted. Conversely, a crime where the perpetrator does not feel ‘hate’ towards the particular victim can still be considered a hate crime. Hate is a very specific and intense emotional state, which may not properly describe most hate crimes … Rather, the perpetrator is motivated by their stereotypes, preconceived ideas or intolerance towards a particular group of people and the protected characteristic(s) they share”. Office for Democratic Institutions and Human Rights, ‘Hate Crime Laws: A Practical Guide’ (2nd edn, OSCE 2022) 17
<xxxxx://xxx.xxxx.xxx/xxxxx/x/xxxxxxxxx/0/0/000000.xxx> accessed 9 January 2023. Although the quoted paragraph is notably focused on hate crimes, the argument also applies, clearly, to hate speech. On the distinction between hate crimes and hate speech, see Xxxxxx (n 3) 9.
98 European Commission, ‘Communication on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (n 81) 7.
99 “Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. – In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies … But we should claim the right even to suppress them … We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant”. Xxxx Xxxxxx, The Open Society and Its Enemies, vol I: The Spell of Plato (Routledge 1945) 226.
100 Pitruzzella and Pollicino (n 59) 54.
101 Xxx X Xxxxxxxxx, The Tolerant Society: Freedom of Speech and Extremist Speech in America (Oxford University Press 1988).
such an inherent primacy of the First Amendment102 that “the free speech idea … remains one of [the US’] foremost cultural symbols”.103 Indeed, as US constitutional law rejects any form of “content” or “viewpoint discrimination”,104 meaning any legislation imposing restrictions and limitations or punishing speech based on the content or viewpoint expressed by the speaker, the idea of adopting hate speech regulation is generally considered to be at odds with the First Amendment.105
In fact, contemporary US jurisprudence on free speech took its first steps at the end of the 1910s, when the Supreme Court had to deal with a series of cases concerning the Espionage Act 1917. At first, based on the “bad tendency test”,106 the justices had upheld a number of convictions under the statute concerning cases of individuals advocating against the participation of the US in World War I. Subsequently, however, the SCOTUS drastically changed its approach. Thus, in 1919, Xxxxxxx v United States abandoned the bad tendency test in favour of the “clear and present danger test”,107 while Xxxxxx v United States contains one of the most well-known excerpts in US free speech history, that is, Justice Xxxxxx’ dissenting opinion containing the celebrated metaphor of free speech as a “free marketplace of ideas”:
But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas – that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That, at any rate, is the theory of our Constitution … I think that we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death, unless they so imminently threaten immediate interference with the lawful and pressing purposes of the law that an immediate check is required to save the country.108
These words, today, are engraved in the American mindset: Xxxxxx’ position, originally expressed in dissent, eventually became predominant.
Thus, according to the US constitutional tradition, truth is considered to be more likely to prevail through open discussion than through the adoption of legal measures aiming at curtailing and eradicating falsehoods outright.109 This applies, of course, to almost any form of “toxic” speech. Clearly, the metaphor of speech as a free marketplace of ideas is
102 “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances”.
103 Bollinger (n 101) 7. On the cultural and legal significance of free speech in the US, as well as on its uniqueness within the international landscape, see Xxxxxxxxx Xxxxxxx, ‘The Exceptional First Amendment’ in Xxxxxxx Xxxxxxxxx (ed), American Exceptionalism and Human Rights (Princeton University Press 2005).
104 See supra, §2.2.1.
105 See, ex multis, Xxxxxxxxxxx v Ohio (n 5); RAV v City of St. Xxxx (n 12); Matal v Tam 582 US (2017).
106 That is, speech could be subjected to regulation (including criminal prosecution) when it had the tendency to cause or incite illegal activity. For an overview of the development of the bad tendency test under the Espionage Act 1917, see Xxxxxxxx X Xxxxx, ‘The Origins of the Bad Tendency Test: Free Speech in Wartime’ (2002) 2002 Supreme Court Review 411.
107 Xxxxxxx v United States 249 US 47 (1919).
108 Xxxxxx v United States 250 US 616 (1919) 630.
109 Xxxxxxxxx (n 15) 1534.
inspired by neoclassical economics, according to which, in a market economy, (rational) consumers are drawn to choose the products that are best suited to their needs and interests so that, at the aggregate level, the best product will end up being the predominant one within the market. Similarly, in the marketplace of ideas, truth and the best opinions, thoughts, and ideologies for society will end up being chosen by the vast majority of (rational) individuals.110
Therefore, the response to phenomena like hate speech should not be the adoption of legal measures to restrict its utterance and spread but, rather, the protection of speech itself and the fostering of “more speech”. In fact, limiting speech through law would be counterproductive, as it could easily backfire.111 As argued by Justice Xxxxxxxx in his concurring opinion for the case of Whitney v. California, “order cannot be secured merely through fear of punishment for its infraction” because “fear breeds repression” and “repression breeds hate”: therefore, “the path of safety lies in the opportunity to discuss freely supposed grievances and proposed remedies, and … the fitting remedy for evil counsels is good ones”.112 As a result, it is essential to avoid coercing silence through law, which Xxxxxxxx considers to be the expression of “the argument of force in its worst form”.113
Nonetheless, the mainstream US liberal approach towards hate speech and its relationship with free speech and the First Amendment have been put into question by several American scholars, not fully content with the choice of granting equal protection to all speech, including that expressing, to quote Xxxxxxx Xxxxx, the “thoughts that we hate”.114 These authors, many of whom take a critical race theory approach to hate speech,115 have most notably highlighted the inherent power dynamics116 entailed by it and have stressed that such power dynamics often prevent members of minorities or marginalized or discriminated groups from being able to counter racist and hate speech through “more speech”:
The idea that talking back is safe for the victim or educative for the racist simply does not correspond with reality. It ignores the power dimension to racist remarks, forces minorities to run very real risks, and treats a hateful attempt to force the victim outside the human
110 “Thus ideas and opinions compete with each other, and each of us has the possibility to evaluate them, weigh them in a discussion, and then choose the ones we prefer. As rational consumers of ideas, we will choose the best among many. Just as poor products are expelled from the market due to lack of demand and good products have success determined by the growth of demand for them, good ideas should prevail and bad ideas should be marginalized by market competition”. Pitruzzella and Pollicino (n 59) 33.
111 Xxxxxx X Xxxxxx, ‘The Meaning of the “Marketplace of Ideas” in First Amendment Law’ (2019) 24 Communication Law and Policy 437, 438.
112 Whitney v California 274 US 357 (1927) 375.
113 ibid 376.
114 Xxxxxxx Xxxxx, Freedom for the Thought That We Hate (Basic Books 2008).
115 Xxxxxxx Xxxxxxx and Xxxx Xxxxxxxxx, Critical Race Theory: An Introduction (3rd edn, New York University Press 2017); Xxxx X Xxxxxxx and others (eds), Words That Wound: Critical Race Theory, Assaultive Speech, And The First Amendment (Westview Press 1993).
116 Xxxxxxx (n 34); Xxxxxxx Xxxxxxx, ‘Words That Wound: A Tort Action for Racial Insults, Epithets, and Name-Calling’ (1982) 17 Harvard Civil Rights-Civil Liberties Law Review 133.
community as an invitation for discussion. Even when successful, talking back is a burden.117
Critical race theory authors also stress how hate speech directly affects the psychological and physical well-being of its targets, who are generally at a higher risk of isolation, mental illness and psychosomatic diseases (including depression, high blood pressure, or strokes), as well as of addiction to alcohol and drugs.118 Additionally, hate speech can also represent, in their opinion, a danger for society as a whole, because discrimination is in itself “a breach of the ideal of egalitarianism, that ‘all men are equal’ and each person is an equal moral agent”.119
2.3.2. The militant approach: the case of Europe
The liberal and “tolerant” approach of the US with respect to “the thoughts we hate” does not represent a common standard across the world. As described in section 2.1, the prohibition of hate speech is in fact foreseen by international human rights law, both at the global and regional level, and many jurisdictions, such as European countries but also Canada, Australia, Japan, South Africa, as well as many South American states, have indeed enacted forms of restriction of such phenomena.120
These jurisdictions thus follow a more “militant” approach, as they put in place measures and limitations to the absolute enjoyment of the fundamental right to free speech and freedom of expression with the goal of actively ensuring the actual protection of core democratic and constitutional principles.121 In this respect, the European perspective on hate speech represents one of the clearest and most notable examples of such a “militant” strategy and has thus been frequently approached by comparative law as the main term of comparison with US First Amendment jurisprudence on the subject: a comparison which, however, has often had to face the risks of an inherent incommunicability between the two systems,122 a sort of legal “lost in translation”.
The main rationale behind the “militant” approach of Europe can be found first and foremost within the case law of the ECtHR which, in the 2003 judgment of Gunduz v Turkey, emphasized that
tolerance and respect for the equal dignity of all human beings constitute the foundations of a democratic, pluralistic society. That being so, as a matter of principle it may be considered necessary in certain democratic societies to sanction or even prevent all forms of
117 Xxxxxxx Xxxxxxx and Xxxx Xxxxxxxxx, Must We Defend Nazis? Why the First Amendment Should Not Protect Hate Speech and White Supremacy (New York University Press 2018) 69.
118 ibid 9–10; Xxxxxxx Xxxxxxx and Xxxx Xxxxxxxxx, ‘Four Observations about Hate Speech’ (2009) 44 Wake Forest Law Review 353, 362.
119 Xxxxxxx (n 116) 140.
120 Xxxxxxxxx (n 15); Xxxxx and Xxxxxxxx (n 1); Spigno (n 1). See infra, §4.
121 Xxxx Xxxxxxxxxxx, ‘Militant Democracy and Fundamental Rights, I’ (1937) 31 The American Political Science Review 417.
122 Xxxx Xxxxxx, ‘Wild-West Cowboys versus Cheese-Eating Surrender Monkeys: Some Problems in Comparative Approaches to Hate Speech’ in Xxxx Xxxx and Xxxxx Xxxxxxxxx (eds), Extreme Speech and Democracy (Oxford University Press 2009); Xxxxx Xxxxx, ‘Hate Speech: A Comparison between the European Court of Human Rights and the United States Supreme Court Jurisprudence’ (2012) 25 Regent University Law Review 107.
expression which spread, incite, promote or justify hatred based on intolerance (including religious intolerance).123
According to such reasoning, which the ECtHR has repeatedly confirmed in subsequent case law,124 hate speech poses a threat to the foundations of paramount constitutional values and principles, namely those connected to the protection of democracy and of pluralism, and for this reason member states of the Council of Europe may well decide to adopt measures against its spread – including criminal measures – without this constituting a violation of the right to freedom of expression and information as protected by Article 10 ECHR. As a matter of fact, because hate speech affects the possibility for its targets to actively participate in the public debate, it is considered to represent a threat in itself to the full protection of the freedom of expression of discriminated groups as well as of the public’s right to freedom of information, understood as a right to receive and impart pluralistic and diverse information.125
The incompatibility of hate speech with the constitutional framework and the demo- cratic value system of the Council of Europe was recently confirmed by the Committee of Ministers in its already mentioned Recommendation No. R (2022) 16, the Preamble to which argues that
hate speech negatively affects individuals, groups and societies in a variety of ways and with different degrees of severity, including by instilling fear in and causing humiliation to those it targets and by having a chilling effect on participation in public debate, which is detrimental to democracy.126
This approach is also echoed by the EU institutions. Notably, the Commission’s Communication on extending the list of EU crimes to hate speech and hate crime states that these phenomena “are a threat to democratic values, social stability and peace”,127 that they weaken “the mutual understanding and respect for diversity on which pluralistic and democratic societies are built”128 and that they negate the affected individuals’ right to participate in political and social life, which represents a core principle on which the Union itself is founded.129
In this respect, the “militant” viewpoint of the framework of the Council of Europe is in stark contrast to the “tolerant” one of the United States. Whereas the former perceives hate speech as an assault on the democratic tenets of society, including equality and dignity but also freedom of expression itself, the latter considers it an inevitable facet of the paramount value of free speech and sees any attempt at regulation as an impermissible violation of the First Amendment. In other words, while hate speech regulation on the
123 Gunduz v Turkey [2003] ECtHR 35071/97, ECHR 2003-XI [40].
124 See, ex multis, Erbakan v Turkey [2006] ECtHR 59405/00 [56]; Xxxxx v Belgium [2009] ECtHR 15615/07 [64]; Xxxxxxx v France [2021] ECtHR 45581/15 [84].
125 Xxxxxx Xxxxxxxxx, ‘Fake News, Internet and Metaphors (to Be Handled Carefully)’ (2017) 1 Rivista di Diritto dei Media 23; Pitruzzella and Pollicino (n 59) 91.
126 Committee of Ministers of the Council of Europe, ‘CM/Rec(2022)16’ (n 71), Preamble.
127 European Commission, ‘Communication on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (n 81) 9.
128 ibid 1.
129 ibid 7.
Western side of the Atlantic is treated as a threat to free speech, on the Eastern side of the Ocean it is presented as a tool for promoting the enjoyment of freedom of expression and information in conditions of equality.
The protection of core societal values, of democracy, and of freedom of expression, protected not only as an individual tool of personal autonomy and self-expression but also (and especially) as a collective instrument for the fostering of democracy itself,130 thus represents the essence of the rationale behind the European restrictive strategy against hate speech and signals a mindset strongly oriented towards the promotion of constitutionally driven principles.
At the same time, however, hate speech is also perceived as being inherently harmful to the personal lives of the individuals affected. Notably, recent case law from the ECtHR, mostly dealing with forms of anti-LGBTQIA+ speech, has increasingly underlined how such forms of expression represent an assault on persons’ right to the protection of private and family life as enshrined within Article 8 of the Convention, in conjunction with Article 14 on the prohibition of discrimination.131 In particular, the Court noted that hateful comments affect the targets’ “psychological well-being and dignity”,132 which represent essential components of the right protected by Article 8. Quite interestingly, those cases have even suggested that contracting states may be subject to positive obligations to guarantee that individuals are protected against such assaults133 and that, while the choice concerning the legal measures to be adopted lies within states’ margin of appreciation, “effective deterrence against grave acts where essential aspects of private life are at stake requires efficient criminal-law provisions”.134
The provision of legal restrictions on hate speech in Europe is thus motivated by the aim of protecting a number of constitutionally relevant values and principles which are considered particularly worthy of protection under the ECHR and CFREU fundamental rights systems. These interests pertain both to the collective and to the individual sphere. Hate speech regulation, indeed, aims at preventing the personal harms inflicted on the individuals who happen to be targeted by the hateful speech, the harms that affect their group of membership as a whole, and the harms that hate speech produces for society at large.135
130 On the multiple functions of freedom of expression, both from an individualistic and collective per- spective, see among others Xxxxxxxxx (n 15) 1530–1536.
131 Beizaras and Levickas v Lithuania [2020] ECtHR 41288/15; Association Accept and Others v Romania [2021] ECtHR 19237/16. With respect to the first case, see Xxxxxxx Xxxxxxxx, ‘A Picture of a Same-Sex Kiss on Facebook Wreaks Havoc: Xxxxxxxx and Levickas v. Lithuania’ (Strasbourg Observers, 7 February 2020) <xxxxx://xxxxxxxxxxxxxxxxxxx.xxx/0000/00/00/x-xxxxxxx-xx-x-xxxx-xxx-xxxx-xx-xxxxxxxx-wreaks-havoc-beizaras-and-levickas-v-lithuania/> accessed 16 January 2023.
132 Xxxxxxxx and Levickas v Lithuania (n 131) para 117.
133 “Positive obligations on the State are inherent in the right to effective respect for private life under Article 8, these obligations may involve the adoption of measures even in the sphere of the relations of individuals between themselves … The Court reiterates its finding that comments that amount to hate speech and incitement to violence, and are thus clearly unlawful on their face, may in principle require the States to take certain positive measures”. ibid 110, 125.
134 ibid 110. Likewise, Association Accept and Others v Romania (n 131) para 101.
135 In this respect, the European approach resembles in many ways that proposed by the critical race theory in the US. See supra, §2.3.1.
Although recognizing that freedom of expression also covers those utterances that “offend, shock or disturb”,136 the European viewpoint is that hate speech is not simply a form of expression that “offends” its targets. Rather, hate speech is perceived as a phenomenon that is intrinsically at odds with the democratic functioning of society, as it violates and debases at its core the equal dignity of its victims: an act which has detrimental effects both on individuals and on the social fabric. In this sense, the sensitivity of the Old Continent resonates, curiously, with the words of US author Xxxxxx Xxxxxxx:
Dignity … is precisely what hate speech laws are designed to protect – not dignity in the sense of any particular level of honor or esteem (or self-esteem), but dignity in the sense of a person’s basic entitlement to be regarded as a member of society in good standing, as someone whose membership of a minority group does not disqualify him or her from ordinary social interaction. That is what hate speech attacks, and that is what laws suppressing hate speech aim to protect.137
2.4. Hate speech and the Internet
2.4.1. Free speech and information in the digital age
Freedom of expression in the twenty-first century has undergone significant transformations following the digital revolution, which has made widely available new technologies “that make it easy to copy, modify, annotate, collate, transmit, and distribute content by storing it in digital form”,138 and following the rise of the “algorithmic society”,139 which features most notably the advent of social media platforms and the increasing use of AI systems as a means of speech governance. The rise and consolidation of the Internet, in particular, has deeply affected the way individuals experience and enjoy freedom of expression and freedom of information as human rights.
136 Xxxxxxxxx v the United Kingdom (n 45).
137 Xxxxxx Xxxxxxx, The Harm in Hate Speech (Harvard University Press 2012) 105.
138 Xxxx X Xxxxxx, ‘Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society’ (2004) 79 New York University Law Review 1, 6.
139 Xxxx X Xxxxxx, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’ (2018) 51 U.C. Xxxxx Law Review 1149. On the impact of artificial intelligence on freedom of expression, see Xxxxxxxx De Xxxxxxxx and Xxxxxx Xxxx, ‘Artificial Intelligence and Freedom of Expression’ in Xxxxxxx Xxxxxxxxxxx and Xxxxxx Xxxxxxxxx (eds), Artificial Intelligence and Human Rights (Oxford University Press 2023).
140 Xxxxxx Xxxxxxx, The Wealth of Networks: How Social Production Transforms Markets and Freedom
(Yale University Press 2006).
In the “networked information economy” described by Xxxxxxx,140 the production of information is not an exclusive prerogative of professionals anymore, as individual users of the Internet, operating in a more informal and cooperative manner, can today contribute to “peer-produce” information themselves.141
This – rather optimistic – viewpoint on the Internet as a tool with an incredibly expansive potential for free speech also emerged in the historic US Supreme Court decision of Reno v ACLU (1997).142 Indeed, in finding the recently enacted Communications Decency Act (CDA),143 which introduced measures to protect minors from “indecent” and “patently offensive” digital communications (i.e., pornography), unconstitutional under the First Amendment because excessively vague and restrictive, the SCOTUS explicitly recognized the Internet as a new fundamental avenue of the free marketplace of ideas:
The dramatic expansion of this new marketplace of ideas contradicts the factual basis of this contention. The record demonstrates that the growth of the Internet has been and continues to be phenomenal. As a matter of constitutional tradition, in the absence of evidence to the contrary, we presume that governmental regulation of the content of speech is more likely to interfere with the free exchange of ideas than to encourage it. The interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship.144
Nonetheless, it is also worth noting that such a process of decentralization of the means of mass communication has in turn led to another type of scarcity, that is, the scarcity of audiences’ attention.145 Indeed, because of the democratization and multiplication of the sources of content, audiences are generally not capable of processing the information overload characterizing the Internet. The inevitable consequence of this process has been the substitution of the old, traditional gatekeepers of information (newspapers, editors, television broadcasters, radio stations etc.) with the new “Internet information gatekeepers”,146 that is, those “large, multinational social media platforms that sit between traditional nation states and ordinary individuals”147 that select and filter the content to be hosted on and disseminated through their digital infrastructures.
These corporations act as intermediaries between the producers and the receivers of information, structuring the provision of content based on the needs and interests of Internet users themselves.148 Content moderation, broadly understood as the set of practices and measures adopted to govern the dissemination of speech through a specific Internet
141 “In liberal democracies, the primary effect of the Internet runs through the emergence of the networked information economy. We are seeing the emergence to much greater significance of nonmarket, individual and cooperative peer-production efforts to produce universal intake of observations and opinions about the state of the world and what might and ought to be done about it”. ibid 271.
142 Reno v American Civil Liberties Union 521 US 844 (1997).
143 Communications Decency Act 1996.
144 Reno v ACLU (n 142) 885. On the US approach towards freedom of expression on the Internet, with a focus on the matter of intermediary liability, see infra, §4.4.
145 Xxxxxx, ‘Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society’ (n 138) 7; Xxxxxxx Xxxxxxx, Computational Power: The Impact of ICT on Law, Society and Knowledge (Routledge 2021).
146 Xxxxx X Xxxxxxx, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (Cambridge University Press 2015).
147 Xxxxxx, ‘Free Speech in the Algorithmic Society’ (n 139) 1151.
148 See, ex multis, Xxxxxxx X Xxxxx, Xxxxxxx: The Secret Rules That Govern Our Digital Lives (Cambridge University Press 2019).
infrastructure, is in fact the actual commodity offered by platforms, as it allows them to offer “a better experience of all this information and sociality”.149 Besides, from a practical point of view, these actors generally employ algorithms largely based on machine-learning systems,150 the functioning of which acts as a “black box”151 for users (and, oftentimes, for programmers themselves).152 This clearly raises questions about the quality and diversity of information users are exposed to.
Most notably, the migration of the information market to the online infrastructures of privately-owned platforms has led to the consolidation of content management practices, based on the use of AI, that are focused on ensuring the maximization of users’ engagement and fidelity towards the platforms themselves, mostly through the profiling of customers and the consequent customization of the information transmitted. This way, the new gatekeepers of information contribute to the construction of a digital space that has been effectively defined by Xxxx Xxxxxxxx as the “Daily Me”.153 However, on the one hand, the engagement-oriented governance of online speech, as well as the “Daily Me”, can affect the quality of journalistic sources and of the media and the press in general, inevitably pushed to adjust to the algorithms created by private oligopolists governing the Internet.154 On the other hand, the customization of online content impacts the possibility for individuals of being truly exposed to pluralistic information, as Internet users end up being locked within echo chambers and filter bubbles.155 The result of this is also, in turn,
149 Xxxxxxxx Xxxxxxxxx, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press 2018) 13. As highlighted by Xxxxxxxx, “in the face of dramatic increases in communications options, there is an omnipresent risk of information overload”, so much so that “filtering, often in the form of narrowing, is inevitable in order to avoid overload and impose some order on an overwhelming number of sources of information”. Xxxx X Xxxxxxxx, #Republic: Divided Democracy in the Age of Social Media (Princeton University Press 2017) 63. Additionally, Xxxxxx and Land note: “Moderation of uncomfortable speech … is part and parcel of the service that social media companies offer”. Xxxxxxx Xxxxxx and Xxxxx Xxxx, ‘Hate Speech on Social Media: Content Moderation in Context’ (2021) 52 Connecticut Law Review 1029, 1054.
150 Cambridge Consultants, ‘Use of AI in Online Content Moderation’ (Ofcom 2019)
<xxxxx://xxx.xxxxx.xxx.xx/xxxxxxxx-xxx-xxxx/xxxxxx-xxxxxxxx/xxxxxx-xxxxxxx-xxxxxxxxxx> accessed 30 August 2023; Xxxxxxxx Xxxxxx and Xxxxxx Xxxxxxxx, ‘The Impact of Algorithms for Online Content Filtering or Moderation. “Upload Filters”’ (European Parliament 2020) JURI Committee PE 657.101.
151 Xxxxx Xxxxxxxx, The Black Box Society: The Secret Algorithms That Control Money and Information
(Harvard University Press 2015).
152 Xxxxx Xxxxxxx, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 2053951715622512. On the use of AI in the context of content moderation (specifically, moderation of hate speech) see infra, §5.3.
153 Sunstein (n 149).
154 Xxxxx Xxxx, ‘Freedom of the Media and Artificial Intelligence’ (Global Conference for Media Freedom, 16 November 2020) <xxxxx://xxx.xxxxxxxxxxxxx.xx.xx/xxxxx-xxxxx/xxxxxx/xxxx/xxxxxx_xxxxxxxxxxx-enjeux_developpement/human_rights-droits_homme/policy-orientation-ai-ia-en.pdf> accessed 2 August 2022.
155 Xxx Xxxxxxx, The Filter Bubble: What the Internet Is Hiding From You (Penguin 2011); Xxxxxxxx (n 149); Xxxxx Xxxxxxxx, New Laws of Robotics: Defending Human Expertise in the Age of AI (The Xxxxxxx Press of Harvard University Press 2020); Xxxxxxxx Xxxxxxxxxxx, Xxxxxx Xxxxxxxxx and Xxxxxxx Xxxxxxxxxxx, Parole e Potere: Libertà d’Espressione, Hate Speech e Fake News (Egea 2017) 68; Xxxxxx Xxxxxxx and others, ‘The Echo Chamber Effect on Social Media’ (2021) 118 Proceedings of the National Academy of Sciences of the United States of America e2023301118.
a polarization of the public and political debate, characterized by the rise of disinformation, hate speech, and digital populist narratives.156
Moreover, the ambiguous nature of the Internet, and its potential to raise new challenges, and even threats, to human rights and democratic values and principles, has been underscored in many judgments of the ECtHR, which, on the topic of the Internet, has indeed taken a view which is very different from that of the SCOTUS.157 Though aware that the Internet offers “essential tools for participation in activities and discussions concerning political issues and issues of general interest”,158 the Strasbourg Court has nonetheless underscored how new digital forms of communication might in fact be the source of unprecedented dangers. Thus, for instance, in KU v Finland, in finding that Finland had not taken sufficient measures to ensure the protection of the right to private and family life of a minor whose identity had been stolen to create a profile on an adult online dating website, the ECtHR held that freedom of expression on the Internet must in some cases yield to other legitimate imperatives such as the prevention of disorder or crime and the protection of the rights and freedoms of others.159 In Editorial Board of Pravoye Delo and Shtekel v Ukraine, the Strasbourg judges compared the Internet to the printed media, arguing that the former entails a higher risk of harm to the exercise and enjoyment of human rights and freedoms, notably the right to respect for private life, and may, therefore, call for policy measures more restrictive of freedom of expression.160
Also, in Xxxxx v Switzerland, the Court’s Grand Chamber declared that the Internet increases journalists’ duties in providing “reliable and precise” news precisely because, in the contemporary world, where individuals are faced with information overload, compliance with journalistic ethics has become fundamental to guarantee the public’s right to be informed.161 In other words, the Internet has made journalists even more responsible for their essential role as “public watchdogs”.162
The concerns of the Strasbourg Court are also shared by the institutions of the EU. Apart from the case law of the CJEU which, aware of the increased risks connected to the digital sphere, has significantly expanded the liability of Internet service providers (ISPs) for the dissemination of illegal information on the Internet since the beginning of the 2010s,163 the adoption of a number of legislative acts showcases the Union’s
156 Xxxxxx Xxxxxxxxx and Xxxxxxxx De Xxxxxxxx, ‘Constitutional Law in the Algorithmic Society’ in Amnon Xxxxxxxx and others (eds), Constitutional Challenges in the Algorithmic Society (Cambridge University Press 2021).
157 On the different approaches taken by the two courts with respect to the enjoyment of fundamental rights on the Internet, and specifically with respect to the enjoyment of freedom of expression in the digital sphere, see most notably Xxxxxx Xxxxxxxxx, Judicial Protection of Fundamental Rights on the Internet: A Road Towards Digital Constitutionalism? (Xxxx 2021) 51–98.
158 Xxxxx Xxxxxxxx v Turkey [2012] ECtHR 3111/10, ECHR 2012 [54].
159 KU v Finland [2008] ECtHR 2872/02, ECHR 2008 [49].
160 Editorial Board of Pravoye Delo and Shtekel v Ukraine [2011] ECtHR 33014/05, ECHR 2011 [63].
161 Xxxxx v Switzerland [2007] ECtHR [GC] 69698/01, ECHR 2007-V [103–104].
162 See, ex multis, Observer and Guardian v the United Kingdom [1991] ECtHR 13585/88, Series A 216 [59]; Xxxxxxx v Denmark [1994] ECtHR [GC] 15890/89, Series A 298 [31].
163 Xxxxxx Xxxxxxxxx and Xxxxxxxx De Xxxxxxxx, ‘A Constitutional-Driven Change of Heart: ISP Liability and Artificial Intelligence in the Digital Single Market’ in Xxxxxxxx Xxxxxxxx Xxxxxxx (ed), The Global
preoccupations with respect to the possibility of “bad” and harmful information being disseminated through the Internet.164 The European Commission has itself stressed repeatedly the inherent challenges that the online setting entails for the well-being of democracy. Most notably, although recognizing that the digital revolution has brought more opportunities for civic engagement, making access to information and participation in public life and the democratic debate easier, the Commission has stressed that it has also “opened up new vulnerabilities”, affecting inter alia the integrity of elections, the protection of free and plural media, and the fight against disinformation and information manipulation.165
2.4.2. Main characters of online hate speech
The ambiguous nature of the Internet, as both an enabler of freedom of expression and a cause of enhanced risks for the protection of other fundamental rights and democratic values, is especially relevant when it comes to the topic of online hate speech. Indeed, in this respect, the European Commission has explicitly declared:
The increase in internet and social media usage has also brought more hate speech online over the years … emotions and vulnerabilities have been increasingly used, including in public debate for political gain, to disseminate racist and xenophobic statements and attacks, amplified in many cases by social media.166
The increased risks connected to freedom of expression online are in fact quite relevant when it comes to the dissemination of hate speech content, as the specific characters of
Community Yearbook of International Law and Jurisprudence 2018 (Oxford University Press 2019); Xxxxxxxx Xx Xxxxxxxx, ‘The Rise of Digital Constitutionalism in the European Union’ (2021) 19 International Journal of Constitutional Law 41. With respect to the relationship between online freedom of expression and the protection of intellectual property see, ex multis, Joined Cases C-236/08, C-237/08 and C-238/08, Google France SARL and Google Inc v Xxxxx Xxxxxxx Malletier SA, Google France SARL v Viaticum SA and Luteciel SARL and Google France SARL v Centre national de recherche en relations humaines (CNRRH) SARL and Others [2010] ECLI:EU:C:2010:159; Case C-324/09, L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474; Case C-70/10, Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM) [2011] ECLI:EU:C:2011:771; Case C-360/10, Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV [2012] ECLI:EU:C:2012:85. With respect to the protection of privacy rights, and specifically the right to be forgotten, see the seminal judgment Case C-131/12, Google Spain SL and Google Inc v Agencia Española de Protección de Datos (AEPD) and Xxxxx Costeja Xxxxxxxx [2014] ECLI:EU:C:2014:317. With regard to the protection of individuals from defamation, see Case C-18/18, Xxx Xxxxxxxxxxx-Piesczek v Facebook Ireland Limited [2019] ECLI:EU:C:2019:821. For a more in-depth account of the evolution of the approach to ISP liability in the case law of the CJEU, see infra, §3.4.2.
164 See, most notably, AVMSD Refit Directive; Regulation (EU) 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online, OJ L 172/79; Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act), OJ L 277/1. See more infra, §3.4.3.2.
165 European Commission, ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions On the European Democracy Action Plan’ COM(2020) 790 final 2.
166 European Commission, ‘Communication on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (n 81) 2.
online communication are held to have contributed significantly to its quantitative rise in the Internet ecosystem.167
Significant concerns have also been raised by the UN Special Rapporteur on minority issues who, in his 2021 Recommendations on “Hate speech, social media and minorities”, described the scale of hate speech targeting minorities on social media as “overwhelming”.168 The diffusion of the hate speech phenomenon over the Internet has reportedly increased significantly, especially in the aftermath of the outbreak of the COVID-19 pandemic, targeting mainly, but not only, individuals of Asian descent.169
Many aspects contribute to making hate speech a particularly challenging phenomenon in the context of the Internet. Amongst these, at least four main features characterizing online hate speech, as opposed to its offline counterpart, have been underscored by the literature and have therefore been considered to raise new, significant challenges: permanence, itinerancy, anonymity, and the inherently cross-jurisdictional character of Internet content.170
“Permanence” refers to the ability of hateful content to thrive online and to be easily circulated, also thanks to the use of hyperlinking tools. Permanence often depends significantly on the architecture of the platforms involved: thus, for instance, X’s conversational structure, based on trending topics, can enable hate speech to spread quickly and widely.171 Such a feature is especially relevant not only because it enhances the harm
167 Xxxxxx Xxxxxxx and others, ‘Dynamics of Online Hate and Misinformation’ (2021) 11 Scientific Reports 22083; European Commission, TIPIK, and Spark Legal Network, Study to Support the Preparation of the European Commission’s Initiative to Extend the List of EU Crimes in Article 83 of the Treaty on the Functioning of the EU to Hate Speech and Hate Crime: Final Report (Publications Office 2021)
<xxxxx://xxxx.xxxxxx.xx/xxx/00.0000/00000> accessed 3 February 2023 Annex VII.
168 Xxxxxxx xx Xxxxxxxx, ‘Recommendations Made by the Forum on Minority Issues at Its Thirteenth Session on the Theme “Hate Speech, Social Media and Minorities”’ (United Nations 2021) A/HRC/46/58 para 4. As a matter of fact, determining the exact scale of the phenomenon of hate speech on the Internet is not an easy task for various reasons, including the alleged under-reporting of the phenomenon to authorities and the consequent need to refer to platforms’ transparency reports which, although giving some precious insights into the dynamics of online hatred, often lack important qualitative data. See Xxxxxxx Xxxxxxxx, ‘The European Commission’s Code of Conduct for Countering Illegal Hate Speech Online’ (TWG 2019)
<xxxxx://xxx.xxxx.xx/xxxxxxxxxxx/xxxxxxxx/Xxxxxxxx.xxx> accessed 22 January 2023. As a result, as stressed by Xxxxxx, there is still, to date, limited literature systematically assessing the scale of the phenomenon of online hate speech: see Xxxxxxxxx X Xxxxxx, ‘Online Hate Speech’ in Joshua A Xxxxxx and Xxxxxxxxx Xxxxxxx (eds), Social Media and Democracy: The State of the Field, Prospects for Reform (Cambridge University Press 2020). Nonetheless, its diffusion across the Internet has been underscored within the study to support the Commission’s proposal of extending art 83(1) TFEU, highlighting the “pan-European” dimension that the issue has taken: see European Commission, TIPIK, and Spark Legal Network (n 167) Annex VII 13.
169 United Nations, ‘Countering COVID-19 Hate Speech’ (United Nations Secretary-General, 2020)
<xxxxx://xxx.xx.xxx/xx/xx/xxxx/000000> accessed 15 December 2021; Xxxxxxx Xxxxxxx and C Ravin- xxxxxxx Xxxxxxxx, ‘Combating Hate Speech Using an Adaptive Ensemble Learning Model with a Case Study on COVID-19’ (2021) 185 Expert Systems with Applications 115632. Cf. also Meta, ‘Community Standards Enforcement: Hate Speech’ (Transparency Center) <xxxxx://xxxxxxxxxxxx.xxxx.xxx/xx- ports/community-standards-enforcement/hate-speech/facebook/> accessed 28 April 2024.
170 Xxxxxx Xxxxxxxxxxx and others, Countering Online Hate Speech (UNESCO Publishing 2015).
171 ibid 13–14.
inflicted on the targeted persons by making it more difficult to remove hate speech content, thus significantly amplifying its dehumanizing and discriminatory effects,172 but also because its longevity contributes to the development of what Leiter defined as “cyber-cesspools”, that is, “places in cyberspace – chat rooms, websites, blogs, and often the comment sections of blogs – which are devoted in whole or in part to demeaning, harassing, and humiliating individuals: in short, to violating their ‘dignity’”.173
The feature of “itinerancy”, by contrast, consists of the ability of online content to be easily moved across cyberspace. This way, “when content is removed, it may find expression elsewhere, possibly on the same platform under a different name or on different online spaces”: even websites, if shut down, can be immediately reopened by using less stringent web-hosting providers or by relocating them to countries where hate speech tolerance is much higher.174
In this respect, the feature of itinerancy is closely intertwined with that of permanence, as they both contribute to rendering the removal of online hate speech content much more difficult. Moreover, itinerancy, combined with permanence, can also contribute to making it easier for “poorly formulated thoughts that would have not found public expression and support in the past” to “land on spaces where they can be visible to large audiences”.175
Anonymity represents both a fundamental asset of online freedom of expression and the cause of significant challenges. On the one hand, the possibility of expressing one’s views without disclosing one’s personal identity represents an important tool of democracy, as it protects the speaker from backlash from private and public actors:176 thus, anonymity on the Internet can be especially valuable for the enjoyment of freedom of expression within illiberal democracies. On the other hand, anonymity
172 “The potential permanency of content made available online is also a relevant consideration when quantifying the nature and extent of the harms caused … Content remains traceable and largely retrievable after its original dissemination to an unprecedented extent when the dissemination takes place online … This means that there is a danger that victims of hate speech will continuously, or at least repeatedly, be confronted by the same instance of hate speech after their original articulation”. McGonagle (n 60) 32.
173 Xxxxx Xxxxxx, ‘Cleaning Cyber-Cesspools: Google and Free Speech’ in Xxxx Xxxxxxx and Xxxxxx X Xxxxxxxx (eds), The Offensive Internet: Privacy, Speech, and Reputation (Harvard University Press 2010) 155.
174 Xxxxxxxxxxx and others (n 170) 14.
175 ibid.
176 Xxxx Xxxxxxxx, ‘Internet and the Right of Anonymity’ in Xxxxxx Xxxxxxxxx (ed), Proceedings of the conference Regulating the Internet, Belgrade, 2010 (Center for Internet Development 2011); Xxxxxxx Xxxxx, ‘Anonimato, Responsabilità, Identificazione: Prospettive Di Diritto Comparato’ (2014) 2 Il diritto dell’informazione e dell’informatica 171; Xxxxxx Xxxx Xxxxxxxx, ‘Anonimato, Responsabilità e Trasparenza Nel Quadro Costituzionale Italiano’ (2014) 2 Il diritto dell’informazione e dell’informatica 207; Xxxxxx Xxxxxx, New Media and Freedom of Expression: Rethinking the Constitutional Foundations of the Public Sphere (Xxxx 2019) 202–204. In the US, the right to anonymous speech is considered to be generally protected under the First Amendment following XxXxxxxx v Ohio Elections Commission 514 US 334 (1995).
increases the risk of dissemination of illegal and harmful content not only because it makes enforcement of content regulation more burdensome,177 but also, and perhaps even more so, because it can lead individuals to feel hidden, and therefore secure, when uploading such materials.
In fact, most Internet users have neither the technological tools nor the know-how to attain full anonymity.178 Nonetheless, the anonymity perceived by users of the Internet can disinhibit them significantly and thus contribute to the rise of toxic content.179 In this respect, Xxxxxx argues:
Anonymity frees people to defy social norms. When individuals believe, rightly or wrongly, that their acts won’t be attributed to them personally, they become less concerned about social conventions. Research has shown that people tend to ignore social norms when they are hidden in a group or behind a mask. Social psychologists call this condition deindividuation. People are more likely to act destructively if they do not perceive the threat of external sanction … People are more inclined to act on prejudices when they think they cannot be identified.180
It has correctly been noted that anonymity is, in truth, not always sought by purveyors of hate speech. In fact, many of them actively disclose their identity by making their names and surnames public, as their main goal is precisely “to attract attention and consensus”, whereas “acting anonymously would not provide recognition in the community in which they are active”.181 This holds especially true when hate is used as a political tool to gather followers.
Be that as it may, anonymity contributes considerably to the increase of spontaneous and/or low-profile forms of hate speech. According to Xxxxxx, moreover, the tendency of anonymity to encourage and promote the dissemination of hate speech is often further intensified by the physical separation between speaker and target, as the distance makes the consequences of such utterances seem remote and directed at indistinct, and thus dehumanized, persons.182 In other words, anonymity and physical distance, resulting in the invisibility of the target of hate speech and of its consequences, affect the capability of Internet users to exercise sympathy towards their digital interlocutors, thus
177 Xxxxxxxxxxx X Xxx Xxxxxxx, ‘Internet Hate Speech: The European Framework and the Emerging American Haven’ (2005) 62 Washington and Xxx Xxx Review 781, 783.
178 Xxxxxx Xxxxxxx, ‘The Challenges Surrounding the Regulation of Anonymous Communication Provision in the United Kingdom’ (2016) 56 Computers & Security 151. In fact, the main hurdle that anonymity entails when it comes to the enforcement of hate speech regulation is that, because of the massive amount of unlawful content being posted on the Internet, public resources and finances are often insufficient to prosecute and identify all authors of such content: see Xxxxxxxx Xxxxxxxx, L’Odio Online: Xxxxxxxx Xxxxxxx e Ossessioni in Rete (Xxxxxxxxx Xxxxxxx 2016) 95.
179 “The fast sharing of hate speech through the digital word is eased by the online disinhibition effect, as the presumed anonymity on the internet and sense of impunity reduce people’s inhibition to commit such offences … The internet provides a channel for increased and easily shared hate speech online. Perpetrators of hate speech online are triggered and disinhibited by a sense of anonymity and impunity on the internet, which increases the risk that they continue commit such offences”. European Commission, ‘Communication on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (n 81) 2, 16.
180 Xxxxxxxx Xxxxx Citron, Hate Crimes in Cyberspace (Harvard University Press 2014) 58. Cf. Xxxxxx- xxx Xxxxx, ‘What Is so Special about Online (as Compared to Offline) Hate Speech?’ (2018) 18 Ethnicities 297.
181 Xxxxxxxx (n 40) 202.
182 Citron (n 180) 59.
favouring psychological processes of “moral disengagement” by which they are able to avoid “the constraint of negative self-sanctions for conduct that violates one’s moral standards”,183 a constraint that generally contributes to shaping human beings’ moral agency.184
2.4.2.4. Cross-jurisdictional nature of online content
The cross-jurisdictional character of online hate speech is problematic for at least two reasons. First, together with permanence and itinerancy, it significantly enhances the negative effects of such content, mainly because it enormously amplifies its reach and thus helps hate groups widen their audiences, especially towards countries facing similar political or social situations.185 Second, it raises important issues as regards international cooperation between jurisdictions. This second aspect is especially problematic precisely because the specific sensitivities of jurisdictions regarding hate speech regulation can be very different, as showcased by the gap described above between the European and US approaches.186
In this respect, the well-known LICRA v Yahoo! judicial saga is perhaps the most notable and symbolic example of the legal challenges entailed by the ability of Internet content to move across traditional state borders.187 The case concerned the auctioning of Nazi memorabilia on websites which were stored on Yahoo!’s servers, located in the US, but were accessible worldwide. However, because the sale of such items was illegal and punished as a criminal offence under the French Criminal Code, the Paris Tribunal de Grande Instance (TGI) issued an order against Yahoo!, requiring it to adopt all necessary measures to dissuade from and block consultation of the websites in question, as well as to pre-emptively inform Internet users of all risks involved in consulting them.188
As the order affected not only the French subsidiary but also the parent company, based in California, because of the location of the servers hosting the unlawful auctions, Yahoo!, arguing that the order represented an unacceptable interference with
183 Xxxxxx Xxxxxxx, Moral Disengagement: How People Do Harm and Live with Themselves (Worth Publishers, Macmillan Learning 2016) 1.
184 See Xxxxx Xxxxxxxxx, ‘Il “Lato Oscuro Della Rete”: Odio e Pornografia Non Consensuale. Ruolo e Responsabilità Delle Piattaforme Social Oltre La Net Neutrality’ (2021) 2 La Legislazione Penale 254, 260.
185 European Commission, ‘Communication on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (n 81) 16.
186 Because of such a gap, Xxxxxxxxxxx lamented the risk of the US attracting “hate mongers” by offering them a “safe haven”. Xxxxx X XX Xxxxxxxxxxx, ‘A Haven for Hate: The Foreign and Domestic Implications of Protecting Internet Hate Speech under the First Amendment’ (2001) 75 Southern California Law Review 1493. Similarly Xxxxxx: “Hate groups have found a haven in the United States for their Internet sites because the Supreme Court has significantly limited the government’s ability to prohibit the distribution of racist, provocative materials”. Xxxxxxxxx Xxxxxx, ‘Hate in Cyberspace: Regulating Hate Speech on the Internet’ (2001) 38 San Xxxxx Xxx Review 817, 838.
187 Xxxx X Xxxxxxxxxx, ‘Yahoo and Democracy on the Internet’ (2001) 42 Jurimetrics 261; Xxxx X Xxxxxxxxx, ‘A Return to Lilliput: The LICRA v. Yahoo! Case and the Regulation of Online Content in the World Market’ (2003) 18 Berkeley Technology Law Journal 1191; Xxxxxxxxx, Judicial protection of fundamental rights on the Internet (n 157) 37–39; Xxxxx Xxxxxxx, Internet e Libertà Di Espressione: Prospettive Costituzionali e Sovranazionali (Aracne 2019) 166–167.
188 TGI Paris (22 May 2000) RG 00/05308, Ligue internationale contre le racisme et l’antisémitisme et Union des étudiants juifs de France v Yahoo!, Inc et Yahoo! France.
its First Amendment rights, brought the case before the US District Court for the Northern District of California. The Court indeed concluded that the Parisian decision should not be enforced in the US, noting that “the French order’s content and viewpoint-based regulation of the web pages and auction site … clearly would be inconsistent with the First Amendment if mandated by a court in the United States”.189 The District Court’s decision was, nonetheless, subsequently reversed by the Court of Appeals for the Ninth Circuit,190 which acknowledged, on the one hand, that the French order would only prevent French users, and not US citizens, from accessing the websites at issue and, on the other hand, that refusing to enforce it would lead the US First Amendment to apply extraterritorially: a rather controversial and debatable result, as it would potentially conflict with the sovereignty of other countries.191
The LICRA v Yahoo! saga thus confirms the additional difficulties entailed by the contemporary regulation of online hate speech, which intersects with the issue of digital sovereignty in the Internet landscape.192
2.4.3. The role of algorithmic content moderation and curation
Another significant aspect that requires attention when discussing the phenomenon of online hate speech is the impact that content governance practices have on its spread.
From a terminological point of view, these practices can be broadly included within the notion of “content moderation”, which is defined, lato sensu, as “the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse”.193 Such a notion thus encompasses an extremely wide range of techniques, such as the exclusion of unwanted members from the community, norm-setting, and the organization of the information flow. However, within this broad group, a distinction may be made between systems of “hard moderation” (moderation stricto sensu) and systems of “soft moderation” (curation). Whereas the former consist of decisions concerning the removal of content violating the law or a platform’s terms and conditions and, consequently, the measures to be adopted against the accounts violating those rules, the latter govern the way content is presented to users, and thus consist of decisions concerning, rather, the design and architecture of a website, as well as the techniques put in place to present
189 Yahoo! Inc v La Ligue Contre Le Racisme Et L’Antisemitisme 169 FSupp2d 1181 (NDCal 2001) 1192.
190 Yahoo! Inc v La Ligue Contre Le Racisme Et L’Antisemitisme 379 F3d 1120 (9th Cir 2004); Yahoo!
Inc v La Ligue Contre Le Racisme Et L’Antisemitisme 433 F3d 1199 (9th Cir 2006).
191 Yahoo! Inc v La Ligue Contre Le Racisme Et L’Antisemitisme (n 190) 1221–1222.
192 Xxxxxxxx Xxxxxxx and Xxxxxx Xxxxxx, ‘What Does the Notion of “Sovereignty” Mean When Referring to the Digital?’ (2019) 21 New Media & Society 2305; Xxxxx Xxxxx and Xxxxxxxx Xxxxx, ‘Digital Sovereignty’ (2020) 9 Internet Policy Review 1; Xxxxxxx Xxxxxxx, ‘The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially for the EU’ (2020) 33 Philosophy & Technology 369.
193 Xxxxx Xxxxxxxxxxx, ‘The Virtues of Moderation’ (2015) 17 Yale Journal of Law and Technology 42, 47.
users with tailored and customized information.194 As mentioned above, today these activities rely substantially on the use of algorithmic systems, with important consequences with regard to the governance of hate speech on the Internet.195
First, with respect to stricto sensu content moderation, these systems are subject to significant margins of error,196 with a high risk of legitimate content being wrongly removed or, conversely, of hate speech content escaping detection.197 Margins of error are inherent to any form of online content moderation, but are especially significant when the type of “information bad”198 to be detected requires a significant amount of contextual elements to be taken into account, as is the case of hate speech.199 Automated hate speech detection systems indeed often fail to grapple with the intention behind a post or behind the use of a specific word, and thus wrongly categorize a specific piece of content.200 Additionally, automated systems can replicate, often involuntarily, human biases and prejudice, leading to a discriminatory enforcement of moderation strategies, with the collateral effect of disproportionately removing content produced by minority, discriminated, or marginalized communities.201 This silencing effect, far from contributing to the fight against the phenomenon of hate speech, has precisely the effect of replicating the dynamics of domination and subordination it entails.202
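To make the role of context concrete, the following minimal sketch (a deliberately naive, hypothetical example written in Python, not the detection pipeline of any actual platform; the one-word blocklist stands in for a real lexicon) shows how a context-blind keyword filter produces both types of error at once, flagging counter-speech that quotes a dehumanizing term while missing coded hostility that avoids listed terms:

# Deliberately naive, purely illustrative keyword filter; real moderation
# systems are far more sophisticated, but they face the same contextual problem.
BLOCKLIST = {"vermin"}  # hypothetical stand-in for an actual slur lexicon

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted term, ignoring context and intent."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

counter_speech = "Calling refugees vermin is dehumanizing and must be condemned."
coded_hostility = "They breed like rats and should be sent back where they came from."

print(naive_flag(counter_speech))   # True: legitimate counter-speech is wrongly flagged
print(naive_flag(coded_hostility))  # False: hostile content escapes detection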
Second, the way content is algorithmically curated within the online digital sphere is also extremely relevant. Content curation indeed plays an essential role in determining what is actually seen and what remains hidden on the Internet. This is mostly done, today, through the implementation of recommender systems,203 which collect and process user data to develop a profile reflecting their interests, likes and dislikes, and subsequently
194 Xxxxxx Xxxxx, Xxxxxx Xxxxx and Xxxxxxxxx Xxxxxxxxxx, ‘Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance’ (2020) 7 Big Data & Society 2053951719897945, 3; Xxxx Xxxxxx and others, ‘Artificial Intelligence, Content Moderation, and Freedom of Expression’ (TWG 2020) <xxxxx://xxx.xxxx.xx/xxxxxxxxxxx/xxxxxxxx/XX-Xxxxxx-Xxx-Xxxxxxx-Feb-2020.pdf> accessed 13 December 2021. Xxx Xx refers to “negative speech control”, to indicate the “removing and taking down [of] disfavored, illegal, or banned content, and [the] punishing or removing [of] users”, whereas he defines as “affirmative speech control” the act of “choosing what is brought to the attention of the user”. Xxx Xx, ‘Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Orxxxxxx Systems’ (2019) 119 Columbia Law Review 2001, 2014.
195 Xxxxxx Xxxx, ‘Moderazione Automatizzata e Discriminazione Algoritmica: Il Caso dell’Hate Speech’ in Xxxxx Xxxx, Xxxxxxx Xxxxxxxxx and Xxxxxx Xxxxxxxxxxx (eds), La Internet Governance e le Sfide della Trasformazione Digitale (Editoriale Scientifica 2022); Xx Xxxxxxxx and Xxxx (n 139).
196 Xxxxxx Xxxxx, ‘Governing Online Speech: From “Posts-as-Trumps” to Proportionality and Probability’ (2021) 121 Columbia Law Review 759. See more infra, §5.4.
197 Sartor and Loreggia (n 150) 45.
198 ibid 17.
199 On the contextual elements to take into account when evaluating when a specific utterance amounts to hate speech, see Xxxxx (n 48).
200 Machines, indeed, although endowed with extraordinary computational and syntactic capacities, are often still rather dysfunctional as far as semantic understanding is concerned. Xxxxxxx Xxxxxxx, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality (Oxford University Press 2014); Durante (n 145). The understanding of the semantic meaning of online content is particularly complex when it comes to multi-modal forms of expression such as, for example, memes: see infra, §5.3.4.1.
201 See infra, §5.3.4.2.
202 See infra, §2.5.1.
203 Xxxxxx Xxxxxx, Xxxxxxxxxxxx Xxxxxx and Xxxxxxx Xxxxxxx, ‘Recommender Systems and Their Ethical Challenges’ (2020) 35 AI & Society 957.
compute a similarity score between that profile and the content items published online, so as to be able to suggest relevant content to users.204 Automated content curation, however, is notably driven by the purpose of maximizing user engagement. Therefore, the content promoted is not necessarily the best content available. Since highly controversial pieces of information tend to trigger people’s emotions and, therefore, tend to spark reactions and draw interest, it is often those items that recommender systems tend to offer to users. This is the case, for instance, of disinformation as well as of hate speech. Because these forms of communication are often designed in such a way as to excite the feelings and capture the attention of audiences, recommender systems can often end up contributing to their spread.205 Besides, details about the algorithmic functioning of platforms’ recommender systems are usually not disclosed due to proprietary concerns, so that there is a lack of transparency on this point both for the public and for research purposes: the very meaning of “relevance”, that is, the precise methodology used to determine what counts as the “best” content for their clients, is in fact often not clear, nor do platforms tend to indicate what they mean by it.206
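To render the logic just described in purely schematic terms, the following minimal sketch (an illustrative simplification written in Python; the vectors, weights, and item names are hypothetical and do not reproduce any platform’s actual system) shows how a recommender might combine a profile-similarity score with a predicted engagement signal, so that a provocative but highly engaging item can be ranked above a more “relevant” one:

import numpy as np

# Hypothetical, simplified illustration: a user profile and content items are
# represented as vectors over the same interest dimensions (here: politics, sports).
def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_items(profile, items, engagement, engagement_weight=0.5):
    """Rank items by blending profile similarity ("relevance") with predicted
    engagement; the blend and the weight are illustrative assumptions."""
    scores = {
        name: (1 - engagement_weight) * cosine_similarity(profile, vec)
              + engagement_weight * engagement[name]
        for name, vec in items.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

user = np.array([0.2, 0.8])  # a user mainly interested in sports
items = {
    "neutral_sports_news": np.array([0.0, 1.0]),
    "inflammatory_political_post": np.array([1.0, 0.1]),
}
engagement = {"neutral_sports_news": 0.2, "inflammatory_political_post": 0.95}

# The provocative item, despite being less "relevant" to the profile, is ranked
# first once predicted engagement is factored in.
print(rank_items(user, items, engagement))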
Far from being a mere theoretical issue, the “negative externalities” of the use of biased automated content moderation and curation systems have reportedly had significant repercussions in recent years. The most notable, and most tragic, example is the genocide of the Rohingya population in Myanmar, which reached its apex in 2016-2017, with the perpetration of military violence against the minority group. In that instance, Facebook, one of the most popular and important sources of information in the country, came under fire, on the one hand, for its failure to detect and remove hate speech inciting violence against the Rohingya people and, on the other hand, for the automated removal of content posted by Rohingya activists publicly denouncing the violence perpetrated against them.207 The algorithms used by Facebook were considered, in fact, to be actively responsible for the dissemination and virality of hatred against the persecuted group.208 This ultimately triggered the initiation of coordinated lawsuits
204 Xxxxxx and others (n 194).
205 Xxxxx Xxxxxx Xxxxxxx, Ubi Social, Xxx Xxx: Fondamenti Costituzionali Dei Social Network e Profili Giuridici Della Responsabilità Dei Provider (Xxxxxx Xxxxxx 2018) 188; Xxxxxx and others (n 194); Xxxxxxxxx (n 184) 264.
206 Xxxxxx and others (n 194). The lack of transparency as regards the processes of customization of content and the targeting of users represents a significant issue at the intersection between the right to freedom of expression and information and the right to privacy and data protection, and demonstrates how, in the context of the “algorithmic age”, the protection of these fundamental interests has undergone a process of convergence: on this aspect, see namely Xxxxxxxx De Xxxxxxxx, Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society (Cambridge University Press 2022). See also Xx Xxxxxxxx and Xxxx (n 139) 81–82.
207 Xxxxx (n 148) 128–129. As highlighted by Xx Xxxxxxxx and Xxxxxxxx, the hate speech detection systems of online social media platforms are more often than not insufficiently (or not at all) trained to deal with non-Western languages, such as African or Asian languages. In these cases, the margin of error increases significantly. Xxxxxxxx De Xxxxxxxx and Xxxxxx Xxxxxxxx, ‘Platform Governance at the Periphery: Moderation, Shutdowns and Intervention’ in Xxxxx Xxxxx and others (eds), Perspectives on Platform Regulation. Concepts and Models of Social Media Governance Across the Globe (Nomos 2021).
208 Xxxx Xxxxx, ‘A Genocide Incited on Facebook, With Posts From Myanmar’s Military’ The New York Times (15 October 2018) <xxxxx://xxx.xxxxxxx.xxx/0000/00/00/xxxxxxxxxx/xxxxxxx-xxxxxxxx-
against Meta seeking redress for the platform’s negligence in combatting the diffusion of dangerous and violent narratives.209
2.5. Anti-discrimination perspectives on hate speech: a substantive equality approach
2.5.1. Hate speech as domination: some takeaways from speech act theory
As highlighted throughout the previous sections, hate speech, both offline and online, can significantly affect the fundamental rights of targeted individuals and groups, as well as society as a whole. For instance, hate speech can excite the audiences it reaches and provoke them to perpetrate acts of violence and/or discrimination against the members of groups traditionally subject to marginalization and victimization. Additionally, hate speech can have direct effects on the well-being of those people, who may suffer significant psychological and psychosomatic damage. More generally, however, hate speech affects the dignity of targeted subjects as human beings, by denying their equal standing in society and relegating them to further conditions of isolation.
In other words, hate speech represents an instrument for perpetuating the traditional dynamics of power and domination characterizing the relationship between different segments of the population. In this respect, the philosophical branch of speech act theory, inaugurated by Xxxx Xxxxxxxx Austin210 and by his pupil Xxxx Xxxxxx Xxxxxx,211 can offer some relevant insights.212
According to Xxxxxx, there are cases where utterances can be “performative”, meaning that “there is something which is at the moment of uttering being done by the person
genocide.html> accessed 26 January 2023; Xxx Xxxxx, ‘U.N. Investigators Cite Facebook Role in Myanmar Crisis’ Reuters (12 March 2018) <xxxxx://xxx.xxxxxxx.xxx/xxxxxxx/xx-xxxxxxx-xxxxxxxx-xxxxxxxx-idUSKCN1GO2PN> accessed 26 January 2023; Xxxxxxx Xxxxx, ‘Meta Urged to Pay Reparations for Facebook’s Role in Rohingya Genocide’ (TechCrunch, 29 September 2022)
<xxxxx://xxxxxxxxxx.xxx/0000/00/00/xxxxxxx-xxxxxx-xxxxxxxx-xxxxxxxx-xxxxxxxxxxx/> accessed 26 January 2023. See also the dedicated report by Amnesty International: Amnesty International, ‘The Social Atrocity: Meta and the Right to Remedy for the Rohingya’ (Amnesty International 2022) ASA 16/5933/2022 <xxxxx://xxx.xxxxxxx.xxx/xx/xx-xxxxxxx/xxxxxxx/0000/00/XXX0000000000XXXLISH.pdf> accessed 26 January 2023.
209 Xxxxxxxxx Xxxxxxxxx, ‘Rohingya Refugees Sue Facebook for $150 Billion over Myanmar Violence’ Reuters (8 December 2021) <xxxxx://xxx.xxxxxxx.xxx/xxxxx/xxxx-xxxxxxx/xxxxxxxx-xxxxxxxx-xxx-xxxx- book-150-billion-over-myanmar-violence-2021-12-07/> accessed 26 January 2023; Xxx Xxxxx, ‘Roh- xxxxx Xxx Facebook for £150bn over Myanmar Genocide’ The Guardian (6 December 2021)
<xxxxx://xxx.xxxxxxxxxxx.xxx/xxxxxxxxxx/0000/xxx/00/xxxxxxxx-xxx-xxxxxxxx-xxxxxxx-xxxxxxxx-xx- uk-legal-action-social-media-violence> accessed 26 January 2023.
210 Xxxx X Xxxxxx, How to Do Things with Words: The Xxxxxxx Xxxxx Lectures Delivered at Harvard University in 1955 (Clarendon Press, Oxford University Press 1962).
211 Xxxx X Xxxxxx, ‘What Is a Speech Act?’ in Xxxxxxx Xxxxx (ed), Philosophy in America (Xxxxx and Xxxxx 1965); Xxxx X Xxxxxx, ‘Austin on Locutionary and Illocutionary Acts’ (1968) 77 The Philosophical Review 405; Xxxx X Xxxxxx, Expression and Meaning: Studies in the Theory of Speech Acts (Cambridge University Press 1979).
212 On the relationship between speech act theory and hate speech, see most notably Xxxxxxxxxx Xx Xxxx, Hate Speech e Discriminazione: Un’analisi Performativa Tra Diritti Umani e Teorie Della Libertà (Mucchi Editore 2020).
uttering”.213 Performative utterances are distinguished from “constative utterances”, where no additional act is performed apart from the act of speaking. For example, when saying “He is running”, the speaker merely describes a situation which is happening externally and upon which, therefore, they do not actively intervene; whereas, when saying “I apologize”, the speaker actually performs an action, that is, that of apologizing. According to Austin, therefore, the distinction between constative and performative utterances corresponds to that between saying and doing.214 In other words, there are cases where to speak is, in fact, to do. Besides, Xxxxxx himself warns that in most cases constative utterances also entail performative results, so that the barrier between speech and action is not always that clear: speech is more often than not an act.215
Based on this premise, Xxxxxx further develops a taxonomy by distinguishing between locutionary acts, illocutionary acts, and perlocutionary acts:
We first distinguished a group of things we do in saying something, which … we summed up by saying we perform a locutionary act, which is roughly equivalent to uttering a certain sentence with a certain sense and reference, which again is roughly equivalent to ‘meaning’ in the traditional sense. Second, we said that we also perform illocutionary acts such as informing, ordering, warning, undertaking …, i.e. utterances which have a certain (conventional) force. Thirdly, we may also perform perlocutionary acts: what we bring about or achieve by saying something, such as convincing, persuading, deterring, and even, say, surprising or misleading.
In practice, locutionary acts consist of the material acts “of” speaking (thus including constative utterances). These acts generally entail a certain conventional force such as to transform the sentence into an act of “doing” something: in this sense, the illocutionary act is precisely that act which is put in place “in” saying something. Finally, perlocutionary acts refer to the material consequences of speaking. For instance, the sentence “Shoot her!” represents a locutionary act in the sense that enunciating it represents per se an act; it is at the same time an illocutionary act because it has a conventional force in that it entails the act of ordering someone to do something (conventional consequence), i.e., to shoot a person; finally, it represents a perlocutionary act if that sentence is capable of persuading the person receiving that order, thus leading them to shoot (material consequence).216
Xxxxxx, in continuing Xxxxxx’x work, actually criticized the distinction between locutionary acts and illocutionary acts, arguing that “the meaning of the sentence, which is supposed to determine the locutionary act, is already sufficient to fix a certain range of illocutionary act”, so that it is not possible to “distinguish between meaning and force, because force is already part of the meaning of the sentence”.217 Conversely, Xxxxxx argues that the distinction between the illocutionary and the perlocutionary is essential, as
213 Austin (n 210) 60.
214 ibid 47.
215 “Perhaps indeed there is no great distinction between statements and performative utterances”. ibid 52.
216 Xxxx X Xxxxxx, ‘X.X. Xxxxxx (1911-1960)’ in Xxxxxxxx Xxxxxxx Xxxxxxxxx and Xxxxx Xxxx (eds), A Companion to Analytic Philosophy (Xxxxxxxxx 2001) 220–221; Xx Xxxx (n 212) 120.
217 Xxxxxx, ‘X.X. Xxxxxx’ (n 216) 221.
it serves to identify the capability of speech to perform per se an act “regardless of the subsequent effects on the hearers”.218
The distinction between illocutionary and perlocutionary acts is far from irrelevant to the debate over the harms of hate speech and, consequently, over the regulation of hate speech itself. As a matter of fact, hate speech can amount to a perlocutionary act: this is the case, for instance, where the speaker is able to convince their audiences and to push them to physically and materially commit acts of violence or discrimination against persons based on protected features. At the same time, however, it has been argued that hate speech represents an illocutionary act, as it is capable, “in” being uttered, of putting in place a direct act of subordination of the targeted subjects.219
In other words, even in those cases where it does not lead audiences to take material actions against the people and groups it aims to attack, hate speech is nonetheless capable of performing an illocutionary act whose very utterance establishes a dominator-dominated relationship between social groups and demographics:
Hate speech is a kind of … oppressive speech: letters “persecute” and “degrade” … with assault-like hate speech; Nazi editorials “incite” and “promote” hatred against Jews with propaganda-like hate speech. But there may be other kinds of … oppressive speech: a court says slaves are “incapable of performing civil acts”, are “things, not persons”; a proprietor says “Whites Only”. Speech like this is not, or not solely, assault-like or propaganda-like … Its point is to enact, or help enact, a system of … oppression: it authoritatively ranks a certain group as inferior, deprives them of powers and rights, legitimates discrimination against them. Speech that does these things has, perhaps, the illocutionary force of subordination.220
An illocutionary approach to hate speech thus reveals the phenomenon’s inherent potential for harm, regardless of its direct “material” consequences, as the utterance of hate speech discourse constitutes an act of subordination per se. Of course, not all hate speech acts are identical, as their conventional force, and therefore their capability to constitute subordination, also depends on a variety of extra-verbal and contextual
218 ibid.
219 Xxx Xxxxxxx, Xxxxx Xxxxxxxxx and Xxxxxx Xxxxxxxx, ‘Language and Race’ in Xxxxxxx Xxxxxxx and Xxxxx Xxxxx Xxxx (eds), The Routledge Companion to Philosophy of Language (Routledge 2012); Xxx Xxxxton, ‘Beyond Belief: Pragmatics in Hate Speech and Pornography’ in Xxxxxx Xxxxxx and Xxxx Xxxx XxXxxxx (eds), Speech & Harm: Controversies Over Free Speech (Oxford University Press 2012); Xxxxxx Xxxxxx, ‘Subordinating Speech’ in Xxxxxx Xxxxxx and Xxxx Xxxx XxXxxxx (eds), Speech & Harm: Controversies Over Free Speech (Oxford University Press 2012); Xxx Xxxxxxx, ‘The Authority of Hate Speech’ in Xxxx Xxxxxxx, Xxxxxx Xxxxx and Xxxxx Xxxxxx (eds), Oxford Studies in Philosophy of Law, vol 3 (Oxford University Press 2018). According to XxxXxxxxx, a similar role in promoting subordination (of women) is performed by pornography: Xxxxxxxxx X XxxXxxxxx, ‘Pornography as Defamation and Discrimination’ (1991) 71 Boston University Law Review 793.
220 Xxxxxxx, Xxxxxxxxx and Xxxxxxxx (n 219) 759.
221 The direct impact of hate speech as an illocutionary act having the power of perpetrating long-standing relations of domination, subordination, and marginalization is well portrayed by Xxxx X Xxxxxxx with respect to cross burning, a symbolic gesture typical of the Ku Klux Klan: “All of this is what we see when a cross burns on a suburban lawn. The cross is chosen because it carries with it in an instant 400 years’ worth of terror. However we analyze the speech of cross burning the embodied experience of life under a reign of terror must inform the conversation”. Xxxx X Xxxxxxx, ‘Dissent in a Crowded Theater’ (2019) 72 SMU Law Review 441, 454.
elements, as well as on the speaker’s position of authority.222 Nevertheless, the conventional force that hate speech has in structuring society and in building a hierarchy between different demographics represents an essential aspect that the law must take into account.
As has been noted,223 the distinction between illocutionary and perlocutionary is seemingly reflected by, and helps explain, the different legal approaches taken, most notably, by the US and by Europe. Whereas, following Xxxxxxxxxxx v Ohio, hate speech in the US may be subject to limitation only when it “is directed to inciting or producing imminent lawless action and is likely to incite or produce such an action”,224 so that the focus is, clearly, on the “material” consequences of the speech act (perlocutionary), European jurisdictions tend to extend the scope of hate speech regulation, so as to prohibit more generally speech acts that have the effect, for the simple fact of being uttered, of dehumanizing and attacking the dignity of people as (equal) members of society.
The European approach, therefore, ultimately aims to remedy the structural power dynamics of domination and subordination that characterize the relationship between segments of the population, as these dynamics poison fundamental and constitutional democratic values and directly affect the freedom of the groups targeted by hate speech.225 It is thus no coincidence that the European Commission, in its Communication on extending the list of EU crimes to hate crimes and hate speech, expressly argued for a common regulatory approach as those phenomena lead “to the devaluation of and threat to the human dignity of a person or a group”, namely by negating “their equal footing as members of the society, including their right to participate in the political or social life”.226
2.5.2. Substantive equality as a lodestar for hate speech governance
2.5.2.1. The concept of substantive equality
Interpreting hate speech as an illocutionary act, inherently capable of perpetrating and perpetuating societal dynamics of domination and subordination, and thus identifying hate speech regulation as a possible tool to remedy the resulting imbalances between demographics, leads to the conclusion that such forms of regulation ultimately aim, at least in the European context, at fostering and protecting the principle of equality, understood not so much in its formal sense but, rather, in its “substantive” one.
222 Maitra (n 219).
223 Di Rosa (n 212).
224 Xxxxxxxxxxx v Ohio (n 5) 447.
225 The term “freedom” is here understood, following the neo-republican meaning of the term, as “non-domination” (by others). Such an interpretation of freedom, which is different from the libertarian one focusing on “non-interference” (by the state), is inherently egalitarian, requiring, according to Xxxxxx, the pursuit of structural egalitarianism in society: “For all practical purposes, the goal which we set for ourselves in espousing the republican ideal of freedom is the promotion of equally intense non-domination. The general presumption can be that non-domination will not be furthered unless there is an increase in the equality with which the intensity of non-domination is enjoyed”. Xxxxxx Xxxxxx, Republicanism: A Theory of Freedom and Government (Clarendon Press, Oxford University Press 1997) 116.
226 European Commission, ‘Communication on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (n 81) 7.
Whereas the concept of formal equality, grounded in the Aristotelian postulate that, unless there is an objective reason not to do so, similar cases should be treated alike and different cases should be treated differently (equal treatment principle), tends to apply symmetrically to all individuals, irrespective of their gender, ethnic background, sexual orientation, gender identity, age, (dis)ability, etc.,227 substantive equality takes into account that “disadvantage persists, and this disadvantage tends to be concentrated in groups with a particular status, such as women, people with disabilities, ethnic minorities and others”.228 As a result, substantive equality requires that such disadvantages, inherent within society and often the product of historical forms of discrimination, marginalization, and victimization, are directly confronted by the law, often through asymmetric measures.
In fact, the concept of substantive equality is not uniform across jurisdictions or across the literature. In this respect, Xxxxxx Xxxxxxx identifies at least three conceptions which may be referred to under the umbrella term of “substantive equality”. The first approach focuses on results rather than on treatment (equality of results), meaning that the law, instead of treating all individuals the same way, takes affirmative steps and adopts “preferential” treatments in order to actively distribute benefits in a fairer way: a practical example of such a system is the adoption of quota systems for occupational purposes (e.g., reserving a specific percentage of positions for women).229 The second approach focuses on “equality of opportunity”, meaning that the law, rather than redistributing all benefits in a top-down fashion, should make efforts to ensure that all individuals are put in the same condition by removing pre-existing disadvantages: the metaphor, thus, is that of competitors in a race, who must all be brought to the same starting point; once this goal has been attained, individuals should be treated equally.230 The third approach focuses on the promotion of the fundamental core of the right to equality, identified in the principle of human dignity.231
In opposition to such perspectives, which reduce the notion of substantive equality to one specific meaning, Xxxxxxx argues for a “four-dimensional concept”,232 which has the advantage of allowing for a more holistic approach in responding to the real social wrongs connected to inequality and addressing its many facets. The first dimension consists of redressing the disadvantages to which certain groups and categories are subjected, tackling the detrimental consequences attached to a specific social status (redistributive dimension). Second, enforcing substantive equality requires addressing stigma, stereotyping, and humiliation, which have the effect of denying the humanity of targeted individuals: by responding to such actions, the law can protect victims’ societal “recognition”,
227 Xxxxxx Xxxxx and Xxxxxxxx Xxxxxx, EU Anti-Discrimination Law (2nd edn, Oxford University Press 2012) 5–6.
228 Xxxxxx Xxxxxxx, ‘Substantive Equality Revisited’ (2016) 14 International Journal of Constitutional Law 712, 712.
229 ibid 720–723.
230 ibid 723–724.
231 ibid 724–727.
232 ibid 727. See also Xxxxxx Xxxxxxx, ‘Emerging from the Shadows: Substantive Equality and Article 14 of the European Convention on Human Rights’ (2016) 16 Human Rights Law Review 273, 281–284.
which “refers to the central importance of inter-personal affirmation to [the] sense of who we are”233 (recognition dimension). Third, substantive equality should focus on promoting social inclusion and on making sure that disadvantaged individuals are given a political voice (participative dimension). Fourth, substantive equality must respect and accommodate differences among humans, meaning that “existing social structures must be changed to accommodate difference rather than requiring members of out-groups to conform to the dominant norm” (transformative dimension).234
In partial opposition to Xxxxxxx’x account, XxxXxxxxx has instead placed the notion of hierarchy at the very core of substantive equality:235
The essence of inequality is the misanthropic notion … that some are intrinsically more worthy than others, hence justly belong elevated over them, because of the group of which they are (or are perceived to be) a member. The substance of each inequality, hence the domain in which it operates as a hierarchy, is distinctive to each one, but it is hierarchy that makes it an inequality.236
Both positions offer, nonetheless, important insights into how a substantive equality approach can inform the discourse on hate speech governance, both in the online and in the offline dimension. On the one hand, if substantive equality, as stated by XxxXxxxxx, aims at addressing those social inequalities that rest upon historical hierarchies of groups and individuals, and if hate speech as an illocutionary act has the power of creating and structuring domination and subordination dynamics, then hate speech regulation can (and should) represent a direct instrument to address those social hierarchies.
On the other hand, Xxxxxxx’x architecture of the principle of substantive equality can offer important indications for the purposes of creating a roadmap for hate speech governance. Most notably, an effective approach to such a phenomenon should focus not only on tackling, and punishing, the stigma, stereotyping, and humiliation hate speech entails (recognition dimension), but should also ensure, for targeted groups and categories of people, the full protection and fostering of fundamental rights, including freedom of expression and, in general, all fundamental rights and liberties that are preconditions for participation in public and political life (participative dimension). In other words, a substantive equality approach to hate speech governance equally entails a
233 Xxxxxxx (n 228) 730–731.
234 ibid 733. “For example, working hours have always been patterned on the assumption that childcare takes place outside the labor market. Women who wish to participate in the paid labor market must conform to this paradigm, either by forgoing having children, or leaving their children with paid child-carers or family members. Substantive equality aims to change such institutions so that participative parenting is possible for both mothers and fathers in the labor market. Similarly, the built environment must be adapted to accommodate the needs of disabled people, and dress codes and holidays must accommodate ethnic and religious minorities”.
235 Xxxxxxxxx X XxxXxxxxx, ‘Substantive Equality Revisited: A Reply to Xxxxxx Xxxxxxx’ (2016) 14 International Journal of Constitutional Law 739, 740.
236 Catharine A XxxXxxxxx, ‘Substantive Equality: A Perspective’ (2011) 96 Minnesota Law Review 1, 12.
“negative” facet, consisting of the prohibition and punishment of hate speech acts, and a “positive” facet, consisting of the promotion of the voices of minorities and of historically dominated groups.
A substantive equality approach to hate speech governance appears to be quite consistent with today’s European multi-level system of human rights protection. However, in this respect, it is important to acknowledge that the European approach to the right to equality and non-discrimination has undergone significant developments since the turn of the new millennium. In fact, at least in the beginning, the right to non-discrimination, both under Article 14 ECHR and under EU law, was rather subject to a formalistic interpretation.
Most notably, equality law scholars lamented for many years the ECtHR’s tendency to treat Article 14 as a “Cinderella provision”.237 Indeed, the principle of non-discrimination was only applied in conjunction with other rights set forth within the Convention. In other words, the right to equality did not have, in the interpretation of the Court, an equal standing with other rights but had, rather, a “parasitic” nature, as it simply prohibited discrimination in the enjoyment of other rights.238 Additionally, the ECtHR was criticized for failing to develop an approach to equality capable of recognizing and considering as a relevant factor the systematic subjection of a certain group to disadvantage, discrimination, exclusion, and oppression.239
More recently, however, Strasbourg case law on Article 14 ECHR has significantly evolved, progressively acknowledging the insufficiency of previous approaches to non-discrimination and thus accepting, albeit often implicitly, multiple features resonating with the principle of substantive equality.240 In particular, the Court has begun to accept that equal treatment before the law may ultimately have the effect of causing forms of indirect discrimination and that, therefore, there may be cases where contracting states are
237 Xxxx X’Xxxxxxx, ‘Cinderella Comes to the Ball: Art 14 and the Right to Non-Discrimination in the ECHR’ (2009) 29 Legal Studies 211.
238 Fredman (n 232) 273.
239 In his partly dissenting opinion to the 2002 judgment in Xxxxxxxxx v Bulgaria, Judge Xxxxxxx significantly criticized the ECtHR’s “colour-blind” approach: “I consider it particularly disturbing that the Court, in over fifty years of pertinacious judicial scrutiny, has not, to date, found one single instance of violation of the right to life (Article 2) or the right not to be subjected to torture or to other degrading or inhuman treatment or punishment (Article 3) induced by the race, colour or place of origin of the victim … Frequently and regularly the Court acknowledges that members of vulnerable minorities are deprived of life or subjected to appalling treatment in violation of Article 3; but not once has the Court found that this happens to be linked to their ethnicity. Kurds, coloured, Muslims, Roma and others are again and again killed, tortured or maimed, but the Court is not persuaded that their race, colour, nationality or place of origin has anything to do with it. Misfortunes punctually visit disadvantaged minority groups, but only as the result of well-disposed coincidence” Anguelova v Bulgaria [2002] ECtHR 38361/97, ECHR 2002-IV, Partly Dissenting Opinion of Judge Xxxxxxx [2-3].
240 X’Xxxxxxx (n 237); Xxxxxxx (n 232).
required to actively treat individuals differently, taking positive actions to remove societal hurdles, when this is necessary to address situations of objective unfairness.241
This progressive shift from a formalistic to a more substantive protection of the right to non-discrimination can also be traced within the case law concerning, in particular, anti-LGBTQIA+ hate speech. Indeed, a relatively small but highly significant development emerges if one compares the 2012 judgment of Xxxxxxxxx and others v Sweden242 with the already mentioned decisions of Xxxxxxxx and Levickas v Lithuania and Association Accept and others v Romania.243 In the former case, the ECtHR addressed the legitimacy of the criminal sanctions imposed by Sweden on a group of people who had entered a high school and distributed leaflets, leaving many of them in pupils’ lockers, containing serious accusations against homosexual people and associating homosexuality with HIV/AIDS and paedophilia. On that occasion, the Court recognized for the first time that the criminal prosecution of anti-LGBTQIA+ speech could be consistent with Article 10 ECHR. Nevertheless, the decision did not argue in favour of criminalizing such a phenomenon across the states party to the Council of Europe.244
Conversely, although both Xxxxxxxx and Levickas and Association Accept implicitly recognize that states enjoy a wide margin of appreciation with respect to such criminalization, they nonetheless stress, as has already been mentioned above,245 the need to comply with the “positive obligation to secure the effective enjoyment of these rights and freedoms under the Convention”, arguing that “this obligation is of particular importance for persons … belonging to minorities, because they are more vulnerable to victimisation”.246 The focus on the actual existence of “positive obligations” to support those groups that are at risk of victimization, rather than on the mere acceptability of implementing measures against forms of hate speech, represents an important step forward and, arguably, an implicit recognition that the ultimate goal of hate speech regulation is to confront structural hierarchies of power in society and thus to promote forms of substantive equality.247 Such a perspective was later confirmed at the beginning of 2023 in Valaitis v Lithuania.248
241 For instance, in the case of Xxxxxxxxx and XxXxxx v Italy, which concerned the refusal of Italian authorities to grant a residence permit for family reasons to a New Zealand citizen who was in a same-sex relationship with an Italian citizen, based on the fact that the two were not married, the Court underlined that, because at the time of the facts Italy did not provide for the recognition of same-sex marriage or same-sex civil unions, “by deciding to treat homosexual couples … in the same way as heterosexual couples who had not regularized their situation the State infringed the applicants’ right not to be discriminated against on grounds of sexual orientation in the enjoyment of their rights under Article 8 of the Convention”. Xxxxxxxxx and McCall v Italy [2016] ECtHR 51362/09 [98].
242 Xxxxxxxxx and others v Sweden [2012] ECtHR 1813/07, ECHR 2012.
243 Xxxxxxxx and Levickas v Lithuania (n 131); Association Accept and Others v Romania (n 131).
244 Xxx Xxxxxxx, ‘Punire l’omofobia: (Non) Ce Lo Chiede l’Europa. Riflessioni Sulle Incertezze Giurisprudenziali e Normative in Tema Di Hate Speech’ (2015) 1 GenIUS 54.
245 See supra, §2.3.2.
246 Xxxxxxxx and Levickas v Lithuania (n 131) para 108.
247 Besides, a substantive equality approach, namely in its participative dimension, seemingly emerges in the reference to the chilling effect of hate speech on targeted groups made in Committee of Ministers of the Council of Europe, ‘CM/Rec(2022)16’ (n 71).
248 Xxxxxxxx v Lithuania [2023] ECtHR 39375/19. In that case, however, the Court found that Lithuania had not, in fact, violated the applicant’s rights under Article 13 (right to an effective remedy), precisely
Simultaneously, the EU approach to equality and non-discrimination has also undergone significant developments. As is well known, the European Communities were originally focused mainly on the promotion of economic and market interests, so that the notion of equality was initially interpreted in a strictly formalistic sense. Indeed, non-discrimination was inherently seen as being “instrumental for the economic purpose of free movement of people, services, goods, and capital” and thus “primarily serve[d] economic integration and [was] therefore naturally nonprescriptive in substance”.249 Subsequently, however, the EU has turned more and more towards a human rights- and constitutional-oriented paradigm: in particular, the Court of Justice has played an essential role in the evolution of anti-discrimination law.250
Thus, for instance, the 1974 judgment of Sotgiu251 already recognized that apparently neutral provisions and rules can have the effect of leading to unfair consequences when applied to different demographics, concluding that rules regarding equality of treatment “forbid not only overt discrimination by reason of nationality but also all covert forms of discrimination which, by the application of other criteria of differentiation, lead in fact to the same result”.252 Hence, the Luxembourg judges introduced the concept of what would later be identified and defined by EU equality directives as “indirect discrimination”.253 As has been noted, the concept itself of indirect discrimination is, at its core, representative of an inherently substantive goal of EU anti-discrimination law, as in many cases it may be necessary, in order to avoid liability for indirect discriminatory practices, to actively accommodate group differences, so that “a limited duty of preventive positive action is … implicit in the prohibition of indirect discrimination”.254
because, following the previous holding of Beizaras and Levickas, authorities had in fact fulfilled their positive obligation to protect homosexual people from hate speech.
249 Xxxx Xx Xxx, ‘Substantive Formal Equality in EU Non-Discrimination Law’ in Xxxxxx Xxxxxxxxx (ed), The European Union as Protector and Promoter of Equality (Springer 2020) 247.
250 As a matter of fact, scholars have highlighted that, although the ECtHR has traditionally taken the leading role in the development of human rights principles within Europe, the right to equality and non-discrimination represents an exception, as the CJEU has historically set landmark principles. See Xxxxxxx Xxxxxxx, ‘Non-Discrimination, the European Court of Justice and the European Court of Human Rights: Who Takes the Lead?’ in Xxxxxx Xxxxxxxxx (ed), The European Union as Protector and Promoter of Equality (Springer 2020) 138.
251 Case C-152/73, Xxxxxxxx Xxxxx Xxxxxx v Deutsche Bundespost [1974] ECLI:EU:C:1974:13.
252 ibid 11.
253 Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin (Racial Equality Directive), OJ L 180/22 art 2(2)(b); Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation (General Framework for Equal Treatment Directive), OJ L 303/16 art 2(2)(b); Council Directive 2004/113/EC of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services (Directive on Gender Equality in Goods and Services), OJ L 373/37 art 2(b); Directive 2006/54/EC of the European Parliament and of the Council of 5 July 2006 on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation (recast) (Recast Equal Treatment Directive), OJ L 204/23 art 2(1)(b).
254 Xxxx Xx Xxx, ‘The European Court of Justice and the March towards Substantive Equality in European Union Anti-Discrimination Law’ (2020) 20 International Journal of Discrimination and the Law 62, 71. In the case of Xxxxxxx, an internal rule of a private undertaking, G4S, prohibited the undertaking’s employees from wearing an Islamic headscarf, leading to the dismissal of Ms Xxxxxxx, who refused to comply
Moreover, in addition to having helped introduce the notion of indirect discrimination, the CJEU has also addressed the matter of positive actions, explicitly recognized as legitimate under the equality directives.255 In this respect, in Milkova, where the referring court had brought up questions concerning the appropriateness of legislation favouring the employment of disabled people, the Court underscored the consistency of the adoption of such measures with the general goal of the directives themselves, arguing as follows:
Thus, such a distinction in favour of people with disabilities contributes to achieving the aim of Directive 2000/78 … that is to say, the fight against discrimination, in the present case, based on disability as regards employment and occupation … The purpose of Article 7(2) of Directive 2000/78 is to authorise specific measures aimed at effectively eliminating or reducing actual instances of inequality affecting people with disabilities, which may exist in their social lives and, in particular, their professional lives, and to achieve substantive, rather than formal, equality by reducing those inequalities.256
Admittedly, as highlighted by Xx Xxx, EU law and the CJEU still rest upon a bedrock of formality with respect to the right to equality. Nevertheless, the CJEU has built upon such a bedrock a significant body of case law through which it has been able to associate with it important substantive equality goals.257 Thus, overall, the multiplicity of values connected to the principle and the promotion of substantive equality seems to be consistent with CJEU case law and with the EU human rights model.
A substantive equality approach to hate speech governance thus appears to be fully compatible not only with the ECHR framework, but also with that of the EU. With respect to this point, moreover, the policy documents delivered by the European Commission on this matter seem to go precisely in that direction. These include, in particular, the Communication on the European democracy action plan258 and, even more, the already mentioned Communication on extending the list of EU crimes to hate speech and hate crime. Indeed, the Commission has proven to be especially invested in the need to address the direct silencing effect of hate speech, the utterance of which often results in members of discriminated groups refraining from engaging in public debate precisely because of the
with such a rule. As the prohibition was meant to showcase the neutrality of G4S, the CJEU held that the referring court should evaluate if, in the case at hand, it would have been possible for G4S to offer Ms Xxxxxxx a post not involving any visual contact with customers. In other words, the Court concluded that the undertaking should have taken, where possible, positive actions to avoid the discriminatory effects of the internal rule. Case C-157/15, Xxxxxx Xxxxxxx and Centrum voor gelijkheid van kansen en voor racismebestrijding v G4S Secure Solutions NV [2017] ECLI:EU:C:2017:203 [43]. Thus Xxxxx and Xxxxxx: “The rule against indirect discrimination ... represents an attempt to provide a greater degree of substantive equality, in particular equality of opportunity”. Xxxxx and Xxxxxx (n 227) 142–143.
255 Racial Equality Directive art 5; General Framework for Equal Treatment Directive art 7; Directive on Gender Equality in Goods and Services art 6; Recast Equal Treatment Directive art 3.
256 Case C-406/15, Xxxxx Xxxxxxx v Izpalnitelen direktor na Agentsiata za privatizatsia i sledprivatizatsionen kontrol [2017] ECLI:EU:C:2017:198 [46–47] (emphasis added).
257 “It is no exaggeration to state that the Court of Justice has retooled formal EU equality law towards substantive equality aims, redefining piecemeal the overarching purpose of EU equality law in the process. Its practical effects in real life may well frustrate the engaged observer or activist, but non-discrimination law can never shape the course of society on its own. What should be acknowledged from a legal perspective, however, is that the pragmatic flexibility of the CJEU in furthering substantive equality goes hand in hand with judicial discretion. Substantive equality stands for outcomes”. De Vos (n 254) 82.
258 European Commission, ‘Communication on the European Democracy Action Plan’ (n 165).
hatred they are afraid of being subjected to, and on the need to promote, therefore, what Xxxxxxx defined as the participative dimension of substantive equality.259
2.5.3. Hate speech governance and substantive equality in the world of bits
As argued above, the principle and value of substantive equality has become increasingly relevant within the European multi-level human rights protection system, and has also touched, even if implicitly, upon the debate concerning the governance of hate speech. This has a significant impact on the regulation and governance of the phenomenon in the context of the Internet.
Regulation of content in the “world of bits”260 necessarily needs to be adapted to the new triangular scheme characterizing freedom of expression today, where the dynamics of speech regulation no longer concern only the relationship between the individual speaker and the state, but have to deal with a new, third actor: the private corporate owners of digital infrastructures, that is, ISPs, notably including social media and social network platforms.261 This has led many jurisdictions to move from “old-school” approaches to speech regulation, generally employing forms of control over individual speakers and publishers – including the adoption of criminal penalties, civil damages, and injunctions against them – to “new-school” techniques, which instead exercise forms of control that are aimed precisely at those private owners of digital infrastructures, often by providing for forms of liability for the presence of unlawful content upon them.262 As will be highlighted throughout the next Chapter, this has been, precisely, the preferred approach of the EU with respect to online content regulation throughout the last decade and, especially, from the mid-2010s onwards.
Providing for increased forms of legal liability and accountability for ISPs with respect to the presence of illegal and harmful content on the Internet represents, indeed, an essential instrument to promote a safer digital sphere, as, from a technological point of view, these actors are generally better equipped than state authorities for the purposes of enforcing compliance with the rules by Internet users. In most cases, thanks to the use of AI systems and algorithms for content moderation, ISPs are even capable of taking proactive and preventive measures against the dissemination of specific items and can thus contribute enormously to limiting the existence and spread of unwarranted content. Besides, as mentioned above, ISPs, notably social media and social network platforms, tend autonomously to adopt rules and measures meant specifically to improve users’ experiences by protecting them from exposure to unpleasant material.263
259 European Commission, ‘Communication on Extending the List of EU Crimes to Hate Speech and Hate Crime’ (n 81) 7, 9–10.
260 Xxxxxx Xxxxxxxxx, ‘Judicial Protection of Fundamental Rights in the Transition from the World of Atoms to the World of Bits: The Case of Freedom of Speech’ (2019) 25 European Law Journal 155.
261 Xxxx X Xxxxxx, ‘Free Speech Is a Triangle’ (2018) 118 Columbia Law Review 2011.
262 Xxxx X Xxxxxx, ‘Old-School/New-School Speech Regulation’ (2014) 127 Harvard Law Review 2296, 2298.
263 Xxxxxxxxx (n 149); Xxxxxx and Land (n 149). See infra, §5.2.
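The proactive, pre-publication screening just described can be pictured, in purely illustrative terms, as a filter that scores every item before it becomes visible and routes it to publication, human review, or blocking. The following Python sketch is offered only as an illustration of that logic: the term list, thresholds, and function names are hypothetical and do not correspond to any actual platform system, which would rely on trained classifiers and contextual signals rather than a static lexicon.

from dataclasses import dataclass

# Toy lexicon of flagged expressions; purely hypothetical placeholders.
FLAGGED_TERMS = {"offensive_term_a", "offensive_term_b"}

@dataclass
class Decision:
    action: str   # "publish", "hold_for_review", or "block"
    score: float  # crude risk estimate in [0, 1]

def score_post(text: str) -> float:
    """Return a naive risk score: the share of flagged terms found in the text."""
    lowered = text.lower()
    hits = sum(term in lowered for term in FLAGGED_TERMS)
    return min(1.0, hits / len(FLAGGED_TERMS))

def moderate(text: str, block_threshold: float = 0.9,
             review_threshold: float = 0.4) -> Decision:
    """Decide what happens to a post before it becomes visible to other users."""
    risk = score_post(text)
    if risk >= block_threshold:
        return Decision("block", risk)
    if risk >= review_threshold:
        return Decision("hold_for_review", risk)
    return Decision("publish", risk)

if __name__ == "__main__":
    for post in ["a harmless remark", "offensive_term_a offensive_term_b"]:
        print(post, "->", moderate(post))

Run on the two sample posts, the sketch publishes the first and blocks the second; the point is simply that the decision is taken ex ante, before any victim or authority has flagged the content.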
Nonetheless, the obvious drawbacks of such an approach should not be ignored. Vesting private corporations, in practice, with the power to govern individuals’ freedom of online expression inherently raises significant questions and concerns as regards the protection of such a fundamental right and pillar of democratic society.264 In this respect, Xxxxx Xxxx, former Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, has warned against the risks connected to the rise of a “platform law”,265 that is, the set of rules privately defined by the providers of intermediary services (notably, hosting services) within their terms and conditions. Thus, the increased para-constitutional role played by ISPs has led to the recent rise of calls for the development of new forms of “digital constitutionalism”.266
Furthermore, the use of automated systems for content moderation is still often subject to significant error rates, especially when it comes to targeting forms of “toxic” or “hate” speech, the existence of which generally requires a qualitative assessment of the contextual background of the specific utterance. Notably, research has shown how hate speech detection systems can adversely impact precisely those speakers who are particularly vulnerable to being victimized by such a phenomenon.267 “New-school” speech regulation systems could have the effect of significantly encouraging the use of these tools and, therefore, of increasing the risks of errors and biased results.
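The adverse impact just mentioned can be made concrete by imagining how such a disparity would be measured. The sketch below is offered purely as an illustration, not as a description of any existing audit tool: it compares a detector’s false positive rate – benign posts wrongly flagged – across two hypothetical speaker groups, with data, group labels, and figures invented for the example.

from collections import defaultdict

def false_positive_rates(samples):
    """samples: iterable of (group, is_hateful_ground_truth, flagged_by_model)."""
    wrongly_flagged = defaultdict(int)  # benign posts flagged as hateful, per group
    benign_total = defaultdict(int)     # all benign posts, per group
    for group, is_hateful, flagged in samples:
        if not is_hateful:
            benign_total[group] += 1
            if flagged:
                wrongly_flagged[group] += 1
    return {g: wrongly_flagged[g] / benign_total[g] for g in benign_total}

if __name__ == "__main__":
    toy_data = [
        ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
        ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
    ]
    # A persistent gap between the two rates means that lawful speech from one
    # group is suppressed more often than from the other: the "silencing" effect
    # discussed in the text.
    print(false_positive_rates(toy_data))

A persistent gap of this kind, on real data, is precisely the kind of disparity documented in the research cited above.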
A substantive equality approach to hate speech governance, however, requires directly addressing these issues. Most notably, the participative dimension of substantive equality, which is aimed precisely at promoting and giving strength to the voices of those individuals that are systematically targeted by hate speech, is incompatible with the silencing impact that automated systems of content detection and moderation can have, paradoxically, precisely on them. Such an inconsistency raises important challenges for the governance of the hate speech phenomenon at its intersection with AI fairness268 in the context of the European Union and of Europe in general.269
264 See infra, §3.2.2.
265 Xxxxx Xxxx, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression’ (Human Rights Council 2018) A/HRC/38/35 para 1.
266 Xxxxxxx X Xxxxx, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (2018) 4 Social Media + Society 2056305118787812; Xxxxx (n 148); Xxxxxxxxx, Judicial protection of fundamental rights on the Internet (n 157); Xxxxxxxx De Xxxxxxxx, ‘From Constitutional Freedoms to the Power of the Platforms: Protecting Fundamental Rights Online in the Algorithmic Society’ (2019) 11 European Journal of Legal Studies 65; Xxxxxxxx De Xxxxxxxx, ‘Democratising Online Content Moderation: A Constitutional Framework’ (2020) 36 Computer Law & Security Review 105374. See more infra, §5.5.
267 European Union Agency for Fundamental Rights, Bias in Algorithms: Artificial Intelligence and Discrimination (Publications Office 2022) 49–72 <xxxxx://xxxx.xxxxxx.xx/xxx/00.0000/00000> accessed 3 February 2023.
268 With respect to the relationship between AI (namely, machine-learning) and the promotion of substantive equality in the context of the EU, see most notably Xxxxxx Xxxxxxx, Xxxxx Xxxxxxxxxxx and Xxxxx Xxxxxxx, ‘Bias Preservation in Machine Learning: The Legality of Fairness Metrics under EU Non-Discrimination Law’ (2020) 123 West Virginia Law Review 735.
269 See infra, §5.4.
The present Chapter has addressed the definition of what hate speech is under the law, focusing both on the international and European framework, and has offered some insights into the rationales connected to the adoption (or rejection) of legal measures aimed at limiting and punishing the utterance and spread of the phenomenon.
Most notably, the previous sections have highlighted what the harms of hate speech can be both in the offline and in the online context and have stressed the deep connection between hate speech and the persistence of societal dynamics of power, domination, discrimination, and subordination. For these reasons, the Chapter has argued that the paramount goal of the law, in addressing such a phenomenon, should be that of offering a remedy to such dynamics, namely by promoting and fostering the values of substantive equality (in particular under its participative dimension).
The next Chapter, in analysing the developments in EU law as regards the regulation of online content and, especially, the liability of ISPs for the presence and spread of hate speech in the context of the Internet, will take precisely this perspective, arguing for a substantive equality-oriented approach to speech governance.
3.
Hate Speech and Intermediary Liability: The European Framework
Summary: 3.1. Introduction. – 3.2. Internet intermediaries and the triangular model of online speech regulation. – 3.2.1. Internet intermediaries. – 3.2.2. New-school speech regulation and constitutional challenges. – 3.3. Intermediary liability and hate speech: case law from the ECtHR. – 3.3.1. The case of Delfi AS v Estonia. – 3.3.2. The legacy of Delfi. – 3.3.2.1. MTE and Xxxxx.xx v Hungary. – 3.3.2.2. Subsequent developments. – 3.4. Intermediary liability and hate speech: the framework of the EU. – 3.4.1. Intermediary (non)liability at the turn of the millennium: the e-Commerce Directive. – 3.4.2. Judicial activism of the Luxembourg Court. – 3.4.3. A new phase for the EU. – 3.4.3.1. The “new season” of content moderation regulation. – 3.4.3.2. The new sectoral framework on illegal content. – 3.4.3.3. The Code of Conduct on Illegal Hate Speech. – 3.5. The Digital Services Act. – 3.5.1. The Digital Services Package. – 3.5.2. The rules on the liability of providers of intermediary services. – 3.5.3. The new due diligence obligations for a transparent and safe online environment.
– 3.5.3.1. Provisions applicable to all providers of intermediary services. – 3.5.3.2. Provisions applicable to providers of hosting services. – 3.5.3.3. Provisions applicable to providers of online platforms. – 3.5.3.4. Obligations for providers of very large online platforms and of very large online search engines to manage systemic risks. – 3.5.3.5. Standards, codes of conduct, and crisis protocols. – 3.5.4. DSA and hate speech moderation. – 3.5.4.1. Applicability of the DSA to hate speech moderation. – 3.5.4.2. Hate speech moderation and equality in the DSA. – 3.6. Conclusions.
Having explored in Chapter 2 the main features characterizing the phenomenon of hate speech both offline and online, and having thus highlighted the main rationales and goals that may guide the law in regulating, and even banning, hate speech, the present Chapter delves into the evolution of the intermediary liability regime for third-party content within the European context and the effects of such evolution on the governance of hate speech. As anticipated in Chapter 2, recent legislative approaches towards the governance of speech in the digital landscape have turned increasingly towards forms of “new-school” speech regulation, building on the new triadic dynamics of speech on the Internet. Both within the ECHR and EU systems, the legal framework has in this respect undergone important developments since the turn of the millennium.
First of all, Section 3.2 examines the notion of “Internet intermediaries” and further investigates the effects of their rise in the context of the regulation of speech on the Internet, highlighting some concerns and challenges particularly relevant through the lens of constitutional and human rights law.
Section 3.3 addresses major ECtHR case law on intermediary liability with a specific eye on hate speech, focusing namely on the landmark judgment of Delfi AS v Estonia (§3.3.1) and its legacy (§3.3.2): the Section discusses, notably, how the ECtHR case law has in this respect established a rather exceptional approach towards intermediary liability for third-party hate speech content as opposed to other types of unlawful material.
Section 3.4, instead, addresses the extraordinary evolution of the EU framework, moving from its original liberal phase, symbolized by the e-Commerce Directive (§3.4.1), investigating the active role of the CJEU in adapting the interpretation of the Directive in the light of the evolving technological paradigm (§3.4.2), and, finally, offering an overview of the most recent legislative trends characterizing EU policy strategies on content moderation from the end of the 2010s (§3.4.3). In this respect, the work critically assesses the features of the developing framework, including the challenges arising from a constitutional perspective.
Section 3.5 explores the latest, and possibly most relevant, piece of the developing EU framework on content moderation. The Digital Services Act, finally adopted in October 2022, carries out a general and horizontally applicable reform of the system established in 2000 by the e-Commerce Directive. This section, in particular, explores the context of the adoption of the Regulation, part of a two-part package together with the Digital Markets Act (§3.5.1), and describes its content, focusing upon the intermediary liability regime (§3.5.2) and upon the new and complex set of rules on providers’ due diligence obligations “for a transparent and safe online environment” (§3.5.3), while also investigating the relationship between the new Act and the challenge of hate speech moderation (§3.5.4). The Digital Services Act represents in many ways a revolutionary piece of legislation complementing the EU body of laws on online speech governance, notably by introducing a “horizontal” framework that sets a baseline discipline for all providers of intermediary services. The Section critically analyses the content of the new Regulation and discusses the implications connected to the adoption of such a legislative model. Moreover, the problematic relationship between the new Regulation and the governance of hate speech represents a core thread of the subsection, highlighting in particular the interpretive issues arising from the adoption of a general and abstract notion of “illegal content” and, therefore, the role that may well be played by complementary sectoral instruments that could be adopted in the future.
Finally, Section 3.6 contains some conclusions and serves as a bridge for the remainder of the work, underlining most notably the challenges represented by the relationship between the DSA and other legal frameworks – including both those of Member States and those of non-EU jurisdictions – and the need for any tools complementary to the DSA to ensure the promotion of the right to substantive equality in the application of the new framework, especially vis-à-vis the increasing resort to AI systems for content moderation.
3.2. Internet intermediaries and the triangular model of online speech regulation
3.2.1. Internet intermediaries
The expression “Internet intermediary” represents an umbrella term encompassing many providers of services. A well-known definition provided by the OECD clarifies that their role is to “bring together or facilitate transactions between third parties on the Internet”, namely by “giv[ing] access to, host[ing], transmit[ting] and index[ing] content products and services originated by third parties on the Internet” or by “provid[ing] Internet-based services to third parties”.1 Since a characteristic feature of intermediaries is that of being positioned among a number of parties between whom the specific content, service or product is exchanged, content producers are excluded from such a category – although, clearly, hybrid cases also exist.2
At the same time, intermediaries include a variety of actors, such as access providers, data processing and web hosting providers, search engines and online portals, e-commerce intermediaries, Internet payment systems, and “participative networking platforms”.3 Although, admittedly, part of the literature on the subject equates, from a technical point of view, the notion of access providers with that of Internet service providers, thus considering ISPs as the specific sub-group of Internet intermediaries that allow recipients to materially access the Internet, the present work, in line with existing legal scholarship,4 tends to refer to ISPs more broadly, as including, within the scope of the term, Internet intermediaries in general. “ISPs” and “intermediaries”, therefore, will generally be used as synonymous terms.
Besides, it is worth mentioning that, in the specific context of EU law, recent legislation has clarified the scope of the relevant terms used. Most notably, EU law refers to “information society services” when dealing with “any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient
1 Xxxxxx Xxxxxx, ‘The Economic and Social Role of Internet Intermediaries’ (OECD 2010) 9
<xxxxx://xxx.xxxx-xxxxxxxx.xxx/xxxxxxx/xxxxx/0xxx00xxx0xx-xx> accessed 13 April 2023.
2 Think, for instance, of a newspaper portal that also offers readers the opportunity to comment on news and exchange views.
3 Perset (n 1) 9. See also Xxxxxxx XxxXxxxxx and others, Fostering Freedom Online: The Role of Internet Intermediaries (UNESCO Publishing 2014) 19–20; Xxxxxx X Xxxxxxxxx and Xxxx Xxx, ‘How Online Content Providers Moderate User-Generated Content to Prevent Harmful Online Communication: An Analysis of Policies and Their Implementation’ (2020) 12 Policy & Internet 184, 186.
4 Xxxxxx Xxxxxxxxx, Xxxxx Xxxxxxx and Xxxxxxxx De Xxxxxxxx, Internet Law and Protection of Fundamental Rights (Bocconi University Press 2022); Xxxxxxxxxxxx Xxxxxx and Xxxxxxx Xxxxxxx (eds), The Responsibilities of Online Service Providers (Springer 2017).
of services”.5 These include, for example, interpersonal communications services, software application stores, as well as what the Digital Services Act defines as “intermediary services”, that is, mere-conduit, caching, and hosting services. As will be highlighted below, the Digital Services Act also defines “online platforms” as a special category of providers of hosting services having the goal of disseminating the content provided by users.6
As highlighted in Chapter 2,7 intermediaries raise important challenges to the governance of freedom of expression and speech across the digital landscape, as the structure of the services they offer to recipients, as well as their transnational reach, are able to affect
– and encourage – the spread and dissemination of illegal and harmful content, including hate speech. A clear example of this is represented by the already discussed LICRA v Yahoo! case,8 where, in fact, Yahoo! did not actively sell Nazi memorabilia but was, rather, the intermediary allowing for the transactions to take place. Therefore, it should not come as a surprise that policies and laws addressing the governance of speech online, including the dissemination of illegal and harmful content such as hate speech, have moved from a paradigm focused on the relationship between the state and the individual to an approach aimed, conversely, at regulating the action of intermediaries themselves. Moreover, depending on the type of service provided, intermediaries play different roles in the dissemination of content and, thus, of hate speech as well. In this respect, among Internet intermediaries, increasing importance has been acquired by hosting providers, offering recipients the possibility to store information provided by them, and, most notably, by social media and social networking sites.
A product of the birth and expansion of the so-called “Web 2.0”,9 social media build on the creation and exchange of user-generated content (UGC) by the recipients of those services themselves.10 Social networking sites can be seen as representing a sub-set of social media, characterized by the inherent goal of transposing and translating into the digital sphere the relational networks defining society.11 In other words, the goal of social
5 Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (codification), OJ L 241/1 art 1(1)(b).
6 See infra, §3.5.2.
7 See supra, §2.4.1, §2.5.3.
8 See supra, §2.4.2.4.
9 Xxx X’Xxxxxx, ‘What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software’ (2007) 1 Communications & Strategies 17.
10 Xxxxxx and Xxxxxxxx thus include in the notion of social media a variety of service providers, including: collaborative projects which enable the joint and simultaneous creation of content by many end-users (e.g., Wikipedia); blogs; content communities, whose goal is to share content between users (e.g., YouTube); social networking sites, which allow users to connect through the creation of personal information profiles accessible to friends and colleagues; virtual game worlds; virtual social worlds. See Xxxxxxx X Xxxxxx and Xxxxxxx Xxxxxxxx, ‘Users of the World, Unite! The Challenges and Opportunities of Social Media’ (2010) 53 Business Horizons 59.
11 Indeed, the concept itself of “social network” finds its origins in the work of Australian anthropologist Xxxx Xxxxxxx Xxxxxx who, in 1954, argued: “Each person is, as it were, in touch with a number of other people, some of whom are directly in touch with each other and some of whom are not. Similarly each person has a number of friends, and these friends have their own friends; some of any one person’s friends
networking sites is to host and favour digital social bonds, namely through web-based services allowing people to build a public or semi-public online profile, to articulate a list of other users with whom they share a connection, and to extend their connections with other individuals that are party to the system.12
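The definition just recalled – a profile, an articulated list of connections, and the possibility of traversing the connections of others – can be rendered, for purely illustrative purposes, as a minimal data structure. The Python sketch below is only an idealization of that description; the class and user names are hypothetical.

from collections import defaultdict

class SocialGraph:
    """A profile is reduced here to a username; connections are mutual."""
    def __init__(self):
        self.connections = defaultdict(set)

    def connect(self, a: str, b: str) -> None:
        self.connections[a].add(b)
        self.connections[b].add(a)

    def friends_of_friends(self, user: str) -> set:
        """Users reachable in two steps: the pool from which connections are
        typically extended to other individuals that are party to the system."""
        direct = self.connections[user]
        return {fof for f in direct for fof in self.connections[f]} - direct - {user}

if __name__ == "__main__":
    g = SocialGraph()
    g.connect("ada", "ben")
    g.connect("ben", "cleo")
    print(g.friends_of_friends("ada"))  # {'cleo'}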
3.2.2. New-school speech regulation and constitutional challenges
The specific features characterizing social media in general and social networking sites in particular, aimed notably at hosting and disseminating content generated by their recipients across the Internet, render them particularly exposed to the risk of enhancing the presence of unwarranted material within the digital landscape.13 Such a risk is further augmented by the reliance – which is growing exponentially – upon automated and algorithm-driven strategies of content moderation and content curation. Since the latter are oriented towards maximizing the capture of recipients’ interest and attention, they may in fact end up bringing to the fore highly controversial content, which is more likely to trigger debate and, therefore, user engagement.14
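The dynamic just described – curation oriented towards maximizing attention ends up surfacing the most provocative items – can be illustrated with a deliberately simplified ranking rule. The weights, feature names, and figures below are invented for the example and do not reflect any real recommender system, which would rely on learned models over far richer signals.

def ranking_score(predicted_clicks: float, predicted_comments: float,
                  predicted_shares: float) -> float:
    """Score an item purely by its expected engagement."""
    return 1.0 * predicted_clicks + 3.0 * predicted_comments + 2.0 * predicted_shares

if __name__ == "__main__":
    items = {
        "measured news report":  ranking_score(0.20, 0.01, 0.02),
        "inflammatory hot take": ranking_score(0.22, 0.15, 0.10),  # provokes replies
    }
    # Sorting by engagement alone pushes the inflammatory item to the top of the
    # feed, which is the tendency described in the text above.
    print(sorted(items, key=items.get, reverse=True))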
As a result, regulation in Europe has increasingly become focused upon vesting such intermediaries with duties and responsibilities aimed at promoting a safer online space. These new-school forms of speech regulation15 build upon the specific features characterizing contemporary speech governance dynamics, as opposed to older, traditional models of regulation of freedom of expression. As a matter of fact, whereas “the twentieth century featured a dualist or dyadic system of speech regulation”, where the relevant players were the nation-states and the speakers, the latter being subjected to the rules set by the former, “the twenty-first-century model is pluralist, with many different players” and has thus been compared by Xxxx Xxxxxx to a triangle whose new, third corner consists of Internet-infrastructure companies.16
know each other, others do not. I find it convenient to talk of a social field of this kind as a network. The image I have is of a set of points some of which are joined by lines … We can of course think of the whole of social life as generating a network of this kind”. Xxxx Xxxxxxx Xxxxxx, ‘Class and Committees in a Norwegian Island Parish’ (1954) 7 Human Relations 39, 43.
12 Xxxxx X Xxxx and Xxxxxx X Xxxxxxx, ‘Social Network Sites: Definition, History, and Scholarship’ (2007) 13 Journal of Computer-Mediated Communication 210, 211.
13 With respect to hate speech, for instance, Citron and Xxxxxx observed as early as 2011: “The greatest increase in digital hate has occurred on social media sites. Examples include the How to Kill a Beaner video posted on YouTube, which allowed players to kill Latinos while shouting racial slurs, and the Facebook group Kick a Ginger Day, which inspired physical attacks on students with red hair. Facebook has hosted groups such as Hitting Women, Holocaust Is a Holohoax, and Join if you hate homosexuals”. Danielle Xxxxx Citron and Xxxxx Xxxxxx, ‘Intermediaries and Hate Speech: Fostering Digital Citizenship for Our Information Age’ (2011) 91 Boston University Law Review 1435, 1437.
14 Xxxx Xxxxxx and others, ‘Artificial Intelligence, Content Moderation, and Freedom of Expression’ (TWG 2020) 15 <xxxxx://xxx.xxxx.xx/xxxxxxxxxxx/xxxxxxxx/XX-Xxxxxx-Xxx-Xxxxxxx-Xxx-0000.xxx> accessed 13 December 2021.
15 See supra, §2.5.3.
16 Xxxx X Xxxxxx, ‘Free Speech Is a Triangle’ (2018) 118 Columbia Law Review 2011, 2013–2014. See also, with specific respect to the case of Delfi AS v Estonia (see infra, §3.3.1.), Xxxxxx Xxxxx, ‘The Responsibility of Internet Portal Providers for Readers’ Comments. Argumentation and Balancing in the Case of Delfi AS v. Estonia’ in Xxxxx Xxxxxxxx, Xxxxx Xxxxx and Xxxxx Xxxxx (eds), The Rule of Law in Europe: Recent Challenges and Judicial Responses (Springer 2021) 207.
In the contemporary context, traditional tools have indeed become insufficient when it comes to the enforcement of public strategies. Conversely, the private owners of digital infrastructures where speech flourishes today, through their moderation practices powered by their technical and economic capacity as well as by the availability of large quantities of data at their disposal, are in general better positioned to actively control, govern, and regulate the uploading of content to the Internet. Therefore, online platforms have been famously described as the “new governors” of speech in the digital landscape.17 Building on this, governments, especially in Europe, have increasingly begun to adopt forms of public-private cooperation or co-optation with a view to pushing intermediaries to do their bidding as much as possible.18
Clearly, the adoption of such strategies for the governance of online speech is not without consequences from a constitutional and human rights law perspective, namely because it directly entails the result of vesting private actors with the task of supervising the freedom of expression of the recipients of their services. In particular, new-school strategies often foster forms of “collateral censorship”, which arises “whenever a nation-state puts pressure on digital-infrastructure companies to block, take down, and censor content by end users”.19 In many cases, this entails a significant drawback for the protection of freedom of expression, as intermediaries may choose to adopt moderation strategies that are particularly stringent so as to avoid any risk of liability for UGC and third-party content. In general, the delegation to private actors of speech surveillance tasks represents a significant challenge to the promise of a democracy-oriented Internet. Requiring intermediaries to “patrol” the Internet, indeed, implies giving them the duty – and power – to strike a balance between the (constitutional) interests at stake, namely freedom of expression, on the one hand, and the pursuit of public policies, on the other hand. Such a private enforcement of public interests – which is, besides, carried out in practice through the adoption and implementation (also through automated systems of moderation) of private terms and conditions of service – inevitably becomes entangled with private business-oriented interests.
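The over-removal incentive described in this paragraph can be restated, in stylized and purely illustrative terms, as an expected-cost comparison: an intermediary facing liability will remove content whenever the expected sanction from keeping it exceeds its own (largely reputational) cost of wrongly suppressing lawful speech. The figures and function below are hypothetical assumptions, not an account of how any platform actually decides.

def should_remove(p_illegal: float, liability_if_kept: float,
                  cost_of_wrongful_removal: float) -> bool:
    """Remove whenever the expected liability from keeping the item exceeds the
    expected private cost of wrongly removing lawful speech."""
    expected_liability = p_illegal * liability_if_kept
    expected_over_removal_cost = (1 - p_illegal) * cost_of_wrongful_removal
    return expected_liability > expected_over_removal_cost

if __name__ == "__main__":
    # Even a post that is probably lawful (20% estimated chance of illegality)
    # is removed when the sanction dwarfs the platform's private cost of error.
    print(should_remove(p_illegal=0.2, liability_if_kept=50_000.0,
                        cost_of_wrongful_removal=100.0))  # True

Under these invented numbers the rational choice is always removal, which is exactly the erring “on the side of protecting its own liability” that the Delfi dissent, discussed below, warns against.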
Issues regarding the human rights sustainability of platform governance models, especially in the light of the principles of democracy and legitimacy,20 are in many cases independent of the adoption of forms of new-school speech regulation models, as platforms tend to adopt governance and content moderation strategies irrespective of the imposition upon them of regulatory obligations. Content moderation is, in fact, an integral part of the
17 Xxxx Xxxxxxx, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’ (2017) 131 Harvard Law Review 1598.
18 Xxxxxx (n 16) 2019–2021. Thus, “new-school regulation often emphasizes ex ante prevention rather than ex post punishment, and complicated forms of public/private cooperation”: Xxxx X Xxxxxx, ‘Old-School/New-School Speech Regulation’ (2014) 127 Harvard Law Review 2296, 2306.
19 Xxxxxx (n 16) 2030.
20 Xxxxxxx X Xxxxx, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (2018) 4 Social Media + Society 2056305118787812; Xxxxxx Xxxxx-Xxxxx, ‘Global Platform Governance: Private Power in the Shadow of the State’ (2019) 72 SMU Law Review 27; Xxxxxx Xxxxxxx and Xxxxx Xxxxxxxx Xxxxxx, ‘Democratic Legitimacy in Global Platform Governance’ (2021) 45 Telecommunications Policy 102152.
service offered to users and, therefore, an integral part of those intermediaries’ business models.21 In fact, in some cases, platforms have tentatively strived to address such issues themselves, as demonstrated for example by Meta’s choice to create an (at least allegedly) independent Oversight Board, mainly composed of renowned international academics, activists, and politicians specialized in digital rights and freedom of expression, whose main task is that of ensuring the adequacy of Meta platforms’ content moderation practices and their consistency with fundamental democratic principles.22
Nonetheless, it is inevitable that the adoption of regulatory strategies enhancing intermediary liability in the field of content moderation feeds those concerns and thus increases the constitutional challenges of platform governance. This holds true in all sectors pertaining to the regulation of online freedom of expression but is especially relevant in the context of hate speech moderation, the operationalization of which represents a particularly sensitive activity in the light of the need to consider all relevant contextual aspects and of the concrete risks of discriminatory and biased outcomes driven by the significant implementation of dedicated AI systems.
The following sections will explore how the legal framework on intermediary liability has evolved in Europe since the turn of the millennium and examine how such developments can impact the governance of hate speech across digital platforms.
3.3. Intermediary liability and hate speech: case law from the ECtHR
3.3.1. The case of Delfi AS v Estonia
With respect to intermediary liability in the European context, especially the liability of ISPs for failing to remove third-party hate speech, the ECtHR has delivered some significant case law. Namely, in the well-known decision of Delfi AS v Estonia,23 the Grand Chamber of the Strasbourg Court upheld the decision of the Estonian Supreme Court ordering a news portal to pay compensation for damages for having failed to remove third-party comments that were of a “clearly unlawful nature”.24
The applicant was an information portal that had published an article concerning a business company, SLK, prompting readers to post in the comments section a significant number of anonymous insults and defamatory and offensive remarks. After several weeks, SLK requested Delfi AS to remove such comments and, at the same time, claimed compensation for damages. Upon such notice, Delfi had, in fact, immediately removed those comments, but refused to pay compensation. Eventually, the Estonian Supreme Court concluded that, because Delfi usually put in place some forms of moderation
21 Xxxxxxxx Xxxxxxxxx, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press 2018).
22 On the Facebook Oversight Board see, notably, Xxxx Xxxxxxx, ‘The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression’ (2020) 129 Yale Law Journal 2418; Xxxxx Xxxx and Xxxxxxx Xxxxxxx, ‘Meta’s Oversight Board: A Review and Critical Assessment’ (2023) 33 Minds and Machines 261. See more infra, §5.2.1.2.
23 Delfi AS v Estonia [2015] ECtHR [GC] 64569/09, ECHR 2015.
24 ibid 140.
practices, it could not be considered as acting as a merely neutral, automatic, and passive actor and, therefore, it should be considered liable for the damages caused by the presence of the defamatory comments. The Supreme Court thus confirmed the County Court’s award of 5,000 kroons (approximately 320 euros) in favour of SLK’s majority shareholder as compensation for non-pecuniary damages. Delfi, as a result, filed an application to the ECtHR arguing that the award represented an infringement of its right to freedom of expression as enshrined within Article 10 ECHR.
The ECtHR, however, upheld the Estonian Supreme Court’s award. On the one hand, it accepted the characterization of Delfi – under Estonian law and consistent case law – as a publisher that offered its media services for economic purposes, rather than as a merely passive hosting provider.25 On the other hand, the Strasbourg judges concluded that the measure imposed, that is, the order to pay non-pecuniary damages for a sum of approximately 320 euros, was in fact proportionate and thus “necessary in a democratic society” as required by Article 10, paragraph 2, ECHR. In carrying out such an assessment, the Court stressed, inter alia, the role played by the medium used in establishing the degree of responsibility of a journalistic actor such as Delfi: citing notably the previous judgment of Editorial Board of Pravoye Delo and Shtekel v Ukraine,26 the judges argued that “the risk of harm posed by content and communications on the Internet to the exercise and enjoyment of human rights and freedoms … is certainly higher than that posed by the press”.27 In the light of such a risk, a business-oriented news portal provider should take extra care to ensure that no such content, including hate speech, is spread across its infrastructures. The Grand Chamber’s decision, in practice, recognized for the first time as consistent with the ECHR framework on freedom of expression the possibility for a state to hold the provider of a computer service accountable for the failure to immediately remove third-party comments.
As a matter of fact, the choice of the ECtHR to accept the Estonian Supreme Court’s argument that Delfi should be considered as a content provider – rather than as a hosting provider – is itself rather debatable,28 as the moderation practices of the news portal do not appear to be of such significance as to warrant the conclusion that it is, in fact, the direct purveyor of the content produced by users. More generally, however, the ultimate outcome of Delfi, opening de facto the door to the possibility for governments to punish providers of online services for third-party content (namely, third-party hate
25 ibid 128–129.
26 Editorial Board of Pravoye Delo and Shtekel v Ukraine [2011] ECtHR 33014/05, ECHR 2011. See
supra, §2.4.1.
27 Delfi AS v Estonia (n 23) para 133.
28 With respect to the previous Chamber decision – which was basically confirmed by the Grand Chamber – see among others Xxxx Xxxxxxxx, ‘Qualification of News Portal as Publisher of Users’ Comment May Have Far-Reaching Consequences for Online Freedom of Expression: Delfi AS v. Estonia’ (Strasbourg Observers, 25 October 2013) <xxxxx://xxxxxxxxxxxxxxxxxxx.xxx/0000/00/00/xxxxxxxxxxxxx-xx-xxxx-xxxxxx-as-publisher-of-users-comment-may-have-far-reaching-consequences-for-online-freedom-of-expression-delfi-as-v-estonia/> accessed 26 April 2023. However, in this respect, Xxxxxx Xxxxx justifies the Court’s conclusion by arguing that “the commenting environment was … an integral part of [Delfi’s] commercial activity”. See Xxxxxx Xxxxx, ‘Intermediary Liability for Online User Comments under the European Convention on Human Rights’ (2017) 17 Human Rights Law Review 665, 676.
speech), raised concerns about the inherent risk it entailed of promoting forms of private and collateral censorship.29
In this respect, the decision of the majority was criticized by judges Xxxx and Tsotsoria in their joint dissenting opinion, where they argued that the approval of a liability system requiring “constructive knowledge on active Internet intermediaries”30 may well represent a significant hurdle to the enjoyment of online freedom of expression in Europe, because it may ultimately lead to “deliberate overbreadth; limited procedural protections
… and shifting of the burden of error costs”, as “the entity in charge of filtering will err on the side of protecting its own liability, rather than protecting freedom of expression”.31 Additionally, the judgment was also criticized for its apparent failure to develop an ECtHR case law consistent and coherent with the EU framework, in particular with respect to Directive 2000/31/EC, i.e., the “e-Commerce Directive”, and related CJEU case law.32
3.3.2. The legacy of Delfi
Also following the concerns and criticisms raised by the Delfi judgment, subsequent ECtHR cases went on to develop a body of case law which, although maintaining the Grand Chamber’s decision as a valid and applicable precedent, nonetheless clarified the extent to which providers of online services may in fact be held liable for third-party content, narrowing down considerably the scope of applicability of that decision.
3.3.2.1. MTE and Xxxxx.xx v Hungary
In the case of MTE and Xxxxx.xx v Hungary,33 the ECtHR had to face a case similar to Delfi. The facts concerned the self-regulatory body of Hungarian Internet content providers, MTE, and an Internet news portal, Xxxxx.xx, which had published pieces harshly criticizing two real estate management websites, owned by the same company, which was basically accused of scamming consumers. This led, once again, to readers posting anonymous or pseudonymous comments against the company. Eventually, the Hungarian courts awarded the company operating the websites compensation for the damages suffered for the failure to promptly remove those user comments, even though, in fact, both MTE and Xxxxx.xx had immediately taken them down as soon as the lawsuit had been brought against them.
29 See, among others, Xxxx Xxxxxxxx, ‘Delfi AS v. Estonia: Grand Chamber Confirms Liability of Online News Portal for Offensive Comments Posted by Its Readers’ (Strasbourg Observers, 18 June 2015)
<xxxxx://xxxxxxxxxxxxxxxxxxx.xxx/0000/00/00/xxxxx-xx-x-xxxxxxx-xxxxx-xxxxxxx-xxxxxxxx-xxxxxxxxx-xx-online-news-portal-for-offensive-comments-posted-by-its-readers/> accessed 26 April 2023; Xxxx Xxxxxxx, ‘The Liability of an Online Intermediary for Third Party Content: The Watchdog Becomes the Monitor: Intermediary Liability after Delfi v Estonia’ (2016) 16 Human Rights Law Review 163, 172; Xxxxx Xxxxxxx, ‘Fundamental Rights and Private Enforcement in the Digital Age’ (2019) 25 European Law Journal 182, 192.
30 Delfi AS v Estonia (n 23) joint dissenting opinion of judges Xxxx and Tsotsoria para 1.
31 ibid 2.
32 See infra, §3.4.1.
33 Magyar Tartalomszolgáltatók Egyesülete and Xxxxx.xx Zrt v Hungary [2016] ECtHR 22947/13.
In a manner similar to Delfi, the ECtHR accepted the national courts’ conclusion that the applicants, under the Hungarian Civil Code, could be reasonably treated as content providers (rather than as intermediaries) with respect to third-party anonymous or pseudonymous comments.34 Additionally, throughout its entire decision, the Strasbourg Court cited Delfi rather frequently, thus confirming the validity of the Grand Chamber’s decision as a landmark precedent. Most notably, in MTE, the Fourth Section stated:
The Court reiterates in this regard that although not publishers of the comments in the traditional sense, Internet news portals must, in principle, assume duties and responsibilities. Because of the particular nature of the Internet, those duties and responsibilities may differ to some degree from those of a traditional publisher, notably as regards third-party contents.35
Nonetheless, with respect to its outcome, MTE departed significantly from Delfi, as it recognized that the applicants’ right to freedom of expression had in fact been breached in violation of Article 10 ECHR.
Indeed, in the case of MTE, the Court carried out a thorough assessment of all relevant contextual elements, as well as of the very content of the applicants’ publications and of the third-party anonymous content, and eventually concluded that the imposition of liability upon MTE and Xxxxx.xx was not at all proportionate to the purposes sought, so that the measure could not be recognized as “necessary in a democratic society”.36 For instance, the ECtHR stressed that, in the case at hand, at least the first applicant, MTE, was not a business actor, as it was, in fact, a self-regulatory body representing ISPs; whereas the second applicant, Xxxxx.xx, should enjoy additional protection as a press outlet since it “provided forum for the exercise of expression rights, enabling the public to impart information and ideas”.37 Moreover, the Court considered that the article published could not “be considered to be devoid of a factual basis or provoking gratuitously offensive comments”.38
Even more interestingly, the ECtHR held that it should reach a different conclusion from that expressed in Delfi because, “although offensive and vulgar, … the incriminated comments did not constitute clearly unlawful speech; and they certainly did not amount to hate speech or incitement to violence”.39 In other words, the ECtHR distinguished the two cases not so much based on an inquiry into the position and role of the intermediary in the dissemination of the impugned content but, rather, based on the type of illegal content spread. By focusing on this specific aspect, that is, the severity of the comments themselves, the ECtHR was able to uphold the Grand Chamber’s previous decision while taking a decision responsive to the many concerns and criticisms that had followed Delfi. Such distinguishing between the two cases, nevertheless, appears to be slightly far-fetched and forced, precisely because it shifts the focus of attention from the critical and
34 ibid 51.
35 ibid 62.
36 Convention for the Protection of Human Rights and Fundamental Freedoms 1950 art 10, para 2.
37 MTE and Xxxxx.xx v Hungary (n 33) para 61.
38 ibid 72.
39 ibid 64.
technical assessment of the degree of liability and accountability of the intermediary involved towards the evaluation of the nature of the third-party content at issue. Besides, in doing so, the Court attaches different liability regimes based on the classification of the content as “clearly unlawful speech” or “hate speech”, without, however, clearly defining the criteria differentiating such clearly unlawful speech from merely offensive speech.
Overall, MTE thus appears to showcase a more careful approach on the part of the Court of Strasbourg, especially if compared to its landmark precedent in Delfi. Indeed, the ECtHR, by finding that the Hungarian courts had violated the applicants’ right to freedom of expression, implicitly warned that measures entailing an enhanced liability of providers of online services should only be taken in rather serious and extreme situations. Nonetheless, although aimed at narrowing in general terms the acceptability of intermediary liability across the Internet, MTE still confirms, with specific respect to the countering of hate speech, the Court’s conviction that such content is of such a foul nature as to allow for an increased severity in governmental repressive actions. In other words, though initiating a new strand of case law which, while confirming Delfi, tends to be more lenient towards the rights and liberties of Internet actors and more attentive to the risks connected to the imposition of liability for third-party content, MTE did not extend such leniency to those cases where the Court believes that forms of hate speech have indeed been uttered. This is clearly confirmed by the Court’s conclusions, according to which
in cases where third-party user comments take the form of hate speech and direct threats to the physical integrity of individuals, the rights and interests of others and of the society as a whole might entitle Contracting States to impose liability on Internet news portals if they failed to take measures to remove clearly unlawful comments without delay, even without notice from the alleged victim or from third parties.40
3.3.2.2. Subsequent developments
Subsequent case law from the ECtHR confirmed the conclusion reached in MTE, thus upholding the validity of the Delfi precedent with respect to hate speech while taking, nonetheless, a rather cautious approach towards the protection of freedom of expression under Article 10 ECHR.41
In Xxxx v Sweden,42 the ECtHR’s Third Section had to deal with a case of defamation concerning the publication of a blogpost – and the consequent uploading of an anonymous comment – upon the website of a small non-profit organization. Considering those
40 ibid 91. A critical take on this direction taken in MTE is expressed, notably, by Xxxxxxxxx Xxxxxxxxxxxx, ‘MTE v Hungary: A New ECtHR Judgment on Intermediary Liability and Freedom of Expression’ (2016) 11 Journal of Intellectual Property Law & Practice 582.
41 See, in this respect, Xxxx Xxxxxxxx, ‘Blog Symposium “Strasbourg Observers Turns Ten” (2): The Court’s Subtle Approach of Online Media Platforms’ Liability for User-Generated Content since the “Delfi Oracle”’ (Strasbourg Observers, 10 April 2020) <xxxxx://xxxxxxxxxxxxxxxxxxx.xxx/0000/00/00/xxx-xxxxxx-subtle-approach-of-online-media-platforms-liability-for-user-generated-content-since-the-delfi-oracle/> accessed 6 May 2023.
42 Xxxx v Sweden (dec) [2017] ECtHR 74742/14.
In view of the above, and especially the fact that the comment, although offensive, did not amount to hate speech or incitement to violence and was posted on a small blog run by a non-profit association which took it down the day after the applicant’s request and nine days after it had been posted, the Court finds that the domestic courts acted within their margin of appreciation and struck a fair balance between the applicant’s rights under Article 8 and the association’s opposing right to freedom of expression under Article 10.44
Similarly, in Xxxxxxx v Norway,45 the applicant argued that her right to respect for private life had been infringed by the failure of the provider of an Internet news portal and forum to remove anonymous comments alleging that she had rather unethically convinced an elderly widow to leave her most of her inheritance in her will. Having failed to obtain compensation before the Norwegian courts, Ms. Xxxxxxx filed an application before the Strasbourg Court which, however, once again dismissed the complaint of a violation of Article 8 by acknowledging that the anonymous comments, while certainly defamatory, did not amount to hate speech.46 Domestic courts thus “acted within their margin of appreciation when seeking to establish a balance between the applicants’ rights under Article 8 and the news portal and host of the debate forums’ opposing right to freedom of expression under Article 10”.47
Both Xxxx and Xxxxxxx thus confirmed the strand of case law inaugurated by Delfi and perfected by MTE, according to which, ultimately, intermediary liability for third-party content should generally be limited to particularly serious cases so as to avoid disproportionate restrictions of those actors’ freedom of expression under Article 10,48 those
43 ibid 25–26.
44 ibid 37.
45 Xxxxxxx v Norway [2019] ECtHR 43624/14.
46 ibid 69.
47 ibid 75.
48 In this respect, see also the case of Jezior v Poland, where the ECtHR, with regard to a local politician upon whose local forum offensive and defamatory comments – but not hate speech – had been published against his competitor, held that the Polish courts’ findings against the applicant had represented a disproportionate restriction of his freedom of expression under art 10 ECHR: “La Cour estime que, à la suite de l’application cumulative des mesures susmentionnées à son encontre, le requérant a subi une sanction susceptible d’avoir un effet inhibiteur sur quelqu’un qui, comme lui-même en l’espèce, administrait à titre
particularly serious cases being identified in instances of hate speech or incitement to violence.
In the meantime, the 2021 judgment of Standard Verlagsgesellschaft mbH v Austria (no. 3)49 addressed the different, although related, subject of the duty of ISPs – in this case, once again, a news portal allowing readers to post their comments and opinions – to provide information concerning the identity of users having published defamatory content anonymously. While rejecting the argument that such comments should be interpreted as journalistic sources, and thus rejecting the direct consequence that the identity of those users should be covered and protected by the guarantees related to journalistic secrecy,50 the ECtHR nevertheless concluded that ordering the applicant to disclose information about the identity of its recipients would hamper the news portal’s freedom of expression under Article 10 ECHR. Indeed, the Court underscored that “an obligation to disclose the data of authors of online comments could deter them from contributing to debate and therefore lead to a chilling effect among users posting in forums in general” and that this would, indirectly, also affect “the applicant company’s right as a media company to freedom of the press”.51 Any court, prior to issuing such an order, should thus carry out a careful balancing of the fundamental rights involved (even though domestic courts may enjoy a significant degree of discretion in this respect): something which the Austrian courts had however failed to do.
The Standard case, therefore, represents another important piece of the ECtHR case law on intermediary liability, extending the reach of the value of ISPs’ freedom of expression to also include a right to the anonymity of the recipients of their services. The Strasbourg Court, however, in line with Delfi and MTE, held once again that such a favourable finding may not apply to hate speech, incitement to violence, or other “clearly unlawful content”:
entièrement gracieux un blog sur Internet sur des sujets importants pour la collectivité. Sur ce point, la Cour rappelle avoir dit que l’imputation d’une responsabilité relativement à des commentaires émanant de tiers peut avoir des conséquences négatives sur l’espace réservé aux commentaires d’un portail Internet et produire un effet dissuasif sur la liberté d’expression sur Internet … En conclusion, la Cour estime que les juridictions nationales ayant statué dans la procédure diligentée à l’encontre du requérant en vertu de la loi sur les élections locales n’ont pas ménagé un juste équilibre entre le droit à la liberté d’expression de l’intéressé et celui, concurrent, de B.K. au respect de sa réputation en tant que candidat aux élections locales. Leurs décisions s’analysant en une ingérence disproportionnée dans le droit à la liberté d’expression du requérant n’étaient donc pas nécessaires dans une société démocratique”. Jezior v Poland [2020] ECtHR 31955/11 [60–61].
49 Standard Verlagsgesellschaft Mbh v Austria (no 3) [2021] ECtHR 39378/15. With respect to this decision, see among others, Xxxx Xxxxxxxxxxxx, ‘Standard Verlagsgesellschaft MBH v. Austria (No. 3): Is the ECtHR Standing up for Anonymous Speech Online?’ (Strasbourg Observers, 25 January 2022)
<xxxxx://xxxxxxxxxxxxxxxxxxx.xxx/0000/00/00/xxxxxxxx-xxxxxxxxxxxxxxxxxxx-xxx-x-xxxxxxx-xx-0-xx-xxx- ecthr-standing-up-for-anonymous-speech-online/> accessed 6 May 2023; Xxxxxx Xxxx, ‘L’anonimato degli utenti quale forma mediata della libertà di stampa: Il caso Standard Verlagsgesellschaft mbH c. Austria’ (2022) 1 Rivista di Diritto dei Media 291.
50 “In the instant case, the Court concludes that the comments posted on the forum by readers of the news portal, while constituting opinions and therefore information in the sense of the Recommendation, were clearly addressed to the public rather than to a journalist”. Standard Verlagsgesellschaft Mbh v Austria (no. 3) (n 49) para 71.
51 ibid 74.
However, even a prima facie examination requires some reasoning and balancing. In the instant case, the lack of any balancing between the opposing interests … overlooks the function of anonymity as a means of avoiding reprisals or unwanted attention and thus the role of anonymity in promoting the free flow of opinions, ideas and information, in particular if political speech is concerned which is not hate speech or otherwise clearly unlawful. In view of the fact that no visible weight was given to these aspects, the Court cannot agree with the Government's submission that the Supreme Court struck a fair balance between opposing interests in respect of the question of fundamental rights.52
In its 2023 Grand Chamber judgment in the Xxxxxxx v France53 case, the majority also confirmed the legitimacy under Article 10 ECHR of the imposition of a criminal pecuniary penalty upon a local politician for failing to promptly remove third-party hate speech comments posted under a post published on his Facebook "wall": those comments targeted the applicant's political opponent and his partner, as well as the Muslim community as a whole. The applicant had been convicted under French law as a "producer", that is, as the person who had "taken the initiative of creating an electronic communication service for the exchange of opinions on pre-defined topics".54
The judgment is especially interesting from at least two points of view. First, it confirmed, under the ECHR framework, the possibility of holding an individual liable, in a manner similar to a hosting provider (especially if that individual is a politician in the context of an electoral campaign), for failing to promptly remove third-party content from their own individual Facebook "wall": a finding which is striking not only because it expands the scope of third-party content liability so as to also encompass the holders of a social networking account, but also because of the high regard traditionally granted by the ECtHR to political freedom of expression.55 Second, the applicant was held liable even though, in the case at hand, the authors of the impugned comments were not anonymous and had, in fact, also been sentenced to the payment of a fine and to the compensation of damages.
Nonetheless, the ECtHR's Grand Chamber held that France's interference with the applicant's freedom of expression was proportionate and necessary in a democratic society, arguing, inter alia, as follows:
52 ibid 95 (emphasis added). Additionally, the ECtHR had previously stressed how “the comments made about the plaintiffs … although offensive and lacking in respect, did not amount to hate speech or incitement to violence … nor were they otherwise clearly unlawful (compare and contrast Xxxxx …)”. ibid 89.
53 Xxxxxxx v France [2023] ECtHR [GC] 45581/15, ECHR 2023. For a comment, see Xxxxxxx Xxxx, ‘Strong on Hate Speech, Too Strict on Political Debate: The ECtHR Rules on Politicians’ Obligation to Delete Hate Speech on Facebook Page’ (Verfassungsblog, 25 May 2023) <xxxxx://xxxxxx- xxxxxxxxx.xx/xxxxxx-xx-xxxx-xxxxxx-xxx-xxxxxx-xx-xxxxxxxxx-xxxxxx/> accessed 1 June 2023; Xxxxxx Xxxx, ‘Carattere Eccezionale Dell’“Hate Speech” e Nuove Forme Di Responsabilità per Contenuti Xx Xxxxx Nella Giurisprudenza EDU. Nota a X.Xxx, Xxxxxxx x. Xxxxxxx, 15 Maggio 2023’ (2023) 6 Osservatorio Costitu- zionale 238. The decision of the Grand Chamber had already been preceded by a Chamber judgment in Xxxxxxx v France [2021] ECtHR 45581/15. On the first decision, see Xxxxxx Xxxxxxxxxxxx, ‘Responsabilità Del Politico per Commenti Altrui Su Facebook: Conforme Alla Convenzione Europea La “Tolleranza Zero” Nei Casi Di Messaggi d’odio’ (2021) 3 Rivista di Diritto dei Media 311.
54 Xxxxxxx v France (n 53) para 38.
55 The judgment itself refers to previous case law addressing the importance and role of political speech in the public debate, while clarifying the duties, obligations, and limits it should comply with: see ibid 146–153.
The Court would, moreover, reiterate that in cases where third-party user comments take the form of hate speech, the rights and interests of others and of society as a whole may entitle Contracting States to impose liability on the relevant Internet news portals, without contravening Article 10 of the Convention, if they fail to take measures to remove clearly unlawful comments without delay, even without notice from the alleged victim or from third parties (see Xxxxx AS …). Even though the applicant’s situation cannot be compared to that of an Internet news portal …, the Court sees no reason to hold otherwise in the present case.56
Overall, ECtHR case law concerning the liability and responsibilities of Internet intermediaries for third-party content has undergone important developments after the landmark decision of Delfi. Indeed, the Court has showcased a renewed concern for the collateral effects that imposing such forms of liability and duties might entail for freedom of expression and has thus become progressively more lenient towards ISPs and attentive to their needs. This approach is, moreover, in line with the Council of Europe Committee of Ministers' Recommendation No. R (2018) 2 on the roles and responsibilities of Internet intermediaries, according to which states should ensure "that intermediaries are not held liable for third-party content which they merely give access to or which they transmit or store", although they may hold them "co-responsible … if they do not act expeditiously to restrict access to content or services as soon as they become aware of their illegal nature, including through notice-based procedures".57
At the same time, the ECtHR has maintained a rather rigid approach towards the dissemination of hate speech content through the Internet by explicitly recognizing that such a phenomenon may (and should) require the adoption of more stringent measures from states and, consequently, from ISPs themselves. This specific consideration of hate speech is, once again, reflected in the position of the Committee of Ministers, whose Recommendation No. R (2022) 16, recognizing the fundamental role of Internet intermediaries in countering the phenomenon, provides that states should require them "to respect human rights, including the legislation on hate speech, to apply the principles of human rights due diligence throughout their operations, and to take measures in line with existing frameworks and procedures to combat hate speech"58 and should establish by law that intermediaries "must take effective measures to fulfil duties and responsibilities not to make accessible or disseminate hate speech that is prohibited under criminal, civil or administrative law".59 Arguably, such a trend is, moreover, consistent with
56 ibid 140.
57 Committee of Ministers of the Council of Europe, 'Recommendation No. R (2018) 2 of the Committee of Ministers to Member States on the Roles and Responsibilities of Internet Intermediaries' (Council of Europe 2018) CM/Rec(2018)2 Appendix, para 1.3.7.
58 Committee of Ministers of the Council of Europe, 'Recommendation No. R (2022) 16 of the Committee of Ministers to Member States on Combating Hate Speech' (Council of Europe 2022) CM/Rec(2022)16 Appendix, para 18.
59 ibid 22. The Recommendation also adds: "Important elements for the fulfilment of this duty include: rapid processing of reports of such hate speech; removing such hate speech without delay; respecting privacy and data-protection requirements; securing evidence relating to hate speech prohibited under criminal law; reporting cases of such criminal hate speech to the authorities; transmitting to the law-enforcement services, on the basis of an order issued by the competent authority, evidence relating to criminal hate speech; referring unclear and complex cases requiring further assessment to competent self-regulatory or
the developing attitude of the ECtHR with regard to hate speech governance that has been described in Chapter 2: it appears, indeed, that the Court is moving progressively towards an increasingly restrictive perspective on this phenomenon,60 as highlighted, inter alia, in decisions such as Xxxxxxxx and Levickas,61 Association Accept,62 and Xxxxxxxx.63
In conclusion, it is possible to identify at least three stages in the evolution and development of the ECtHR case law on intermediary liability and hate speech. The first stage consists of the landmark judgment of Xxxxx AS v Estonia, where the Grand Chamber confirmed that the imposition of liability upon Internet intermediaries for the dissemination of clearly unlawful content is fully consistent with Article 10 ECHR, thus opening the doors for the establishment of intermediary liability for third-party illegal content. The second stage is represented by the decision of MTE and Xxxxx.xx v Hungary, where the Court clarified that a distinction ought to be made between clearly unlawful content – notably hate speech and incitement to violence – which triggers the liability of intermediaries, and other illegal materials, thus making hate speech a rather exceptional case in this respect.
The third stage, finally, is represented by the body of subsequent judgments which, in line with Delfi and MTE, confirmed such a differentiated treatment: on the one hand, by limiting the scope of intermediary liability to a narrow set of cases in which the speech at issue was found to be not merely offensive but to amount in fact to hate speech (e.g., Xxxx v Sweden, Xxxxxxx v Norway); on the other hand, by adopting an increasingly strict and severe approach towards hate speech, concluding that an order may be issued against an intermediary to disclose information about anonymous users (Standard Verlagsgesellschaft v Austria) and that natural persons, notably politicians, may be held liable for third-party comments posted on their personal social media "walls" (Xxxxxxx v France).
Such developments confirm the deep aversion of the ECHR system to the phenomenon of hate speech. The adoption of such a distinct and markedly more stringent treatment of hate speech, as opposed to other types of illegal content, appears to express a clear orientation and agenda of the ECtHR rather than a strictly legal and technical reflection, and has also been criticized in light of its possible inconsistency with other legal frameworks, including that of the EU. Moreover, the ECtHR has not clearly identified the parameters and boundaries of what is to be considered "clearly unlawful speech" and hate speech, thus giving rise, for the future, to a concrete risk of uncertainty.
co-regulatory institutions or authorities; and foreseeing the possibility of implementing, in unclear and complex cases, provisional measures such as deprioritisation or contextualization”.
60 See supra, §2.5.2.2.
61 Xxxxxxxx and Levickas v Lithuania [2020] ECtHR 41288/15.
62 Association Accept and Others v Romania [2021] ECtHR 19237/16.
63 Xxxxxxxx v Lithuania [2023] ECtHR 39375/19.
3.4. Intermediary liability and hate speech: the framework of the EU
The legal regime concerning intermediary liability for third-party content underwent a parallel, yet distinct, evolution within the EU framework. The following subsections explore these developments, highlighting in particular the shift from the liberal approach predominant in the early 2000s to the progressively more interventionist one that characterizes the current period and that has led, ultimately, to the adoption of the Digital Services Act in October 2022.
3.4.1. Intermediary (non)liability at the turn of the millennium: the e-Commerce Directive
At the turn of the millennium, inspired by the liberal and techno-optimistic approach of the US, the EU adopted Directive 2000/31/EC, the so-called "e-Commerce Directive" (ECD),64 whose provisions came to represent the normative baseline for the Union's approach towards ISP liability in the twenty years to come. In particular, the ECD, following the model of the well-known Section 230 of the US CDA, introduced a "safe harbour" framework.
Section 230 exempts intermediaries from liability for transmitting or hosting illegal third-party content, even when that content amounts to criminal conduct,65 while establishing, at the same time, that liability shall not arise even where providers of computer services actively engage in moderation activities aimed at reducing the spread of content they deem illegal, harmful, or, in general, unacceptable (the so-called "Good Samaritan clause").66 Similarly, the ECD offers intermediaries a shield from liability, on condition that the providers of intermediary services concerned, namely mere-conduit,67 caching,68 and hosting services,69 comply with certain rules. At the same time, the ECD
64 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce'), OJ L 178/1. For an overview of the ECD, see, among others, Xxxxxx Xxxxxxx (ed), The New Legal Framework for E-Commerce in Europe (Xxxx 2005); Xxxxxxxx X Xxxxxxxxxxx, 'The Immunity of Internet Intermediaries Reconsidered?' in Xxxxxxxxxxxx Xxxxxx and Xxxxxxx Xxxxxxx (eds), The Responsibilities of Online Service Providers (Springer 2017); Xxxx X Xxxx, Xxxxxxxxx Xxxxxxxxx and Xxxxxxx Xxxxxxx, Cross-Border Dissemination of Online Content (Nomos 2020) 169–220.
65 For an overview of Section 230 CDA, see, among others, Xxxx Xxxxxxx, ‘An Overview of the United States’ Section 230 Internet Immunity’ in Xxxxxxxxx Xxxxxx (ed), Oxford Handbook of Online Intermediary Liability (Oxford University Press 2020). See infra, §4.4.2.
66 The adoption of the Good Samaritan clause was sparked, notably, by the decision rendered by the New York Supreme Court in Xxxxxxxx Xxxxxxx, Inc. v. Prodigy Services Co., 23 Media L. Rep. 1794 (N.Y. Sup. Ct. 1995).
67 ECD art 12. Mere-conduit services consist of the transmission in a communication network of information provided by a recipient of the service, or the provision of access to a communication network.
68 ibid 13. Caching services consist of the transmission in a communication network of information provided by a recipient of the service where the provider stores that information in an automatic, intermediate and temporary manner for the sole purpose of making more efficient or more secure the information's onward transmission to other recipients upon request.
69 ibid 14. Hosting services consist of the storage of information provided by a recipient of the service.
prohibits EU Member States from imposing upon such providers any duty to conduct general monitoring activities aimed at detecting the presence of illegal content or activities.70
Most notably, whereas in the case of mere-conduit and caching service providers immunity fundamentally depends on the provider neither modifying the information transmitted nor interfering with its transmission, providers of hosting services (that is, services consisting "of the storage of information provided by a recipient of the service")71 are indirectly72 required to establish notice-and-take-down mechanisms, as the ECD provides that those providers retain the exemption from liability as long as they do not have "actual knowledge of illegal activity or information and, as regards claims for damages, [are] not aware of facts or circumstances from which the illegal activity or information is apparent"73 and as long as, "upon obtaining such knowledge or awareness", they act "expeditiously to remove or disable access to the information".74 In other words, a notice to hosting providers concerning the presence of illegal content, or the commission of illegal activities through their services, triggers their responsibility to take down those items, or to disable access to them, on pain of incurring liability for such content or activities. This strategy mirrors the one adopted by the US Digital Millennium Copyright Act (DMCA)75 with regard to copyright infringement.76
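The conditional structure of the hosting safe harbour can also be rendered schematically. The following Python fragment is a minimal, purely didactic sketch of the cumulative conditions of Article 14(1) ECD as described above; all names, data structures, and the numerical "expeditiousness" threshold are hypothetical choices made only for illustration, since the Directive fixes no deadline and no such formalization exists in the legal texts.

from dataclasses import dataclass
from typing import Optional

@dataclass
class HostedItem:
    """Hypothetical record of a piece of third-party content held by a hosting provider."""
    content_id: str
    is_illegal: bool                       # established ex post (e.g. by a court)
    knowledge_obtained_at: Optional[int]   # day on which actual knowledge/awareness arose, if ever
    removed_at: Optional[int]              # day on which the content was removed or access disabled

def art14_exemption_applies(item: HostedItem, expeditious_window_days: int = 1) -> bool:
    """Sketch of the cumulative conditions of Article 14(1) ECD.

    The 'expeditious_window_days' threshold is a purely illustrative proxy: the
    Directive only requires action to be taken "expeditiously", without fixing a deadline.
    """
    if not item.is_illegal:
        return True                        # nothing unlawful to answer for
    if item.knowledge_obtained_at is None:
        return True                        # no actual knowledge or awareness of illegality
    if item.removed_at is None:
        return False                       # knowledge obtained, but no action taken
    # knowledge obtained: the exemption survives only if the reaction was expeditious
    return (item.removed_at - item.knowledge_obtained_at) <= expeditious_window_days

# Example: a notified comment removed the following day preserves the exemption
comment = HostedItem("c-123", is_illegal=True, knowledge_obtained_at=10, removed_at=11)
print(art14_exemption_applies(comment))   # True

The sketch also makes visible why the regime is often described as an indirect notice-and-take-down mechanism: a notice matters only in so far as it generates knowledge or awareness, which in turn triggers the duty to act expeditiously.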
In light of this provision, indeed rather favourable to the position of hosting providers, it is easy to understand why Delfi has also been criticized for its apparent lack of coordination with the legal framework of the EU and, thus, for its potential to give rise to an ECtHR case law at odds with the ECD. Indeed, had the Delfi case been dealt with by the CJEU, the outcome might have been rather different, notably because, since the applicant was in fact a hosting provider – at least with respect to the anonymous third-party comments – liability for such comments under the ECD should only have arisen if Xxxxx had failed to respond promptly to SLK's notices. Conversely, the ECtHR considered that Xxxxx should have actively removed the hate speech content even prior to receiving those complaints.77 Although it is true that
70 ibid 15.
71 ibid 14(1).
72 Xxxxx Xxxx Xxxxxxxxxxx, ‘Liability of Intermediary Service Providers in the EU Directive on Elec- tronic Commerce’ (2002) 19 Santa Clara Computer and High Technology Law Journal 111, 123–124; Ale- xxxxxxx Xxxxxxxxx, ‘From “Notice and Takedown” to “Notice and Stay Down”: Risks and Safeguards for Freedom of Expression’ in Xxxxxxxxx Xxxxxx (ed), Oxford Handbook of Online Intermediary Liability (Ox- ford University Press 2020).
73 ECD art 14(1)(a).
74 ibid 14(1)(b).
75 Digital Millennium Copyright Act 1998.
76 Xxxxxxxx De Xxxxxxxx, Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society (Cambridge University Press 2022) 45.
77 In this respect, see most notably Xxxxxxx (n 29) 167–169. See also Xxxxxx Xxxxxxxxx and Xxxxx Xxx- sini, ‘Free Speech, Defamation and the Limits to Freedom of Expression in the EU: A Comparative Anal- ysis’ in Xxxxxx Xxxxx and Xxx Xxxxxxxxxxx (eds), Research Handbook on EU Internet Law (Xxxxxx Xxxxx Publishing 2014).
"the two Courts work in different jurisdictions and operate with different semantics",78 many commentators argued that there was, nevertheless, a concrete risk of creating two parallel and disjointed legal traditions within the same European continent: a risk which, although partly resolved by the leniency showcased by the Strasbourg judges in their subsequent judgments, may still hold with regard to hate speech content.
Moreover, the EU's choice to provide for broad liability exemptions favourable to ISPs was in great part motivated, as in the US, by the desire not to stifle the economic and libertarian potential of the Internet, which was, at the time, still in its infancy. It is undeniable, however, that such an approach towards intermediary liability has had, in the following years, some important political and social consequences. Indeed, the resulting legal regime most notably entrusted online platforms with the power to autonomously decide whether to remove or block vast amounts of content: a choice often driven first and foremost by business interests.
The decision to remove illegal or harmful content, including hate speech, was thus mainly left to the discretion of private actors, without any significant safeguards for individual rights and democratic principles, such as the protection of users' right to freedom of expression in conditions of (substantive) equality. Additionally, the identification of what was to be considered illegal and, therefore, subject to moderation, came to rely in great part upon providers' own – privately enacted – terms of service.79 Private standards, in other words, progressively came to define the contours of what should and should not be subject to punitive measures. Against this backdrop, values such as the rule of law and due process are clearly at stake.80
Moreover, the extraordinary success and spread of digital technologies and of the Internet, and thus the increased capacities and role of ISPs themselves, have progressively led the scholarly literature, the CJEU and, eventually, the lawmakers of the EU to rethink the strategy to be followed with respect to intermediary liability. The following subsections investigate precisely these developments in EU digital policies.
3.4.2. Judicial activism of the Luxembourg Court
The changing technological and societal landscape first triggered some important judicial reactions from the CJEU, which attempted to adapt the ECD framework to contemporary needs. With a view to overcoming the inertia of the EU lawmaker, the CJEU took a creative, if not manipulative,81 approach towards the interpretation of the
78 Xxxxx Xxxxxx, ‘The Liability of Internet Intermediaries and the European Court of Human Rights’ in Xxxxxxx Xxxxxxx and Xxxxxx Xxxxxx (eds), Fundamental Rights Protection Online: The Future Regulation of Intermediaries (Xxxxxx Xxxxx Publishing 2020) 268.
79 Xxxxxxx Xxxxxx and Xxxxx Xxxx, ‘Hate Speech on Social Media: Content Moderation in Context’ (2021) 52 Connecticut Law Review 1029. See infra, §5.2.
80 See infra, §5.4.3.
81 Xxxxxx Xxxxxxxxx, Judicial Protection of Fundamental Rights on the Internet: A Road Towards Digital Constitutionalism? (Xxxx 2021) 13.
Directive’s provisions on intermediary liability by clarifying the boundaries of the safe harbour system set therein.
The CJEU's "judicial activism",82 with regard to intermediary liability, was most notably propelled by a series of cases involving the protection of copyright and other intellectual property rights, in which the Luxembourg Court came to interpret the provisions of the ECD, including Article 14 on hosting providers, in light of Recital 42:
The exemptions from liability established in this Directive cover only cases where the activity of the information society service provider is limited to the technical process of operating and giving access to a communication network over which information made available by third parties is transmitted or temporarily stored, for the sole purpose of making the transmission more efficient; this activity is of a mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored.83
On the basis of this wording, the CJEU held in the landmark judgment of Google France84 that a necessary precondition for the applicability of the liability exemption under Article 14 is, precisely, that the hosting provider has acted in a merely technical, automatic and passive way, thus precluding all those intermediaries that intervened actively in the organisation of third-party content from enjoying the prerogatives of the safe harbour system: in other words, only "neutral" ISPs could benefit from the ECD's favourable provisions.85
With the assessment of ISPs' neutrality being left to the discretion of national courts when applying the Directive, the CJEU sought to clarify the key elements, aspects, and features to be considered when making such an evaluation. Thus, Google France ruled out, for instance, that simply requiring the payment of a fee for the provision of referencing services could suffice to prove the non-neutrality of a provider and thus to deprive it of the exemption from liability set within the ECD.86 However, other elements could contribute to such a conclusion, including the provider's active role in drafting a commercial message associated with the impugned links and the active establishment and selection of keywords to be associated with such
82 Xxxxxxxx De Xxxxxxxx, ‘The Rise of Digital Constitutionalism in the European Union’ (2021) 19 International Journal of Constitutional Law 41, 49.
83 ECD rec 42 (emphasis added).
84 Joined Cases C-236/08, C-237/08 and C-238/08, Google France SARL and Google Inc v Xxxxx Vuitton Malletier SA, Google France SARL v Viaticum SA and Luteciel SARL and Google France SARL v Centre national de recherche en relations humaines (CNRRH) SARL and Others [2010] ECLI:EU:C:2010:159. In this decision, the CJEU addressed the issue of the liability of the provider of a referencing service with respect to the unlawful exploitation of keywords by third parties infringing trademarks. More specifically, Google had been sued in France by the owners of distinctive signs who complained that, by selecting keywords identical to trademarks, users were seeing advertisements for counterfeit or imitation products alongside original products. See Xxxxxxxxx, Xxxxxxx and Xx Xxxxxxxx (n 4) 76–77.
85 Google France (n 84) 114–125. According to Xxx Xxxxx, the CJEU's conclusions in this respect were mistaken, as "the actual content of the recital … clearly points to mere conduit and caching providers, and the discussion about these two services is continued in recital 43 … and recital 44". Xxxxxxx Xxx Xxxxx, 'Online Service Providers and Liability: A Plea for a Balanced Approach' (2011) 48 Common Market Law Review 1455, 1482.
86 Google France (n 84) 116.
links.87 Thus, the "non-neutral" character of an intermediary should be assessed by looking at elements other than the simple request for compensation: rather, the ISP must have taken an active role in the actual promotion of certain products, services, or content.
In L'Oréal,88 the Luxembourg judges provided further elements of interpretation. Addressing the lawsuit brought by L'Oréal against eBay for the sale, through the latter's platform, of a number of products in violation of the former's trademark rights, the CJEU confirmed several of the points addressed in Google France, adding that
where … [an] operator has provided assistance which entails, in particular, optimising the presentation of the offers for sale in question or promoting those offers, it must be considered not to have taken a neutral position between the customer-seller concerned and potential buyers but to have played an active role of such a kind as to give it knowledge of, or control over, the data relating to those offers for sale.89
As recently as 2021, YouTube and Cyando90 once again confirmed the principles set in Google France and L'Oréal, holding that providers of content-sharing platforms, in order to enjoy the safe harbour regime of the ECD, must behave as neutral actors.91 The decision, however, is particularly interesting as it also addresses the question of whether the use of AI systems for content moderation and curation should exclude a provider from the exemption from liability. In this respect, the Court clarified that the implementation of technological measures aimed at detecting illegal content, as well as the provision of automated indexing systems, of a search function, and/or of a recommender system suggesting content based on users' profiles or preferences, is "not a sufficient ground for the conclusion that that operator has 'specific' knowledge of illegal activities carried out on that platform or of illegal information stored in it".92
With respect to the implementation of technical systems for moderation, the CJEU also rendered two "twin" landmark decisions interpreting the ECD's prohibition on imposing general monitoring obligations upon providers of intermediary services. Once again
87 ibid 118.
88 Case C-324/09, L’Oréal SA and Others v eBay International AG and Others [2011] ECLI:EU:C:2011:474.
89 ibid 116. In the case at hand, specifically, eBay had actively organized the display of products to be sold, thus assisting and fostering transactions between its clients. Moreover, eBay had been notified by L'Oréal of the actual existence of transactions infringing the firm's property rights and had not taken action.
90 Joined Cases C-682/18 and C-683/18, Xxxxx Xxxxxxxx v Google LLC and Others and Elsevier Inc v Cyando AG [2021] ECLI:EU:C:2021:503. The judgment concerned two separate cases involving the liability of providers of content-sharing platforms for copyright infringement: in YouTube, the plaintiff had brought action against the famous video-sharing platform after a number of videos reproducing singer Sarah Brightman's performances had been uploaded to the Internet in violation of their proprietary rights. In Cyando, Elsevier brought action against the operator of a file-hosting and file-sharing platform where several protected materials had been uploaded and made available for downloading.
91 ibid 105.
92 ibid 114.
addressing the field of copyright infringement, the Scarlet93 and Netlog94 judgments held that ordering ISPs to adopt preventive filtering systems to detect illegally circulated content is not consistent with the Directive, as such an order would require "active observation of all communications conducted on the network of the ISP concerned and, consequently, would encompass all information to be transmitted and all customers using that network".95 In the opinion of the Court, the primary issue, in this respect, concerned the proportionality of such an order. Indeed, an injunction of this type would fail to strike an adequate balance between the fundamental rights concerned, namely the protection of intellectual property on the one hand and the freedom to conduct business on the other,96 and would disproportionately affect users' rights to privacy and freedom of expression as protected under Articles 8 and 11 CFREU.97
Nonetheless, as also recognized by the CJEU,98 the prohibition of general monitoring obligations does not prevent Member States from requiring from ISPs "the termination or prevention of any infringement, including the removal of illegal information or the disabling of access to it",99 as Member States are prevented from imposing monitoring obligations on service providers "only with respect to obligations of a general nature".100 Therefore, as long as an order is sufficiently substantiated and limited in its scope, that is, as to the specific content to be acted upon, it complies with EU law and the ECD. On this point, the Luxembourg Court, this time in a defamation case, offered some significant insights into the breadth of such a power when it rendered, in 2019, the landmark judgment of Glawischnig-Piesczek v Facebook.101
The case concerned the publication by a Facebook user of a post where a thumbnail image portraying Eva Glawischnig-Piesczek, a representative of the Austrian Green Party, was associated with highly derogatory and insulting terms – "lousy traitor", "corrupt oaf", a member of a "fascist party". The Austrian Supreme Court referred some questions to the CJEU regarding, notably, the consistency with EU law of an order requiring a hosting provider such as Facebook to remove content declared to be illegal, as well as the territorial and material scope that such an injunction might have. With respect to the
93 Case C-70/10, Scarlet Extended SA v Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM) [2011] ECLI:EU:C:2011:771.
94 Case C-360/10, Belgische Vereniging van Auteurs, Componisten en Uitgevers CVBA (SABAM) v Netlog NV [2012] ECLI:EU:C:2012:85.
95 Scarlet (n 93) para 39. Similarly, Netlog (n 94) para 38.
96 Scarlet (n 93) paras 49, 53; Netlog (n 94) paras 44–47.
97 Scarlet (n 93) paras 50–53; Netlog (n 94) paras 48–51.
98 Scarlet (n 93) paras 30–31; Netlog (n 94) paras 28–29.
99 ECD rec 45.
100 ibid rec 47.
101 Case C-18/18, Eva Glawischnig-Piesczek v Facebook Ireland Limited [2019] ECLI:EU:C:2019:821. With respect to this decision, see, among others, Aleksandra Kuczerawy, ‘General Monitoring Obligations: A New Cornerstone of Internet Regulation in the EU?’ in Centre for IT & IP Law (ed), Rethinking IT and IP law: Celebrating 30 years CiTiP (Intersentia 2020); Daphne Keller, ‘Facebook Filters, Fundamental Rights, and the CJEU’s Glawischnig-Piesczek Ruling’ (2020) 69 GRUR International 616; Giovanni De Gregorio, ‘Google v. CNIL and Glawischnig-Piesczek v. Facebook: content and data in the algorithmic society’ (2020) 1 Rivista di Diritto dei Media 249.
territorial scope, the CJEU held that the ECD's prohibition of general monitoring obligations did not preclude domestic courts from ordering the removal of that illegal content on a global scale.102 As regards the material scope, the Luxembourg Court concluded that a removal order may encompass not only the content that has been found to be illegal but also any content that is "identical" or "equivalent" to it, provided, in the last case, that
the monitoring of and search for the information concerned by such an injunction are limited to information conveying a message the content of which remains essentially unchanged compared with the content which gave rise to the finding of illegality and containing the elements specified in the injunction, and provided that the differences in the wording of that equivalent content, compared with the wording characterising the information which was previously declared to be illegal, are not such as to require the host provider to carry out an independent assessment of that content.103
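To make the operational meaning of this standard more concrete, the following Python sketch contrasts the detection of "identical" content (exact re-postings) with that of "equivalent" content (messages retaining the specific elements identified in the injunction despite changes in wording). It is a deliberately simplified illustration: all identifiers and the sample order are hypothetical, and no actual moderation system is being described. It merely shows why such matching can be delegated to automated tools without requiring the host provider to carry out an independent assessment of each new post.

import hashlib
from dataclasses import dataclass
from typing import List

@dataclass
class Injunction:
    """Hypothetical model of a removal order in the spirit of Glawischnig-Piesczek."""
    original_text: str            # the statement declared illegal by the court
    required_elements: List[str]  # specific elements identified in the injunction

def fingerprint(text: str) -> str:
    """Normalise whitespace and case, then hash, to detect identical re-postings."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def is_identical(post: str, injunction: Injunction) -> bool:
    return fingerprint(post) == fingerprint(injunction.original_text)

def is_equivalent(post: str, injunction: Injunction) -> bool:
    """Flag posts that retain every element specified in the injunction.

    This deliberately crude test requires each listed element to appear verbatim,
    so the provider never has to make an independent legal assessment of the new
    wording; it is the kind of mechanical matching that automated tools can perform.
    """
    lowered = post.lower()
    return all(element.lower() in lowered for element in injunction.required_elements)

# Hypothetical example loosely inspired by the wording reported in the judgment
order = Injunction(
    original_text="X is a lousy traitor and a corrupt oaf",
    required_elements=["X", "lousy traitor", "corrupt oaf"],
)
print(is_identical("x is a LOUSY traitor and a corrupt oaf", order))                    # True
print(is_equivalent("in my opinion X remains a lousy traitor, a corrupt oaf", order))   # True
print(is_equivalent("X made a questionable political choice", order))                   # False

Whether such mechanical element-matching captures genuinely "equivalent" hate speech, or instead sweeps in lawful speech, is precisely where the critical assessments discussed below come into play.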
However, according to Keller, Glawischnig-Piesczek v Facebook fails to address in a satisfactory manner the implications that its findings may have for the fundamental rights not only of hosting providers but, even more, of the users of the Internet themselves: namely, their rights to privacy and data protection; to freedom of
102 "In order to answer that question, it must be observed that, as is apparent, notably from Article 18(1), Directive 2000/31 does not make provision in that regard for any limitation, including a territorial limitation, on the scope of the measures which Member States are entitled to adopt in accordance with that directive. Consequently, and also with reference to paragraphs 29 and 30 above, Directive 2000/31 does not preclude those injunction measures from producing effects worldwide. However, it is apparent from recitals 58 and 60 of that directive that, in view of the global dimension of electronic commerce, the EU legislature considered it necessary to ensure that EU rules in that area are consistent with the rules applicable at international level. It is up to Member States to ensure that the measures which they adopt and which produce effects worldwide take due account of those rules". Glawischnig-Piesczek (n 101) paras 49–52. Similarly, with respect to the so-called "right to be forgotten", see Case C-507/17, Google LLC, successor in law to Google Inc, v Commission nationale de l'informatique et des libertés (CNIL) [2019] ECLI:EU:C:2019:772 [72], where the CJEU concluded that, while such a right, under EU law, does not entail the obligation for a search engine to carry out the de-referencing of the personal data requested on all its versions, the authorities of the Member States concerned are not prevented from issuing de-referencing orders applicable also outside the Union. In fact, the CJEU's main focus, in Google v CNIL, upon the impossibility of recognizing an extraterritorial scope of action under EU law had led many commentators to argue that the two decisions, though close in time, were inconsistent with each other. However, as highlighted, among others, by De Gregorio, both Glawischnig-Piesczek and Google v CNIL "lead to the same result, namely that EU law does not either impose or preclude national measures whose scope extends worldwide". De Gregorio, 'Google v. CNIL and Glawischnig-Piesczek v. Facebook' (n 101) 259. See also, on the links and connections between the two decisions, Oreste Pollicino, 'L'"Autunno Caldo" Della Corte Di Giustizia in Tema Di Tutela Dei Diritti Fondamentali in Rete e Le Sfide Del Costituzionalismo Alle Prese Con i Nuovi Poteri Privati in Ambito Digitale' (2019) 19 Federalismi.it 1.
103 Glawischnig-Piesczek (n 101) para 55. The CJEU, in this respect, clarifies that "it is important that the equivalent information … contains specific elements which are properly identified in the injunction, such as the name of the person concerned by the infringement determined previously, the circumstances in which that infringement was determined and equivalent content to that which was declared to be illegal. Differences in the wording of that equivalent content, compared with the content which was declared to be illegal, must not, in any event, be such as to require the host provider concerned to carry out an independent assessment of that content. In those circumstances, an obligation such as the one described … on the one hand – in so far as it also extends to information with equivalent content – appears to be sufficiently effective for ensuring that the person targeted by the defamatory statements is protected. On the other hand, that protection is not provided by means of an excessive obligation being imposed on the host provider, in so far as the monitoring of and search for information which it requires are limited to information containing the elements specified in the injunction, and its defamatory content of an equivalent nature does not require the host provider to carry out an independent assessment, since the latter has recourse to automated search tools and technologies". ibid 45–46.
expression and information; to a fair trial and effective remedy; and, finally, to equality and non-discrimination, which may be affected by the use of biased and under-representative automated content filters.104
The CJEU case law referred to above traces a roadmap promoted by the Court with respect to intermediary liability for third-party content, a roadmap indicating the Luxembourg judges' willingness to set aside the inherently liberal approach of the first years of the twenty-first century. Although such case law did not specifically address the subject of hate speech governance, the push for an update of the previous framework clearly has – and will likely have even more in the future – an impact also upon that area, by encouraging the EU lawmaker to draft new legislation holding ISPs accountable for the spread of illegal content: most notably, it set a guideline for the adoption of the Digital Services Act.105
Particularly relevant for the purposes of hate speech governance is, seemingly, the CJEU's judgment in Glawischnig-Piesczek v Facebook, as it opens up the possibility, in cases where content is found to be unlawful hate speech, of issuing an order requiring a hosting provider to remove all "equivalent" content. Such a conclusion may have both positive and negative consequences. Indeed, while it may well represent an additional instrument in the hands of domestic courts to counter the dissemination of hate speech across the Internet, this kind of injunction would likely cause ISPs to implement rather restrictive content moderation filters, with little regard, as noted by Keller, to guarantees ensuring the fundamental rights of the recipients of the service. The CJEU's holding may thus contribute, in the future, to a higher removal rate, in absolute terms, of hate speech content across the Internet: nonetheless, it may hinder a substantive equality-oriented strategy against hate speech, such as that suggested in Chapter 2, notably by risking a disproportionate impact upon the participation of minority and discriminated groups in the online digital environment.
3.4.3. A new phase for the EU
3.4.3.1. The “new season” of content moderation regulation
Against this backdrop, the second half of the 2010s saw the beginning of a new season for content moderation regulation within the EU, with the adoption of a rather wide array of new pieces of legislation requiring intermediaries to comply with duties and obligations to moderate online content so as to prevent the spread of illegal and/or harmful material through the Internet.106
104 Keller (n 101) 2.
105 See infra, §5.
106 Claudia E Haupt, ‘Regulating Speech Online: Free Speech Values in Constitutional Frames’ (2021) 99 Washington University Law Review 751, 760; De Gregorio, ‘The Rise of Digital Constitutionalism in the European Union’ (n 82).