
Monday, 7 October 2019

Facebook’s liability for defamatory posts: the CJEU interprets the e-commerce Directive




Lorna Woods, Professor of Internet Law, University of Essex

The last couple of weeks have seen a number of judgments relating to the control of information on the internet by the subject of the information. The cases of GC et al (Case C-136/17) and Google v CNIL (Case C-507/17) concern the interpretation of the General Data Protection Regulation (GDPR), looking at the obligations of search engines. The most recent case, Glawischnig-Piesczek v Facebook (Case C-18/18), concerned the impact of the e-Commerce Directive (Directive 2000/31/EC), specifically the prohibition on general monitoring found in Article 15 of that Directive, on ‘stay down’ notices. The focus of this post is on Glawischnig-Piesczek, but it raises a question that reaches beyond the impact of that case on the e-Commerce Directive: to what extent is there a coherent approach to issues arising from the Internet across the various legal measures that intersect with it? This may go beyond the e-Commerce Directive and the GDPR to include measures related to intellectual property (notably the recent controversial Directive on Copyright in the Digital Single Market (Directive (EU) 2019/790) and the Enforcement Directive (Directive 2004/48/EC)), the combating of child exploitation (Directive on combating the sexual abuse and sexual exploitation of children and child pornography (Directive 2011/93/EU)) and terrorism (Terrorism Directive (Directive (EU) 2017/541)).

The Judgment

The facts giving rise to this reference are simple.  Glawischnig-Piesczek complained to Facebook about some defamatory posts. Facebook did not remove the posts so Glawischnig-Piesczek obtained a court order requiring Facebook to stop publishing the impugned content. The precise scope of the order gave rise to further litigation and the Austrian Supreme Court referred a number of questions to the CJEU, asking:

-          Are “stay down” notices in relation to identically worded content compatible with Article 15 of the e-commerce Directive?
-          Are there geographic limitations to the obligation?
-          Are such notices acceptable in relation to content which is equivalent to that found unlawful, in that it does not use the same words but conveys the same meaning?
-          In relation to posts with equivalent meaning, does the obligation arise only when the intermediary becomes aware of that content?

The Court started its analysis by making clear that the immunity from suit granted by Article 14 of the Directive is not a general immunity from every legal obligation. Specifically, national authorities remain competent to require a host to terminate access to, or remove, illegal information. The Court also noted that Article 18 of the e-Commerce Directive requires Member States to have in place appropriate court actions to deal with illegal content. It states:

Member States shall ensure that court actions available under national law concerning information society services’ activities allow for the rapid adoption of measures, including interim measures, designed to terminate any alleged infringement and to prevent any further impairment of the interests involved.

The Court held that no limitation on the scope of such national measures can be inferred from the text of the e-Commerce Directive [para 30]. 

It then turned to the impact of Article 15. It highlighted that, while Article 15 prohibits general monitoring, recital 47 in the preamble of the Directive makes clear that monitoring ‘in a specific case’ does not fall within that prohibition. It then held that

[s]uch a specific case may, in particular, be found, as in the main proceedings, in a particular piece of information stored by the host provider concerned at the request of a certain user of its social network … [35].

Given the nature of information flows there is a risk that any such information may be re-posted, so

‘… in order to ensure that the host provider at issue prevents any further impairment of the interests involved, it is legitimate for the court having jurisdiction to be able to require that host provider to block access to the information stored, the content of which is identical to the content previously declared to be illegal, or to remove that information, irrespective of who requested the storage of that information. In particular, in view of the identical content of the information concerned, the injunction granted for that purpose cannot be regarded as imposing on the host provider an obligation to monitor generally the information which it stores, or a general obligation actively to seek facts or circumstances indicating illegal activity, as provided for in Article 15(1) of Directive 2000/31’ [37].

The Court took “equivalent meaning” to refer to information whose message remains “essentially unchanged” from that of the post previously found illegal. Given the focus on meaning rather than form, the Court held that an injunction could extend to non-identical posts, as otherwise the effects of an injunction could easily be circumvented. The Court then considered the balance between the competing interests. It commented that the “equivalent information” identified by a court order should contain specific elements identifying the offending content and, in particular, must not require the host to carry out its own independent assessment. In assessing the burden on the host, the Court noted that the host would have recourse to “automated search tools and technologies” [para 46].

The Court concluded that such injunctions would not constitute a general obligation to monitor all content and, specifically, no obligation to seek facts or circumstances indicating illegal activity.

The Court also noted that Article 18 of the Directive makes no provision for territorial limitations on what measures Member States may make available. In principle, world-wide effects would be permissible [para 50], but this is subject to the proviso that EU rules must be consistent with the international law framework.

The Court considered it unnecessary to respond to the third question, without elaborating further.

Comment

A number of comments could be made about this judgment. This post focuses on three: the Court’s approach to general monitoring; non-identical content; and territorial scope. It also discusses freedom of expression issues.

General Monitoring

In this judgment there is a clear confirmation that searching for specific pieces of information or types of content does not constitute general monitoring. The Court makes clear, at para 35, that searching for individual pieces of content constitutes a ‘specific case’ within recital 47. The Court gives the search for specific information as an example of a ‘specific case’; presumably the searching of a targeted user’s stored data could be another. This is the first time the approach has been adopted in relation to defamation. Perhaps the fact that the case concerns defamation rather than, for example, intellectual property explains the dearth of previous case law cited in the Court’s judgment.

The statement of the Court that searching for an individual item of content does not constitute general monitoring does not address the fact that such a search would presumably involve searching all content held. Yet in McFadden (Case C-484/14), the Court described the scope of Article 15 thus:

As regards, first, monitoring all of the information transmitted, such a measure must be excluded from the outset as contrary to Article 15(1) of Directive 2000/31, which excludes the imposition of a general obligation on, inter alia, communication network access providers to monitor the information that they transmit. [McFadden, 87]

This is broadly similar to the approach in the earlier case of L’Oréal (Case C-324/09), which referred to Article 15 precluding ‘an active monitoring of all the data of each of its customers in order to prevent any future infringement ...’ [para 139]. In the context of L’Oréal, however, the prevention of future infringements could – in the view of the Court – be achieved by the suspension of the perpetrator. Yet monitoring the data of all customers seems to be precisely what would be required to find the specific item of content. The matter remains unacknowledged in the Court’s analysis in Glawischnig-Piesczek. Perhaps the assumption is (a) that the concern underpinning the prohibition in Article 15 derives from privacy; and (b) that when we look for one thing we do not really see the other things we look at during our search – and that this might particularly be so when the searching is automated. Of course, arguments that automated review of communications data does not constitute an invasion of privacy have not been accepted by the Court (e.g. Watson/Tele2). In any event, further support for the distinction between searching for an identified piece of content and searching in a less targeted fashion is found in the context of the fight against child sexual abuse and exploitation and in the context of enforcement of intellectual property.

Non-identical content

The clarification that an injunction may also extend to non-identical content raises a number of issues. While the Court states that a host should not be required to make its own judgment on these matters, it is not clear how similar the content needs to be. It is also unclear whether the Court is concerned here with the wording or with the message conveyed. At [40] it refers to the ‘message conveyed’, which could refer to the idea in issue rather than its precise expression. The Court then referred to ‘information, the content of which, whilst essentially conveying the same message, is worded slightly differently …’ [41]. An approach based on wording (or, presumably, identifiable items of content such as images) has the benefit of being more easily described in an order, but it is probably easier to circumvent. There seems to be an assumption in the judgment that technological means are available to implement this sort of requirement, though whether that is the case is another question; a rough illustration of the difference between the two tasks is sketched below.
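As a purely illustrative aside, the following minimal sketch (in Python; the post texts and helper names are invented) shows why matching identical content is technically trivial for a host while matching ‘equivalent’ content is not: a normalised hash catches verbatim re-posts, but any re-wording produces a different fingerprint and would need a further, far less reliable, layer of semantic analysis.

```python
import hashlib
import re

def normalise(text: str) -> str:
    """Lower-case, strip punctuation and collapse whitespace so that
    trivial edits (capitalisation, extra spaces) do not defeat the match."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def fingerprint(text: str) -> str:
    """Stable hash of the normalised text, usable as a block-list key."""
    return hashlib.sha256(normalise(text).encode("utf-8")).hexdigest()

# Hypothetical wording previously declared illegal by a court order.
blocked = {fingerprint("X is a lousy traitor and a corrupt oaf.")}

def is_identical_repost(new_post: str) -> bool:
    """Catches verbatim or near-verbatim re-posts only: a re-wording such as
    'X is a treacherous, corrupt fool' yields a different fingerprint and
    slips through - the 'equivalent content' problem the judgment leaves open."""
    return fingerprint(new_post) in blocked

print(is_identical_repost("x is a LOUSY traitor and a corrupt  oaf"))  # True
print(is_identical_repost("X is a treacherous, corrupt fool."))        # False
```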

Territorial Scope

The territorial scope of the order is also worth mentioning. Like its Advocate General, Szpunar, the Court does not envisage any territorial limitation of the removal/blocking obligation as a result of EU law. It is important to note that the Court does not say that injunctions must have such extraterritorial effect. Rather, the question is about the interplay between each national legal system’s own way of dealing with these issues (an area in which, the Court noted, Member States have wide discretion) and the fact that Article 18 is silent as to any limitations. The silence of EU law leaves Member States free to take action. A further issue is that the Court noted the impact of international law without elaborating on what it meant – are we talking about fundamental human rights (this seems unlikely given the existence of the EU Charter) or international comity, for example?

This is one of a number of cases in which the Court has had to consider the territorial scope of EU law in the context of the Internet – the most recent being the Right to be Forgotten case: Google v CNIL (Case C-507/17). There the Court held that

Where a search engine operator grants a request for de-referencing …, that operator is not required to carry out that de-referencing on all versions of its search engine, but on the versions of that search engine corresponding to all the Member States. [73]

It might seem, then, that the opposite conclusion was reached there. That would be overstating the point. While EU law does not require extraterritoriality, the GDPR’s silence on the point gives space to a national court to make an order with extra-territorial effect, a point the Court makes express in para 72 of that judgment. A national court could then, within the constraints of its own national rules, make such an order. In such a situation the position would seem similar to that under Article 18 of the e-Commerce Directive as understood in Glawischnig-Piesczek. A contrast to the silence of the EU legislature on the extraterritoriality of blocking/de-listing can be seen in the Terrorism Directive: Article 21 imposes an obligation on Member States to obtain the removal of terrorist content hosted outside their territory, but it also recognises that this may not be possible.

In Google v CNIL, while the Court recognised the possibility for national courts to make orders for de-referencing with extra-territorial effect, it expressly noted that in doing so they must weigh up the competing interests of data subjects and the right of others to freedom of information [para 72]. It is noticeable that in Glawischnig-Piesczek the balancing is different. The Court notes the interest of the subject of the information and also the need not to impose an excessive burden on the host provider [paras 45-46]. Other rights – the right of the host to carry on a business, and the rights of those posting the material and of those wishing to receive it (both aspects of freedom of expression) – are not expressly mentioned. To some extent the issue of rights will be covered through the national courts, which will be the bodies carrying out that balancing within their own national frameworks and within the limits of EU law. By contrast to Google v CNIL, however, there is no instruction from the Court that these are matters to be considered, nor any express recognition that the balance between the right to private life (including the protection of reputation) and freedom of expression differs between territories. What might be seen as the legitimate protection of private life in one place is an infringement of speech in another. So, while the matter was not directly the Court’s concern in this case, it is somewhat surprising that these issues were not directly considered.

Barnard & Peers: chapter 9
Photo credit: Department of Defense, via Wikicommons

Friday, 17 May 2019

Facebook, defamation and free speech: a pending CJEU case






Preliminary Notes on the Pending Case Glawischnig-Piesczek v. Facebook Ireland Limited

Dr Paolo Cavaliere, University of Edinburgh Law School, paolo.cavaliere@ed.ac.uk

Introduction

In the next few months the Court of Justice of the European Union is expected to deliver a decision with the potential to become a landmark in the fields of political speech and intermediary liability (the Advocate-General’s opinion is due on 4 June). The Court will have to rule on two intertwined yet distinct questions: first, the case opens a new front in the delineation of platforms’ responsibility for removing illegal content, focusing on whether such obligations should extend to content identical, or even similar, to posts already declared unlawful. Secondly, the decision will also determine whether such obligations can be imposed even beyond the territorial jurisdiction of the seised court. What is ultimately at stake is how much responsibility platforms should be given in making proactive assessments of the illegality of third-party content, and how much power courts should be given in imposing their own standards of acceptable speech across national boundaries.

Summary of the case

The plaintiff is a former Austrian MP and spokeswoman of the Green Party who – before retiring from politics – was reported as criticizing the Austrian Government’s stance on the refugee crisis in an article published by the news magazine oe24.at in April 2016. A Facebook user shared the article on her private profile along with her own comment, which included some derogatory language. In July, the plaintiff contacted Facebook and requested that the post be removed, only for the platform to decline the request as it did not find the post in breach of either its own terms of use or domestic law. The plaintiff then filed an action for interim injunctive relief seeking removal of the original post and of any other post on the platform with ‘analogous’ content. After the Commercial Court of Vienna found the post unlawful, Facebook removed it.

However, that Court considered that Facebook, by failing to remove the original post upon the plaintiff’s first notice, was not covered by the exemption from secondary liability, and ordered the platform to remove any further post that would include the plaintiff’s picture alongside identical or ‘analogous’ comments. The Higher Regional Court of Vienna then found that such an obligation would amount to an obligation of general monitoring on Facebook’s part and set aside the second part of the injunction, while upholding the finding that the original post was manifestly unlawful and should have been removed by the platform following the first notification from the plaintiff. The Higher Court also confirmed that Facebook should remove any future posts that would include the same derogatory text alongside any image of the plaintiff. Facebook appealed this decision to the Austrian Supreme Court.

The Supreme Court referred two main sets of questions to the CJEU:

- First, whether an obligation for host providers to remove posts that are ‘identically worded’ to other illegal content would be compatible with Article 15(1) of the E-commerce Directive. If the answer is positive, the Court asks whether this obligation could extend beyond identical content to include content that is analogous in substance, despite a different wording. These are ultimately questions concerning the responsibility that platforms can be given in making their own assessment of what content amounts to unlawful speech, and the limits of “active monitoring”.

- Second, whether national courts can order platforms to remove content only within the national boundaries, or beyond (‘worldwide’). This is a question concerning the admissibility of extra-territorial injunctions for content removal.

Analogous content and active monitoring

To start with, it needs to be clarified that the dispute effectively concerns political speech, and only formally a case of defamation. The post on Facebook was considered by the Austrian court to be in breach of Art 1330 of the Austrian Civil Code, which protects individual reputation. However, the status of the plaintiff, who served as the spokeswoman of a national political party at the time, gives a different connotation to the issue. Established case-law of the European Court of Human Rights (Lingens v. Austria, 1986; Oberschlick v. Austria (no. 2), 1997) has repeatedly found that the definition of defamation in relation to politicians must be narrower than usual and the limits of acceptable criticism wider, especially where the politician’s own public statements have invited criticism. In this case, the plaintiff had made public statements concerning her party’s immigration policy: this is relevant since the ECtHR traditionally identifies political speech with matters of public interest and requires interferences to be kept to a minimum. By established European standards the impugned content here amounts to political commentary, and the outcome of the case could therefore set a new standard for the treatment of political speech online.

Intermediaries enjoy a series of immunities under the E-commerce Directive, which also notably prohibits state authorities from imposing general monitoring obligations. The 2011 Report of the Special Rapporteur to the General Assembly on the right to freedom of opinion and expression exercised through the Internet clarified that blocking and filtering measures are justified in principle when they specifically target categories of speech prohibited under international law, and that the determination of the content to be blocked must be undertaken by a competent judicial authority. A judicial order determining the exact words (‘lousy traitor’, ‘corrupt oaf’, ‘fascist party’) may provide adequately precise guidance for platforms to operate, depending on how precisely the contours of the order are drawn.

To put the question in context, the requirement to remove ‘identical’ content marks the latest development in a growing trend to push platforms to take active decisions in content filtering. It cannot be ignored that the issue of unlawful content re-appearing, in identical or substantially equivalent forms, is becoming increasingly worrisome. In a workshop held in 2017, delegates from the EU Commission heard from industry stakeholders that the problem of repeat infringers has become endemic to the point that, for those platforms that implement notice-and-takedown mechanisms, 95% of notices reported the same content to the same sites, at least in the context of intellectual property infringements. If rates of re-posting of content infringing other personality rights such as reputation are anecdotally similar, then any attempt to clear platforms of unlawful content recalls the proverbial endeavor of emptying the ocean with a spoon.

Nonetheless, the risk of overstepping the limits of desirable action is always looming. A paradigmatic example comes from early drafts of Germany’s Network Enforcement Law, which included a requirement for platforms to prevent re-uploads of the same content already found unlawful – a provision that closely resembles the one at stake here. The requirement was removed from the final version of the statute amid fears of over-blocking and concerns that automated filters would not be able, at the current state of technology, to correctly understand the nuances and context of content that is similar or equivalent at face value, such as irony or critical reference.

The decision of the German law-makers to drop the requirement – evidently considered a step too far even in the context of a statute already widely criticised for failing to strike a suitable balance between platform responsibilities and freedom of expression – is indicative of the high stakes in the decision the CJEU faces. A positive answer from the CJEU would mean a resurgence of this abandoned provision on a Europe-wide scale.

The idea of platforms monitoring re-uploaded content has been gaining traction in digital industries for a while now and is trickling down into content regulation. In the field of SEO, the concept of “duplicate content” refers to content that has been copied or reused from other web pages, sometimes for legitimate purposes (e.g. providing a mobile-friendly copy of a webpage), sometimes resulting in flagrant plagiarism. Yet definitions diverge when it comes to the criteria considered: while duplicate content is most commonly defined as ‘identical or virtually identical to content found elsewhere on the web’, Google stretches the boundaries to encompass content that is ‘appreciably similar’ (a rough sketch of how such a test might be mechanised is given below). Content regulation simply cannot afford the same degree of flexibility in defining ‘identically worded’ content, as the criterion of judicial determination required by the Special Rapporteur and the prohibition of general monitoring obligations in the E-commerce Directive exclude it.
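This is purely illustrative: a minimal Python sketch (the threshold and example texts are invented) of a word-shingle (Jaccard) similarity test of the kind that might operationalise ‘appreciably similar’. Such a score can flag likely near-duplicates, but it says nothing about whether a flagged post is, in its context, lawful commentary or unlawful speech.

```python
def shingles(text: str, n: int = 3) -> set:
    """Break a text into overlapping word n-grams ('shingles')."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a: str, b: str) -> float:
    """Overlap between the two shingle sets: 1.0 means identical word
    sequences, values near 0.0 mean little textual overlap."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

SIMILARITY_THRESHOLD = 0.6  # arbitrary, purely illustrative cut-off

def appreciably_similar(candidate: str, reference: str) -> bool:
    """Flags texts whose wording substantially overlaps with the reference;
    it cannot assess meaning, context, irony or lawfulness."""
    return jaccard(candidate, reference) >= SIMILARITY_THRESHOLD

reference = "the politician is a lousy traitor and a corrupt oaf"
print(appreciably_similar("the politician is a lousy traitor and a corrupt fool", reference))  # True
print(appreciably_similar("her immigration policy deserves serious criticism", reference))     # False
```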

In the area of copyright protection, it is in principle possible for service providers like YouTube to automatically scan content uploaded by private users and compare it to a database of protected works provided by rights-holders. In the case of speech infringing personality rights and other content-based limitations, discourse analysis is necessary to understand the context, and this kind of task would evidently amount to a private intermediary making a new determination on the legality of the speech.

The assessment of what amounts to unlawful speech rarely depends on the wording alone; context plays a fundamental role in the assessment, and that is anything but a straightforward exercise. The European Court of Human Rights’ case-law includes several examples of complex evaluations of local circumstances to determine whether or not an interference with speech would be justified.

For instance, in Le Pen v. France (2010), the Court considered that comments which could seem at face value derogatory towards a minority nonetheless needed to be considered in the context of an ongoing general debate within the country, and stressed that the domestic courts should be responsible for assessing the breadth and terms of the national debate and how to take it into account when determining the necessity of the interference. In Ibragimov v. Russia (2018), the Court noted that the notion of an attack on religious convictions can change significantly from place to place, as no single standard exists at the European level, and that, as with political debates, domestic authorities are again best placed to ascertain the boundaries of criticism of religions ‘[b]y reason of their direct and continuous contact with the vital forces of their countries’. The historical context is consistently taken into account to determine whether a pressing social need exists for a restriction, and is enough to justify different decisions in respect of speech that otherwise appears strikingly similar.

For instance, outlawing Holocaust denial can be a legitimate interference in countries whose historical legacies justify proactive measures taken in an effort to atone for their responsibility in mass atrocities (see Witzsch v. Germany (no. 1), 1999; Schimanek v. Austria, 2000; Garaudy v. France, 2003); whereas a similar statute prohibiting the denial of the Armenian genocide was an excessive measure in a country like Switzerland, which has no strong links with the events of 1915 in the Ottoman Empire (Perinçek v. Switzerland, 2015).

The intricacies of analysing the use of language against a specific historical and societal context are perhaps best illustrated by the Court’s minute analysis in Dink v. Turkey (2010). The Court was confronted with expressions that could very closely resemble hate speech: language such as ‘the purified blood that will replace the blood poisoned by the “Turk” can be found in the noble vein linking Armenians to Armenia’, and references to the Armenian origins of Atatürk’s adoptive daughter, included in articles written by the late Turkish journalist of Armenian origin Fırat Dink. The Court eventually came to the conclusion that it was not Turkish blood that Dink referred to as poison, but rather attitudes of the Armenian diaspora’s campaign which he intended to criticise. The Court built extensively on the assessment made by the Principal State Counsel at the Turkish Court of Cassation – who analysed all of Dink’s articles published between 2003 and 2004 – in order to ascertain whether these expressions amounted to denigrating Turkishness, and in what ways references to blood and to the origins of Atatürk’s daughter touched on sensitive subjects in Turkish ultranationalist circles and were liable to ignite animosity.

Not only does social and political context matter; often it is precisely the use of language in a culturally specific way that forms a fundamental part of the Court’s assessment, with the conclusion that words alone have little importance and it is instead their use in specific contexts that determines whether or not they cross the boundaries of lawful speech. In Leroy v. France (2008), the Court went to great lengths in evaluating the use of the first person plural “we” and a parodic quotation of an advertising slogan in establishing that a cartoon mocking the 9/11 attacks amounted to hate speech.

Beyond the Court’s experience, examples abound of words that, though otherwise innocuous, can become slurs if used in a certain context: for instance, the term ‘shiptari’, used in South Slavic-speaking countries to refer to Albanians, acquires a particularly nasty connotation in Serbia especially, as it was often used by Slobodan Milošević to show contempt for the Albanian minority in Yugoslavia. In Greece, the term lathrometanastes (literally ‘illegal immigrants’) has been appropriated and weaponised by alt-right rhetoric to purposefully misrepresent the legal status of asylum seekers and refugees in an attempt to deny them access to protection and other entitlements, and now arguably lies outside the scope of legitimate political debate,[1] to the point that it has been included in specialised research on indicators of intolerant discourse in European countries.

This handful of examples shows how language needs to be understood in the context of historical events and social dynamics, and how words can often convey a sense beyond their apparent meaning. While for domestic and supranational courts this seems challenging enough already, the suggestion that it would suffice for platforms to mechanically check for synonyms and turns of phrase is simplistic at best.

Extraterritorial injunctions

This plain observation calls into question whether it would be appropriate for the CJEU to answer the question on extra-territorial injunctions in the positive. The Austrian Court’s order is in fact addressed to an entity based outside the Court’s territorial jurisdiction, and the order sought is to operate beyond Austrian territory. To clarify, the novelty here resides not in the seising of the Austrian courts, but rather in the expansive effect of their decision; the question is whether it would be appropriate for the effects of the injunction sought to extend beyond the limits of the national jurisdiction and effectively remove content from Facebook at the global level.

The Court of Justice has already interpreted jurisdiction in a similarly expansive way on a few occasions. In L’Oréal v. eBay (2011) the Court decided to apply EU trade-mark law on the basis that trade-mark protected goods were offered for sale from locations outside the EU but targeted at consumers within the EU. In Google Spain (2014), the Court decided to apply EU data protection law to the processing of a European Union citizen’s personal data carried out ‘in the context of the activities’ of an EU establishment, even though the processing was carried out by an entity based in a third country. The Court considered that delimiting the geographical scope of de-listings would prove unsatisfactory for protecting the rights of data subjects. Similar reasoning was the basis for deciding in Schrems (2015) that EU data protection law should apply to the transfer of personal data to the US.

One common element emerges from the case-law of the Court of Justice so far: the extraterritorial reach of court orders appears to be a necessary measure to ensure the effectiveness of EU rules and the protection of citizens’ or businesses’ rights. The Court has been prepared to grant extraterritorial reach when fundamental rights of European Union citizens were at stake (for instance in the context of the processing of personal data) or when, in the case of territorially protected rights such as trade marks, conduct happening abroad directly challenged the protected right within the domestic jurisdiction. It is doubtful that the present case resembles either of these circumstances; in fact, limiting political speech requires a different analysis.

A politician is certainly entitled to protect their own reputation; however, when the criticism encompasses aspects of an ongoing public debate, the limits of acceptable speech broaden considerably: whether the speech falls within, and contributes to, an ongoing social conversation is very much a factual and localised consideration. Conversations that are irrelevant or even offensive within one national public sphere could very well be of the utmost relevance to communities based elsewhere, especially minorities or diasporas, who could find themselves deprived of their fundamental right to access information.

The CJEU has traditionally paid attention to connecting factors justifying extraterritorial orders. Following its own jurisprudence, it will now be faced with the challenge of identifying a possible connecting element to justify a worldwide effect of the Austrian Court’s local assessment. It needs to be recalled that a fundamental tenet of L’Oréal is the principle that the mere accessibility of a website is not a sufficient reason to conclude that a jurisdiction is being targeted, and that it is for national courts to make the assessment. With the exception of the ECtHR, which applies to date one of the most expansive jurisdictional approaches (Perrin, 2005), international policy-makers (such as, for instance, the UN Special Rapporteur on Freedom of Opinion and Expression, the OSCE Representative on Freedom of the Media, the OAS Special Rapporteur on Freedom of Expression and the ACHPR Special Rapporteur on Freedom of Expression and Access to Information in their Joint Declaration on Freedom of Expression and the Internet of 2011, among others) and most courts favour a different approach inspired by judicial self-restraint, placing emphasis on a ‘real and substantial connection’ to justify jurisdiction over Internet content.

When personality rights are at stake, the recent CJEU decision in Bolagsupplysningen of 2017 (incidentally, derogating from the more established CJEU decisions in Shevill, 1995, and eDate, 2011) suggested that, when incorrect online information infringes personality rights, applications for rectification and/or removal are to be considered single and indivisible, and a court with jurisdiction can rule on the entirety of an application.

However, this precedent (already controversial in its own right) seems unfit for application to this case. Bolagsupplysningen builds on the same rationale as the other CJEU decisions mentioned above: that expansive jurisdiction can be justified by the need to guarantee the protection of citizens’ fundamental rights and not to see them frustrated by scattered territorial application. In the case of political speech, where limitations need to be justified by an overriding public interest, typically public safety, the connecting element the Court looks for becomes immediately less apparent, as it cannot be assumed that the same speech would prove equally inflammatory in different places under different social and political circumstances. In other words, public order lends itself better to territorially sensitive protection.

This taps into the assessment of the necessity and proportionality of the measure that decision-makers need to make before removing content. As a matter of principle, the geographical scope of a limitation is part of the least restrictive means test: the ECtHR, for instance, in Christians against Racism and Fascism (1980) considered that even when security considerations outweigh the disadvantage of suppressing speech and thus justify the issue of a ban, that ban would still need a ‘narrow circumscription of its scope in terms of territorial application’ to limit its negative side effects. Similarly, the 2010 OSCE/ODIHR – Venice Commission Guidelines on Freedom of Peaceful Assembly (as quoted by the ECtHR in Lashmankin v. Russia of 2017) consider that ‘[b]lanket restrictions such as a ban on assemblies in specified locations are in principle problematic since they are not in line with the principle of proportionality which requires that the least intrusive means’ be used, and thus restrictions on the locations of public assemblies need to be weighed against the specific circumstances of each case. Translated into the context of digital communications, the principle requires that the territorial scope of content removal orders be narrowly circumscribed and strictly proportionate to the interest protected. An injunction to remove commentary on national politics worldwide, as a result, seems unlikely to be considered the least intrusive means.

Conclusions

The decision of the CJEU has the potential to change the landscape of intermediary responsibility and the boundaries of lawful speech as we know them. By being asked to remove content that is identical or analogous, intermediaries will be making, in all cases other than the removal of mere copies of posts already found illegal, active determinations on the legality of third-party content. While the re-upload of illegal content is an issue of growing importance that needs to be addressed, consideration needs to be paid to whether this would be an appropriate measure, as solutions borrowed from other fields like copyright protection can sit at odds with the specificities of content regulation and infringe on European and international standards for the protection of freedom of expression online.

Similarly, by granting an extraterritorial injunction in this case the Court would follow a trend that has been emerging in the last few years in privacy and data protection. Thanks to its extraterritorial reach, the GDPR is rapidly becoming a global regulatory benchmark for the processing of personal data, which arguably benefits European Union citizens and protects their fundamental rights. The same would not be true if this rationale were applied to standards of legitimate political speech. It is questionable whether the EU (or any other jurisdiction) has any interest in setting a global regulatory benchmark for content regulation. Restricting the accessibility of content beyond the national boundaries where the original dispute took place would restrict other citizens’ right to receive information without granting any substantive benefit, such as the protection of public security, to the citizens of the first state.

Barnard & Peers: chapter 9
Photo credit: Slate magazine


[1] L. Karamanidou (2016) ‘Violence against migrants in Greece: beyond the Golden Dawn’, Ethnic and Racial Studies, 39:11, 2002-2021; D. Skleparis (2016) ‘(In)securitization and illiberal practices on the fringe of the EU’, European Security, 25:1, 92-111.

Friday, 21 September 2018

Crushing terrorism online – or curtailing free speech? The proposed EU Regulation on online terrorist content




Professor Lorna Woods, University of Essex

On 12th September 2018, the Commission published a proposal for a regulation (COM(2018) 640 final) aiming to require Member States to oblige certain internet intermediaries to take proactive, if not pre-emptive, action against terrorist content online, as well as to ensure that state actors have the necessary capacity to take action against such illegal content. It is described as “[a] contribution from the European Commission to the Leaders’ meeting in Salzburg on 19-20 September 2018”. The proposal develops existing voluntary frameworks and partnerships, for example the EU Internet Forum, the non-binding Commission Recommendation on measures to effectively tackle illegal content online (C(2018) 1177 final, 1st March 2018) and the earlier Communication on tackling illegal content online (COM(2017) 555 final). In moving from non-binding to legislative form, the Commission is stepping up action against such content; this move may also be seen as part of a general tightening of requirements for Internet intermediaries, also visible in the video-sharing platform provisions of the revised Audiovisual Media Services Directive and in the proposals regarding copyright. Since the proposal has an “internal market” legal base, it would apply to all Member States.

The Proposal

Article 1 of the proposed Regulation sets out its subject matter, including its geographic scope. The proposed regulation is directed at certain service providers (“hosting service providers”) in respect of specified content (“illegal terrorist content”). Terms are defined in Article 2. Article 2(1) defines a “hosting service provider” (HSP) as “a provider of information society services consisting in the storage of information provided by and at the request of the content provider and in making the information stored available to third parties”. Article 2(5) defines illegal terrorist content as one (or more) of the following types of information:

(a) inciting or advocating, including by glorifying, the commission of terrorist offences, thereby causing a danger that such acts be committed;
(b) encouraging the contribution to terrorist offences;
(c) promoting the activities of a terrorist group, in particular by encouraging the participation in or support to a terrorist group within the meaning of Article 2(3) of Directive (EU) 2017/541
(d) instructing on methods or techniques for the purpose of committing terrorist offences.

The format does not matter: thus terrorist content can be found in text, images, sound recordings and videos.

Article 3 specifies the obligations of the HSPs. In addition to a specific obligation to prohibit terrorist content in their terms and conditions, HSPs are obliged to take appropriate, reasonable and proportionate actions against terrorist content, though those actions must take into account fundamental rights, specifically freedom of expression.

Article 4 introduces the idea of a removal order, and requires that the competent authorities of the Member States be empowered to issue such orders; requirements relating to removal orders are set out in Article 4(3). It does not seem that the issuing of such orders requires judicial authorisation, though the Regulation does envisage mechanisms for HSPs or the “content provider” to ask for reasons; HSPs may also notify the issuing authority when they view the order as defective (on the basis set out in Article 4(8)), or notify the issuing authority of force majeure. Article 4(2) states:

Hosting service providers shall remove terrorist content or disable access to it within one hour from receipt of the removal order.

The regulation also envisages referral orders; these do not necessitate the removal of content, nor – unlike removal orders – do they come with a specified deadline for action. On receipt of a referral order, an HSP should assess the notified content for compatibility with its own terms and conditions, and it is obliged to have in place a system for carrying out such assessments. There is also an obligation in Article 6 for HSPs, in appropriate circumstances, to take (unspecified) effective and proportionate proactive measures and to report on those measures. Article 6 also envisages the possibility that competent authorities may – in certain circumstances – require a hosting service provider to take specified action.

Article 7 requires hosting service providers to preserve data for certain periods. The hosting service provider is also required to produce transparency reports and to operate within certain safeguards specified in Section III, including human oversight of decisions, complaints mechanisms and the provision of information to content providers – these are important safeguards to ensure that content is not removed erroneously. Section IV deals with cooperation between the relevant authorities and with the HSPs. Cooperation with European bodies (e.g. Europol) is also envisaged. As part of this, HSPs are to establish points of contact.

The Regulation catches services based in the EU, but also those outside it which provide services in the EU; jurisdiction in relation to Articles 6 (proactive measures), 18 (penalties) and 21 (monitoring) lies with the Member State in which the provider has its main establishment. Providers established outside the EU must designate a legal representative, and the Member State in which that representative is based has jurisdiction (for the purposes of Articles 6, 18 and 21). Failure to designate a representative means that all Member States would have jurisdiction. Note that, as the legal form of the proposal is a Regulation, national implementing measures would not generally be required.

Member States are required to designate competent authorities for the purposes of the regulation, and also to ensure that penalties are available in relation to specified articles; such penalties must be effective, proportionate and dissuasive. The Regulation also envisages a monitoring programme in respect of action taken by the authorities and the HSPs. Member States are to ensure that their competent authorities have the necessary capacity to tackle terrorist content online.

Preliminary Comments

The proposal comes in addition to the Terrorism Directive, the implementation date for which is September 2018. That directive includes provisions requiring the blocking and removal of content; is the assumption that – even before they are legally required to be in place – those provisions are seen as ineffective?

This is also another example of what seems to be a change in attitude towards intermediaries, particularly those platforms that host third-party content. Rather than the approach of the early 2000s – exemplified in the e-Commerce Directive safe harbour provisions – that these providers are, and to some extent should be expected to be, content-neutral, it now seems that they are being treated as a policy tool for reaching content viewed as problematic. From the definition in the Regulation, it seems that some of the HSPs could have – provided they were neutral – fallen within the terms of Article 14 e-Commerce Directive: they are information society service providers that provide hosting services. The main body of the proposed regulation does not deal with the priority of the respective laws, but in terms of the impact on HSPs the recitals claim:

“any measures taken by the hosting service provider in compliance with this Regulation, including any proactive measures, should not in themselves lead to that service provider losing the benefit of the liability exemption provided for in that provision.  This Regulation leaves unaffected the powers of national authorities and courts to establish liability of hosting service providers in specific cases where the conditions under Article 14 of Directive 2000/31/EC for liability exemption are not met”.

This reading in of what is effectively a good Samaritan saving clause follows the approach that the Commission had taken with regard to its recommendation – albeit in that instance without any judicial or legislative backing.  Here it seems that the recitals of one instrument (the Regulation) are being deployed to interpret another (the e-Commerce Directive). 

The recitals here also specify that although Article 3 puts HSPs under a duty of care to take proactive measures, this should not constitute ‘general monitoring’; such general monitoring is precluded by Article 15 e-Commerce Directive. How this boundary is to be drawn remains to be seen, especially as the regulation envisages prevention of uploads as well as swift take-downs. Further, recital 19 also recognises that

“[c]onsidering the particularly grave risks associated with the dissemination of terrorist content, the decisions adopted by the competent authorities on the basis of this Regulation could derogate from the approach established in Article 15(1) of Directive 2000/31/EC, as regards certain specific, targeted measures, the adoption of which is necessary for overriding public security reasons”.

This is a new departure in the interpretation of Article 15 e-Commerce Directive.

The Commission press release suggests the following could be caught: social media platforms, video streaming services, video, image and audio sharing services, file sharing and other cloud services, and websites where users can make comments or post reviews. There is a limitation in that the content hosted should be made available to third parties. Does this mean that if no one other than the content provider can access the content, the provider is not an HSP? This boundary might prove difficult in practice. The test does not seem to be one of public display, so services where users who are content providers can choose to let others have access (even without the knowledge of the host) might fall within the definition. What would be the position of a webmail service where a user shared his or her credentials so that others within that closed circle could access the information? Note that the Commission also envisages that services whose primary purpose is not hosting but which allow user-generated content – e.g. a news website or even Amazon – fall within the definition.

The scope of the HSP definition is broad and may to some extent overlap with that of video-sharing platforms or even audiovisual media service providers for the purposes of the Audiovisual Media Services Directive (AVMSD). Priorities and conflicts will need to be ironed out in that respect. The second element of this breadth is that the HSP provisions apply not just to the big companies, the ones to some extent already cooperating with the Commission, but also to small companies. In the view of the Commission, terrorist content may be spread just as much by small platforms as by large ones. Similar to the approach in the AVMSD, the Commission claims that the regulatory burden will be proportionate, since the level of risk as well as the provider’s economic capabilities would be taken into account.

In line with the approach in other recent legislation (e.g. the GDPR and the video-sharing platform provisions in the AVMSD), the proposal has an extraterritorial dimension. HSPs would be caught if they provide a service in the EU. The recitals clarify that “the mere accessibility of a service provider’s website or of an email address and of other contact details in one or more Member States taken in isolation should not be a sufficient condition for the application of this Regulation” [rec 10]; instead a substantial connection is required [rec 11]. Whether this will have a black-out effect similar to the GDPR remains to be seen; it may depend on how aware the operator is of the law, how central the hosting element is, and how large a part of its operations the EU market represents.

While criminal law is, in principle, a matter for Member States, the definition of terrorist content relies on a European definition – though whether this definition is ideal is questionable. For companies that operate across borders, this is presumably something of a relief (and, as noted above, the proposal is based on Article 114 TFEU, the internal market harmonisation power). The Commission also envisages this as a mechanism limiting the possible scope of the obligations – only material that falls within the EU definition falls within the scope of this obligation – thereby minimising the impact on freedom of expression (Proposal p. 8). Whether national standards will consequently be precluded is a different question. Note that the provisions in the AVMSD that focus on video-sharing platforms were originally envisaged as maximum harmonisation but, as a result of amendments from the Council, returned to minimum harmonisation (the Council amendments also introduced provisions on terrorist content into the AVMSD based on the same definition).

The removal notice is a novelty aimed at addressing differential approaches across the Member States in this regard (an ongoing problem within the safe harbour provisions of the e-Commerce Directive), but also at ensuring that such take-down requests are enforceable. Note, however, that it is up to each Member State to specify the competent authorities, which may give rise to differences between the Member States, perhaps also indicating differences in approach. The startling point is probably the very short timescale: one hour (a complete contrast to the timing specified, for example, in the UK’s Terrorism Act 2006). The removal notices have been a source of concern. One hour is not very long, which means that – especially with non-domestic providers and taking into account time differences – HSPs will need to think about how to staff such a requirement (unless they plan to automate their responses to notices), especially if the HSP hopes to challenge ‘unsatisfactory’ notices (Art 4(8)).

Given the size of the penalties in view, industry commentators have suggested that all reported content will simply be taken down. This would certainly be a concern in situations where HSPs had to identify terrorist content themselves (i.e. ascertain not just that it was in a certain location but also that it met the legal criteria). It is not clear, however, that this criticism is fully appropriate here. HSPs are not having to decide whether or not the relevant content is terrorist: the notice makes that choice for them. Further, the notice is made not by private companies with a profit agenda but by public authorities (presumably) orientated to the public good and with some experience in the topic as well as in legal safeguards. Furthermore, the authority must include reasons. Indeed, the Commission is of the view that the fact that referrals are limited to competent authorities, which will have to explain their decisions, ensures the proportionality of such notices (Proposal p. 8). Nonetheless, a one-hour time frame is very short.

Another ambiguity arises in the context of referral notices. It seems that the objective here is to put the existing voluntary arrangements on a statutory footing, but with no obligation on the HSP to take the content down within a specified period. Rather, the HSP is to assess whether the content referred is compatible with the HSP’s terms of service (not whether the content is illegal terrorist content). Note this is different from the situation where the HSP discovers the content itself and there has been no official view as to whether the content falls within the definition of terrorist content. This seems rather devoid of purpose: either the relevant authorities have decided that the content is a problem (in which case the removal notice seems preferable, as the decision is made by competent authorities rather than private companies), or the notice refers to content which is not quite bad enough to fall within the content prohibited by the regulation but which the relevant authorities would still like taken down, with the responsibility for that decision being pushed on to the HSP. Such an approach seems undesirable.

Article 6 requires HSPs to take effective proactive measures. These are not specified in the Regulation, and may therefore allow HSPs some leeway to take measures that seem appropriate in the light of each HSP’s own service and priorities, though there may also be concerns about the HSPs’ interpretation of relevant terrorist content. It is perhaps here that criticisms about the privatisation of the fight against terror come to the fore. Note, however, that Article 6(4) allows a designated authority to impose measures specified by the authority on the HSP. Given that this is dealt with at the national level, some fragmentation across the EU may arise; there seems to be no cooperation mechanism or EU coordination of responses under Article 6(4).

There is also the question of freedom of expression. Clearly state-mandated removal of content should be limited, but is it the intention that HSPs have no freedom to remove objectionable content for other reasons? At some points, the recitals suggest precisely this: “hosting service providers should act with due diligence and implement safeguards, including notably human oversight and verifications, where appropriate, to avoid any unintended and erroneous decision leading to removal of content that is not terrorist content” [rec 17]. Presumably the intention is that HSPs should take steps to avoid mistakenly considering content to be terrorist; they clearly remain under obligations to take other forms of content down, e.g. child pornography and hate speech.

More questionable is the position with regard to other types of content: the controversial and the objectionable, for example. As private entities, HSPs are not bound by human rights obligations in the same way as States, so there may be questions about the extent to which a content provider can claim freedom of expression against an unwilling HSP (e.g. on Mastodon, the different instances have different community standards set up by each community – should those communities not be entitled to enforce those standards (provided that they are not themselves illegal)?). There may moreover be differences between the various Member States as to how such human rights have horizontal effect and the deference given to contractual autonomy. With regard to video-sharing platforms, it seems that room is given to the platforms to enforce higher standards if they so choose; there is no such explicit provision here.

A final point to note is the size of the penalties that are proposed.  The proposal implicitly distinguishes between one-off failings and a ‘systematic failure to comply with obligations’.  In the latter case, penalties of up to 4% of global turnover may be imposed; in this there are similarities to the scale of penalties under the GDPR.  This seems to be developing into a standard approach in this sector.

Barnard & Peers: chapter 25, chapter 9
JHA4: chapter II:5
Photo credit: Europol

Thursday, 19 January 2017

When is Facebook liable for illegal content under the E-commerce Directive? CG v. Facebook in the Northern Ireland courts



Lorna Woods, Professor of Internet Law, University of Essex

Introduction

The ubiquity of social media platforms and their significance in disseminating information (true or false) to potentially wide groups of people was highly unlikely to have been in the minds of the European legislators when they agreed, in 2000, the e-Commerce Directive (Directive 2000/31/EC) (ECD). Facebook itself was launched only in 2004. Despite the changing times and technological capabilities, the Commission has decided not to revise the ECD, specifically its safe harbour provisions for intermediaries, in its current Digital Single Market programme.  Although the ECD seems set to remain unchanged, the application of the safe harbour provisions raises many difficult questions which have not yet been fully answered at EU level by the Court of Justice. CG v. Facebook ([2016] NICA 54), a decision of the Northern Irish Court of Appeal, illustrates some of these difficulties and certainly raises questions about the proper interpretation of the ECD and its relationship with the Data Protection Directive.

Intermediary Immunity - Legal Framework

The ECD provides immunity from liability for certain ‘information society service providers’ (ISS providers) on certain conditions.  To gain immunity, the ISS provider must

-          be an ISS provider within the terms of the ECD; and
-          satisfy one of the following conditions:
-          it is a ‘mere conduit’ (Art. 12 ECD);
-          it provides caching services (Art. 13 ECD); or
-          it provides hosting services (Art. 14 ECD).

Each one of these three categories provides for a different level of immunity, which seems connected with the level of knowledge the ISS provider is assumed to have of the problematic content. Here Article 14, which deals with hosting, is the relevant provision. It provides:

1. Where an information society service is provided that consists of the storage of information provided by a recipient of the service, Member States shall ensure that the service provider is not liable for the information stored at the request of a recipient of the service, on condition that:
(a) the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or
(b) the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.
2. Paragraph 1 shall not apply when the recipient of the service is acting under the authority or the control of the provider.
3. This Article shall not affect the possibility for a court or administrative authority, in accordance with Member States' legal systems, of requiring the service provider to terminate or prevent an infringement, nor does it affect the possibility for Member States of establishing procedures governing the removal or disabling of access to information.

The recitals to the ECD give more detail as to the scope of services protected by Article 14 and there is a certain amount of case law on this point, notably Google Adwords (Case C-236/08) and the Grand Chamber decision in L’Oreal v. eBay (Case C-324/09). Recital 42 has been pointed to by the Court in these cases as relevant for understanding the sorts of activities protected by the immunity. Recital 42 refers to services of a

mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored.

The ECJ in Google Adwords referred to this as the provider being ‘neutral’ (para 113-4). The Grand Chamber in its subsequent L’Oreal decision suggested that giving advice on optimising presentation would mean a provider was no longer neutral (para 114).

The provision protects relevant ISS providers from liability in relation to illegal content, provided they have no knowledge (actual or constructive) of the illegal activity or information, and that, if they have such knowledge, they have acted expeditiously to remove it. In L'Oreal v eBay the Court of Justice provided a standard or test by which one can measure whether or not a website operator could be said to have acquired an 'awareness' of an illegal activity or illegal information in connection with its services, namely whether "a diligent economic operator would have identified the illegality and acted expeditiously".   The CJEU also held that an awareness of illegal activities or information may become apparent as the result of an investigation by the operator itself or where the operator receives notification of such activity.  Article 14 does not protect ISS providers from injunctions, or the costs associated with any such injunctions (see Recital 45).

Additionally, Article 15 specifies that, for those falling within Articles 12-14, Member States cannot impose a ‘general obligation’ to monitor content in order to determine whether it is illegal. There has been a considerable amount of dispute as to the relationship between this provision and the scope of immunity, especially given the requirements in L’Oreal.  Recital 40 notes that ‘service providers have a duty to act, under certain circumstances, with a view to preventing or stopping illegal activities’ and that the immunity provisions ‘should not preclude the development and effective operation, by the different interested parties, of technical systems of protection and identification and of technical surveillance instruments made possible by digital technology’. The Recitals also state:

(47) Member States are prevented from imposing a monitoring obligation on service providers only with respect to obligations of a general nature; this does not concern monitoring obligations in a specific case and, in particular, does not affect orders by national authorities in accordance with national legislation.

(48) This Directive does not affect the possibility for Member States of requiring service providers, who host information provided by recipients of their service, to apply duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities.

The distinction between general monitoring and specific monitoring has yet to be fully elaborated, and is an issue much discussed in the context of intellectual property enforcement, especially as regards keeping pirated copies of materials down after they have first been taken down.

Facts of CG

McCloskey opened a Facebook page in August 2012 entitled ‘Keeping Our Kids Safe from Predators’ on which he published details of individuals who had criminal convictions relating to sexual offences involving children.  This page was not subject to any privacy settings.  One individual who was so named brought an action against Facebook and an interim injunction was issued requiring Facebook to remove the page and related comments, on the basis that the comments responding to the posting were threatening, intimidatory, inflammatory, provocative, reckless and irresponsible. This was the XY litigation. Immediately after the page was removed, McCloskey set up a new page, Predators 2. CG was identified on this page on 22 April 2013; his photograph was published and there were discussions about where he lived. Comments included abusive and violent language, including support for those who would commit violence against CG and for the exclusion of CG from the community in which he lived.  The disclosure of CG’s residence was contrary to the position taken by the Public Protection Arrangements in Northern Ireland (PPANI), which took the view that such disclosure interferes with the rehabilitation process.

On 26th April 2013, CG’s solicitors wrote to Facebook and its solicitors in Northern Ireland, claiming that the material was defamatory and that CG’s life was at risk. A hard copy of the Predators 2 page was enclosed. Facebook’s response was that CG should use the online reporting tool, but CG expressed a desire not to have to engage with Facebook. By 22 May 2013 Facebook had removed all postings on Predators 2, but on 28 May CG issued proceedings. Subsequently, CG’s solicitors wrote to Facebook complaining that the photograph had been shared 1622 times and that other Facebook users had included comments threatening violence. They identified the main URL, but not all instances of the material, which Facebook then requested. This information was provided on 3rd and 4th December and the material was removed on 4th or 5th December. A further reposting of the photograph by RS occurred on 23 December, stating that this was what a “pedo” looked like. A letter of claim was sent to Facebook on 8th January 2014, identifying the relevant URLs, and the page was taken down on 22 January 2014.  While CG accepted that the defamation claim was without merit, it was accepted that he was extremely concerned about potential violence as well as the effect on his family.

Judgment at First Instance

The trial judge had to deal with claims against McCloskey, as well as claims against Facebook.  The trial judge, having reviewed the evidence, concluded that McCloskey’s conduct constituted harassment of CG. The case against Facebook was based on the tort of misuse of private information. To find that there had been such misuse, there had to be a reasonable expectation of privacy in relation to the relevant information, an assessment which should take into account all the circumstances (relying on JR38 [2015] UKSC 42 and Murray v. Express Newspapers [2008] EWCA Civ 446). The judge also accepted the submission that the Data Protection Act, and specifically the category of ‘sensitive data’, provided a useful touchstone as to what information could be seen as private (see Green Corns Ltd v. Claverly Group Limited [2005] EWHC 958). The judge concluded that the use of a photograph or name in conjunction with information which could identify where CG lived, and any information about his family members, constituted private information. The judge considered that Facebook was put on notice of the problematic nature of the material by the XY litigation (which mentioned the Predators 2 page) and that simple searches would reveal the page, as it had an almost identical name with identical purposes. The trial judge concluded that it was apparent on the face of the posts that consideration of the lawfulness of the posts was needed. As regards the Electronic Commerce (EC Directive) Regulations 2002, which implement the ECD in the UK, the judge rejected the contention that there was an obligation to give Facebook notice in a particular form. So, neither the ECD nor the 2002 Regulations protected Facebook from the claim of misuse of private information.

A further claim under the Data Protection Act was added late in the day. The judge concluded that, in the absence of relevant discovery, CG had not established this claim. Facebook appealed. CG also appealed as regards the data protection point, but did not pursue it.

Court of Appeal Judgment

The Court noted that there was agreement that McCloskey’s behaviour was unreasonable conduct sufficient to give rise to criminal liability (R v Curtis [2010] EWCA 123), and that the 2002 Regulations do not cover injunctions. The Court agreed that this was an appropriate case in which to make an order to take down the material to protect CG from continued intimidation [para 40]. The Court noted that the tort of misuse of private information and harassment, while complementary, are not the same and that a finding of harassment did not automatically mean that there had been a misuse of private information.

As regards the tort, the Court noted that there was no dispute between the parties that this case was about an intrusion, but that the tort would come into play only if there was a reasonable expectation of privacy in the information, which is a fact sensitive determination.  The Court of Appeal noted the public interest in knowing about criminal convictions; it also disagreed with the trial court judge about the reading across of the categories of sensitive information in the DPA. It held:

‘The fact that the information is regulated for that purpose does not necessarily make it private’ [para 45].

Reviewing the material, the Court held that the context of harassment was determinative of the finding that CG had a reasonable expectation of privacy in the material [para 49]. By contrast, RS was protected by principles of open justice which allow citizens ‘to communicate the decisions of the criminal justice systems to others’ and therefore CG did not have a reasonable expectation of privacy in relation to that posting [para 51].

The Court then considered whether Facebook could rely on the safe harbour provisions of the ECD and the 2002 Regulations. It held that the 2002 Regulations need to be understood in the light of Art 15 ECD, even though that provision is not formally implemented in the UK. According to the Court, Article 15 ‘clearly’ applied to Facebook [para 52]. While not expressly stated, the Court’s approach is based on the assumption that Article 14 (the safe harbour provision for those providing hosting services) and Regulation 19 of the 2002 Regulations, which implements it, also apply.

The Court then considered the issue of notice. Facebook argued that CG had not given proper notice, on the basis that CG had not used Facebook’s online submission process. The Court of Appeal agreed with the trial court’s dismissal of this argument, stating, ‘[a]ctual knowledge is sufficient however acquired’ [para 58]. Facebook challenged the approach taken at first instance, namely that Facebook had the resources to find the material and assess it [High Court, para 61].  It was also argued that the way the High Court approached the question of constructive knowledge implied a monitoring obligation. The trial judge had relied on the XY litigation; on that litigation together with the letters from CG’s solicitors; and on the litigation together with some elementary investigation of the profile. The Court of Appeal agreed with these concerns.  It stated the question as being:

Whether Facebook had actual knowledge of the misuse of private information … or knowledge of facts and circumstances which made it apparent that the publication of the information was private

before commenting that

[t]he task would, of course, have been different if there had been a viable claim in harassment made against Facebook [para 62].

It did not elaborate the basis or extent of the difference.

The Court concluded that the XY litigation did not fix Facebook with sufficient notice; it could only do so if Facebook were subject to a monitoring obligation. In any event, knowledge of a propensity to harass did not fix Facebook with notice about the private information. As regards the correspondence, the Court held that this too was insufficient to fix Facebook with notice. While it referred to the problematic content, it did not refer to misuse of privacy. ‘The correspondence did not, therefore, provide actual notice of the basis of claim which is now advanced’ [para 64]. The Court also considered that there was nothing in the letters to indicate that the information was private. So, while ‘the omission of the correct form of legal characterisation of the claim ought not to be determinative of the knowledge and facts and circumstances which fix social networking sites such as Facebook with liability’, it is necessary to identify ‘a substantive complaint in respect of which the relevant unlawful activity is apparent’.

Here, since there was no indication in the letter of claim that the address was the issue, the Court did not ‘consider that the correspondence raised any question of privacy in respect of the material published’ [para 69]. By contrast, in the letter of 26th November, CG referred to the general identification of where CG was living and the threat from paramilitaries. This was sufficient to establish knowledge of facts and circumstances in relation to that particular post. Referring to the Court of Justice in L’Oreal, the Court noted that Facebook is obliged to act as a diligent economic operator. This point was not argued; Facebook was found to be liable in respect of that post for the period 26th November to 4/5 December.

The burden of proof is in the first instance on the claimant to show knowledge; thereafter the ISS provider must prove it did not have it.

As regards the DPA, it was agreed that the Predators page contained personal data and sensitive personal data; the issue was whether Facebook Ireland could be seen as subject to the UK DPA.  The ECJ rulings in Google Spain (Case C-131/12) and Weltimmo (Case C-230/14) were argued before the Court. The Court did not accept the submission that Google Spain was limited to its particular facts, given the concern that the protection offered by the Data Protection Directive would be undermined if data controllers established outside the EU were excluded. The Court noted that Weltimmo in fact built on the approach in Google Spain. It concluded that Facebook is a data controller established in the UK for the purposes of the DPA.  Although the Court accepted that the ECD does not cover data protection, and this is reflected in Regulation 3 of the 2002 Regulations, the Court held at para 95:

The starting point has to be the matter covered by the e-Commerce Directive which is the exemption for information society services from the liability to pay damages in certain circumstances … We do not consider that this is a question relating to information society services covered by the earlier Data Protection Directive and accordingly do not accept that the scope of the exemption from damages is affected by those Directives.

Comment

This case is one of a number coming through the Northern Irish court system regarding different types of problematic content and the responsibility of social media platforms to take action against such content.  Shortly before this case, the High Court handed down its decision in J20 v Facebook Ireland Ltd ([2016] NIQB 98). Other cases are working their way through the system: AY v Facebook (Ireland) Ltd ([2016] NIQB 76), concerning naked images of a schoolgirl on a ‘shame page’; MM v BC, RS and Facebook ([2016] NIQB 60), concerning revenge porn; and Galloway v Frazer and Google t/a YouTube ([2016] NIQB 7), concerning defamatory and harassing videos.  While this case is based in the particular cultural and legal context of Northern Ireland, and raises questions on the meaning of private information, it also leads to questions about the interpretation of EU laws, notably the ECD and DPD.

The first point to note is that the Court does not directly address the question of the applicability of Articles 14 and 15 ECD, beyond stating that Article 15 clearly applies. Article 15 is dependent on the ISS provider providing services that fall within one of Articles 12, 13 or 14 ECD, with Article 14 being relevant here. So the question is whether Article 14 ECD (and consequently Regulation 19 of the 2002 Regulations) applies here. While the text of Article 14 ECD refers to ‘the storage of information provided by a recipient of the service’, the case law makes it clear that not any storage will do. Rather, the service provider’s role must be neutral as regards the content: technical and passive.  In this regard, the services Facebook provides regarding information of interest to Facebook users (the News Feed and content recommendation algorithms, as well as Ad Match services) may mean that the question of neutrality and passivity here is at least worthy of investigation, in that Facebook may promote certain content (in the terms of L’Oreal, para 114). Of course, in Netlog (Case C-360/10) the Court of Justice held that a social media platform could benefit from Article 14, but this does not mean that all will; much will depend on the facts (see e.g. Commission 2012 Working Paper on trust in the digital single market (SEC(2011) 1641 final, accompanying COM(2011) 942 final)).

Assuming Article 14 (and its UK equivalent, Regulation 19) applies, the next question is whether Facebook was on notice.  The ECD is silent on the nature of any formalities, leaving it to Member States and industry (via self-regulation per Recital 40) to fill in the detail.  In its 2012 Working Paper, the Commission acknowledged that there were diverging views as to what notice required, ranging from those who argued that nothing less than a court order should be accepted (seemingly thereby focussing on just actual knowledge) through to those who suggested that general awareness of the use of the site for illegal content was sufficient (which covers constructive knowledge) (p. 33-34). It seems there are three main issues here:

- Whether notice has to be given in any particular format;
- Whether notice has to identify the illegality or whether identifying the problematic content will do; and
- The relationship between constructive notice and Article 15, also bearing in mind the obligations of the diligent economic operator.

Facebook argued, of course, that a person complaining about content should use the tools provided by Facebook and provide rather precise information.  The Court, rightly, held that to require a particular format to be used would run counter to the aim (particularly with reference to the 2002 Regulations) of facilitating the ability of users to make complaints. The position of the Court with regard to the need to provide URLs is less clear. The need to provide specific URLs makes matters difficult for claimants, especially those who seek orders for content to be taken down and to stay down (seen particularly in the field of intellectual property enforcement, for example even in L’Oreal). In this case, where the Court found Facebook liable, CG had provided specific URLs, but the Court is silent on whether the lack of specific URLs was a determinative factor in the other instances.  It is submitted that, provided sufficient identifying information about the content is given, precise URLs should not be required, especially for a diligent economic operator (discussed below).

The Court focussed on the question of whether CG sufficiently identified the reason why the content was illegal. Here, the Court observes that the omission of the correct legal characterisation is not determinative; to have held to the contrary would undermine the ability of claimants without lawyers to have material taken down. The Court moves on to suggest that the relevant unlawful activity has to be apparent. It does not consider to whom such unlawfulness must be apparent, or indeed the prior question of whether the ECD requires just notification of content or activity perceived as illegal by the complainant, rather than a justification of why the complainant thinks that. While on the facts of this case there are concerns that CG referred to causes of action that were clearly wrong (e.g. defamation), it is arguable that the Court’s position needs further refinement. Certainly the Court’s approach on this aspect seems generous to Facebook in terms of what it needs to be told.

In this regard a number of comments can be made.  While an operator would need to make an assessment about the legitimacy of a take-down request, that is a separate issue from the fact of being notified that someone thinks some content is problematic. Further, there may be a world of difference between what a man on the street might so recognise and that which the diligent economic operator should recognise, and the detail required for each. Indeed, in L’Oreal, the ECJ held:

although such a notification admittedly cannot automatically preclude the exemption from liability provided for in Article 14 of Directive 2000/31, given that notifications of allegedly illegal activities or information may turn out to be insufficiently precise or inadequately substantiated, the fact remains that such notification represents, as a general rule, a factor of which the national court must take account when determining, in the light of the information so transmitted to the operator, whether the latter was actually aware of facts or circumstances on the basis of which a diligent economic operator should have identified the illegality (para 121-2).

This suggests that a diligent economic operator may not just rely on what a complainant said, but may have to take steps to fill in the blanks.  As the Commission reported in 2012, it has been suggested by some that the degree to which it is obvious that the activity or information is illegal should play a role in this assessment.  Some content is more obviously problematic than other content. This position is not incompatible with the approach of the Court here: the problem for CG is that an address is not usually that problematic in privacy terms; it was the context (not apparent on the face of it) that made it so [para 69].  This distinction may have relevance for the AY litigation, if not the revenge porn case, depending on the nature of the images.

The final point of concern relates to general monitoring. The rejection by the Court of the possibility of becoming aware of a particular type of content (as from the XY litigation) and being on notice as a consequence deserves further examination. This depends on what is meant by ‘general monitoring’ as opposed to a ‘specific’ monitoring obligation, accepted by recital 47 ECD and recognised by the Commission in its 2012 Working Paper (p. 26).  It is unfortunate that the Court did not give this more attention. While the case law has made clear that filtering of all content, for example, constitutes general monitoring (SABAM v Scarlet (Case C-70/10)), it has been argued, principally in the context of IP enforcement, that searching for a particular (re-occurring) instance of content does not.  Such a broad view of general monitoring as the Court here adopted also seems to decrease the space in which the diligent economic operator acts, raising questions about the meaning of L’Oreal.  Note also that the Commission in its recent review noted that ‘there are important areas such as incitement to terrorism, child sexual abuse and hate speech on which all types of online platforms must be encouraged to take more effective voluntary action to curtail exposure to illegal or harmful content’ (COM/2016/0288 final).  This suggests that the Commission may expect such platforms to be proactive and not merely reactive. 

Perhaps the most significant point, and one on which a reference should perhaps have been made, is the relationship between the ECD and DPD, a point not yet dealt with in English law (see Mosley v Google [2015] EWHC 59 (QB)).  The Court accepted fairly readily that Facebook (Ireland) falls under the UK DPA, but then insisted that, despite the fact that data protection is excluded from the field of application of the ECD, Facebook pages and comments fell within the “matter covered by the e-Commerce Directive”, which provides a “tailored solution for the liability of [ISS providers] in the particular circumstances” set out in the ECD. It did not explain why, beyond asserting that the ECD safe harbour provisions do ‘not interfere with any of the principles in relation to the processing of personal data, the protection individuals ... or the free movement of data’ [para 95]. In this assessment, the Court overlooked the fact that under the DPD a remedy must be provided to individuals so as to make their rights effective, and that the protection awarded to data subjects should not vary depending on the mechanism used for that processing.  Furthermore, Recital 14 to the ECD elaborates that

The protection of individuals with regard to the processing of personal data is solely governed by Directive 95/46/EC … the implementation and application of this Directive should be made in full compliance with the principles relating to the protection of personal data.

Whilst a Member State may be free to provide more far-reaching protection to intermediaries, this freedom reaches its limit when it conflicts with another harmonised area of EU law, such as data protection. The Court’s position on this point, and especially its reasoning, in the light of the terms of both directives, is not convincing.

In sum, the outcome (liability for Facebook on one aspect of the content posted) looks on the face of it like a narrowing of immunity.  The reality points in a different direction. While there are a number of problematic issues with which the court had to deal, the impact of this judgment lies in the statements of general principle which the Court made. Significantly, these fell into areas ultimately governed by EU law, rather than purely domestic matters.  It is far from certain that those issues are clearly determined at EU level, nor that the Court’s assessment here is free from doubt.


Photo credit: