
Saturday, 20 April 2024

‘Trusted’ rules on trusted flaggers? Open issues under the Digital Services Act regime





Alessandra Fratini and Giorgia Lo Tauro, FratiniVergano European Lawyers

Photo credit:  Lobo Studio Hamburg, via Wikimedia Commons

 

1 Introduction

The EU’s Digital Services Act (DSA) institutionalises the tasks and responsibilities of ‘trusted flaggers’, key actors in the online platform environment that have existed, with roles and functions of variable scope, since the early 2000s. The newly applicable regime fits with the rationale and aims pursued by the DSA (Article 1): establishing a targeted set of uniform, effective and proportionate mandatory rules at Union level to safeguard and improve the functioning of the internal market (recital 4 in the preamble), with the objective of ensuring a safe, predictable and trusted online environment, within which fundamental rights are effectively protected and innovation is facilitated (recital 9), and for which responsible and diligent behaviour by providers of intermediary services is essential (recital 3). This article, after retracing the main regulatory initiatives and practices at EU level that paved the way for its adoption, looks at the DSA’s trusted flaggers regime and at some open issues that remain to be tested in practice.

 

2 Trusted reporters: the precedents paving the way to the DSA

The activity of flagging can be generally understood as that of third parties reporting harmful or illegal content to the intermediary service providers hosting that content, so that those providers can moderate it. In general terms, it refers to flaggers that have “certain privileges in flagging”, including “some degree of priority in the processing of notices, as well as access to special interfaces or points of contact to submit their flags”. This, in turn, raises issues in terms of both the flaggers’ responsibility and their trustworthiness since, as rightly noted, “not everyone trusts the same flagger.”

In EU law, the notion of trusted flaggers can be traced back to Directive 2000/31 (the ‘e-Commerce Directive’), the foundational legal framework for online services in the EU. The Directive exempted intermediaries from liability for illegal content they handled if they fulfilled certain conditions: under Articles 12 (‘mere conduit’), 13 (‘caching’) and 14 (‘hosting’) – now replaced by Articles 4-6 DSA – intermediary service providers were liable for the information stored at the request of the recipient of the service if, once they became or were made aware of any illegal content, that content was not removed or access to it was not disabled “expeditiously” (also recital 46). The Directive encouraged mechanisms and procedures for removing and disabling access to illegal information to be developed on the basis of voluntary agreements between all parties concerned (recital 40).

This conditional liability regime encouraged intermediary service providers to develop, as part of their own content moderation policies, flagging systems that would allow them to process notifications rapidly so as not to trigger liability. Such systems were not imposed as such by the Directive, but adopted as a result of the liability regime provided therein.

Following Article 16 of the Directive, which supports the drawing up of codes of conduct at EU level, in 2016 the Commission launched the EU Code of Conduct on countering illegal hate speech online, signed by the Commission and several service providers, with others joining later on. The Code is a voluntary commitment by signatories to, among other things, review the majority of flagged content within 24 hours and remove or disable access to content assessed as illegal, if necessary, as well as to engage in partnerships with civil society organisations, to enlarge the geographical spread of such partnerships and to enable them to fulfil the role of a ‘trusted reporter’ or equivalent. Within the context of the Code, trusted reporters are expected to provide high quality notices, and signatories are to make information about them available on their websites.

Subsequently, in 2017 the Commission adopted the Communication on tackling illegal content online, to provide guidance on the responsibilities of online service providers in respect of illegal content online. The Communication suggested criteria based on respect for fundamental rights and democratic values, to be agreed by the industry at EU level through self-regulatory mechanisms or within the EU standardisation framework. It also recognised the need to strike a reasonable balance between ensuring a high quality of notices coming from trusted flaggers, the scope of the additional measures that companies would take in relation to trusted flaggers, and the burden of ensuring these quality standards, including the possibility of withdrawing the privilege of trusted flagger status in cases of abuse.

Building on the progress made through the voluntary arrangements, the Commission adopted Recommendation 2018/334 on measures to effectively tackle illegal content online. The Recommendation establishes that cooperation between hosting service providers and trusted flaggers should be encouraged, in particular, by providing fast-track procedures to process notices submitted by trusted flaggers, and that hosting service providers should be encouraged to publish clear and objective conditions for determining which individuals or entities they consider as trusted flaggers. Those conditions should aim to ensure that the individuals or entities concerned have the necessary expertise and carry out their activities as trusted flaggers in a diligent and objective manner, based on respect for the values on which the Union is founded.

While the 2017 Communication and the 2018 Recommendation are the foundation of the trusted flaggers regime institutionalised by the DSA, further initiatives took place in the run-up to it.

In 2018, further to extensive consultations with citizens and stakeholders, the Commission adopted a Communication on tackling online disinformation, which acknowledged once again the role of trusted flaggers in fostering the credibility of information and shaping inclusive solutions. Platform operators agreed on a voluntary basis to set self-regulatory standards to fight disinformation and adopted a Code of Practice on disinformation. The Commission’s assessment in 2020 revealed significant shortcomings, including inconsistent and incomplete application of the Code across platforms and Member States and the lack of an appropriate monitoring mechanism. As a result, in May 2021 the Commission issued its Guidance on Strengthening the Code of Practice on Disinformation, containing indications on the dedicated functionality for users to flag false and/or misleading information (p. 7.6). The Guidance also aimed at developing the existing Code of Practice towards a ‘Code of Conduct’ as foreseen in (now) Article 45 DSA.

Further to the Guidance, in 2022 the Strengthened Code of Practice on Disinformation was signed and presented by 34 signatories that had joined the revision process of the 2018 Code. For signatories that are VLOPs, the Code aims to become a mitigation measure and a Code of Conduct recognised under the co-regulatory framework of the DSA (recital 104).

Finally, in the context of provisions/mechanisms defined before the DSA, it is worth mentioning Article 17 of Directive 2019/790 (the ‘Copyright Directive’), which draws upon Article 14(1)(b) of the e-Commerce Directive on the liability limitation for intermediaries and acknowledges the pivotal role of rightholders when it comes to flagging unauthorised use of their protected works. Under Article 17(4), in fact, “[i]f no authorisation is granted, online content-sharing service providers shall be liable for unauthorised acts of communication to the public, including making available to the public, of copyright-protected works and other subject matter, unless the service providers demonstrate that they have: (a) made best efforts to obtain an authorisation, and (b) made, in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information; and in any event (c) acted expeditiously, upon receiving a sufficiently substantiated notice from the rightholders, to disable access to, or to remove from their websites, the notified works or other subject matter, and made best efforts to prevent their future uploads in accordance with point (b)” (emphasis added).

 

3 Trusted flaggers under the DSA

The DSA has given legislative legitimacy to trusted flaggers, granting formal (and binding) recognition to a practice that had so far developed on a voluntary basis.

According to the DSA, a trusted flagger is an entity that has been granted that status within a specific area of expertise by the Digital Services Coordinator (DSC) of the Member State in which it is established, because it meets certain legal requirements. Online platform providers must process and decide upon notices from trusted flaggers concerning the presence of illegal content on their online platform as a priority and without undue delay. This requires that online platform providers take the necessary technical and organisational measures with regard to their notice and action mechanisms. Recital 61 sets out the rationale and scope of the regime: notices of illegal content submitted by trusted flaggers, acting within their designated area of expertise, are to be treated with priority by providers of online platforms.

The regime is mainly outlined in Article 22.

Eligibility requirements

Article 22(2) sets out the three cumulative conditions to be met by an applicant wishing to be awarded the status of trusted flagger: 1) expertise and competence in detecting, identifying and notifying illegal content; 2) independence from any provider of online platforms; and 3) diligence, accuracy and objectivity in how it operates. Recital 61 clarifies that only entities – whether public in nature, non-governmental organisations, or private or semi-public bodies – can be awarded the status, not individuals. Accordingly, (private) entities representing only individual interests, such as brands or copyright owners, are not excluded from accessing the trusted flagger status. However, the DSA displays a preference for industry associations representing their members’ interests applying for the status of trusted flagger, which appears to be justified by the need to preserve the added value of the regime (the fast-track procedure) by keeping the overall number of trusted flaggers awarded under the DSA limited. As clarified by recital 62, the rules on trusted flaggers should not be understood to prevent providers of online platforms from giving similar treatment to notices submitted by entities or individuals that have not been awarded trusted flagger status, or from otherwise cooperating with other entities, in accordance with the applicable law. Nor does the DSA prevent online platforms from using mechanisms to act quickly and reliably against content that violates their terms and conditions.

The status’ award

Under Article 22(2), the trusted flagger status shall be awarded by the DSC of the Member State in which the applicant is established. Unlike the voluntary trusted flagger schemes, which are a matter for individual providers of online platforms, the status awarded by a DSC must be recognised by all providers falling within the scope of the DSA (recital 61). Accordingly, DSCs shall communicate to the Commission and to the European Board for Digital Services details of the entities to which they have awarded the status of trusted flagger (and whose status they have suspended or revoked – Article 22(4)), and the Commission shall publish and keep up to date such information in a publicly available database (Article 22(5)).

Under Article 49(3), Member States were to designate their DSCs by 17 February 2024; the Commission makes the list of designated DSCs available on its website. The DSCs, which are responsible for all matters relating to the supervision and enforcement of the DSA, shall ensure its coordinated supervision and enforcement throughout the EU. The European Board for Digital Services, among other tasks, shall be consulted on the Commission’s guidelines on trusted flaggers, to be issued “where necessary”, and on matters “dealing with applications for trusted flaggers” (Article 22(8)).

The fast-track procedure

Article 22(1) requires providers of online platforms to deal with notices submitted by trusted flaggers as a priority and without undue delay. In doing so, it refers to the generally applicable rules on notice and action mechanisms under Article 16. On the priority to be granted to trusted flaggers’ notices, recital 42 invites providers to designate a single electronic point of contact, which “can also be used by trusted flaggers and by professional entities which are under a specific relationship with the provider of intermediary services”. Recital 62 explains further that the faster processing of trusted flaggers’ notices depends, among other things, on the “actual technical procedures” put in place by providers of online platforms. The organisational and technical measures necessary to ensure a fast-track procedure for processing trusted flaggers’ notices remain a matter for the providers of online platforms.
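Neither Article 22 nor recital 62 prescribes how such fast-track treatment is to be engineered. Purely by way of illustration, and not as a description of any platform’s actual systems, the following Python sketch shows one simple way a provider could give trusted flaggers’ notices priority in its processing queue; all names, fields and priority levels are hypothetical.

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical priority levels: notices from DSA-designated trusted flaggers
# jump ahead of ordinary user notices. The DSA itself does not prescribe any
# particular mechanism; this is only an illustrative sketch.
PRIORITY_TRUSTED_FLAGGER = 0
PRIORITY_STANDARD = 1

@dataclass(order=True)
class Notice:
    priority: int
    sequence: int                           # preserves arrival order within a priority level
    content_url: str = field(compare=False)
    submitted_by: str = field(compare=False)

class NoticeQueue:
    """Illustrative fast-track queue: trusted flaggers' notices are popped first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, content_url, submitted_by, is_trusted_flagger):
        priority = PRIORITY_TRUSTED_FLAGGER if is_trusted_flagger else PRIORITY_STANDARD
        heapq.heappush(
            self._heap,
            Notice(priority, next(self._counter), content_url, submitted_by),
        )

    def next_notice(self):
        # Returns the highest-priority (and, within that, oldest) notice.
        return heapq.heappop(self._heap) if self._heap else None

# Usage: a trusted flagger's notice submitted later is still processed first.
queue = NoticeQueue()
queue.submit("https://example.org/post/1", "end_user_42", is_trusted_flagger=False)
queue.submit("https://example.org/post/2", "hotline_org", is_trusted_flagger=True)
print(queue.next_notice().content_url)  # -> https://example.org/post/2
```

In such a design, a notice submitted later by a trusted flagger would still be processed before earlier notices from ordinary users, while arrival order is preserved within each priority level.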

Activities and ongoing obligations of trusted flaggers

Article 22(3) requires trusted flaggers to publish regularly (at least once a year) detailed reports on the notices they have submitted, to make them publicly available and to send them to the awarding DSC. The status of trusted flagger may be suspended or revoked if the required conditions are not consistently upheld and/or the applicable obligations are not correctly fulfilled by the entity. The status can only be revoked by the awarding DSC following an investigation, opened either on the DSC’s own initiative or on the basis of information received from third parties, including providers of online platforms. Trusted flaggers are thus granted the possibility to respond to, and where possible remedy, the findings of the investigation (Article 22(6)).

On the other hand, if trusted flaggers detect any violation of the DSA provisions by the platforms, they have the right to lodge a complaint with the DSC of the Member State where they are located or established, according to Article 53. Such a right is granted not only to trusted flaggers but to any recipient of the service, to ensure effective enforcement of the DSA obligations (also recital 118).

The role of the DSCs

With the DSA, it becomes mandatory for online platforms to ensure that notices submitted by designated trusted flaggers are given priority. While online platforms retain discretion as to entering into bilateral agreements with private entities or individuals they trust and whose notices they want to process with priority (recital 61), they must give priority to notices from entities that have been awarded trusted flagger status by a DSC. From the platforms’ perspective, the DSA ‘reduces’ their burden in terms of decision-making responsibility by shifting it to the DSCs, but ‘increases’ their burden in terms of executive liability (for the implementation of measures ensuring the mandated priority). From the reporters’ perspective, the DSA imposes a set of (mostly) harmonised requirements to be met in order to be awarded the status by a DSC, once and for all platforms, and to maintain that status afterwards.

While the Commission’s guidelines are in the pipeline, some DSCs have proposed and adopted guidelines to assist potential applicants with the requirements for the award of trusted flagger status. Among others, the French ARCOM published “Trusted flaggers: conditions and applications” on its website; the Italian AGCOM published for consultation its draft “Rules of Procedure for the award of the trusted flagger status under Article 22 DSA”; the Irish Coimisiún na Meán published the final version of its “Application Form and Guidance to award the trusted flagger status under Article 22 DSA”; as did the Austrian KommAustria, the Danish KFST and the Romanian ANCOM. The national guidelines have been developed following exchanges with the other authorities designated (or about to be designated) as DSCs, with a view to ensuring a consistent and harmonised approach to the implementation of Article 22. Indeed, the published guidelines are largely comparable.

 

4 Open issues

While the DSA’s regime is in its early stages and no trusted flagger status has been awarded yet, some of its merits have already been acknowledged: it has standardised existing practices, harmonised eligibility criteria, complemented special regimes – such as the one set out in Article 17 of the Copyright Directive – confirmed the cooperative approach between stakeholders, and formalised the role of trusted flaggers as special entities in the context of notice and action procedures.

At the same time, the DSA’s regime leaves some open issues on the table, notably the respective roles of trusted flaggers and of other relevant actors in tackling illegal or harmful content online – such as end users and reporters that reach bilateral agreements with the platforms – which remain to be addressed in practice if the system is to work effectively.

The role of trusted flaggers vis-à-vis end users

While the DSA contains no specific provision on the role of trusted flaggers vis-à-vis end users, some of the national guidelines published by the DSCs require that the applicant entity, as part of the condition relating to due diligence in the flagging process, indicate whether it has mechanisms in place allowing end users to report illegal content to it. In general, applicants have to indicate how they select the content they monitor (which may include end users’ notices) and how they ensure that they do not unduly concentrate their monitoring on any one side and apply appropriate standards of assessment taking all legitimate rights and interests into account. In practice, the organisation and management of the relationship with end users (onboarding procedures, collection and processing of their notices, etc.) are left to the trusted flaggers. For example, some organisations (such as those belonging to the INHOPE network, operating under the current voluntary schemes) offer hotlines through which the public can report illegal content found online, including anonymously.

Although it is clear from the DSA that end users retain the right to submit notices directly to online platforms (Article 16), with no duty to notify trusted flaggers, as well as the right to autonomously lodge a complaint against platforms (Article 53) and to claim compensation for damages (Article 54), it remains unclear whether, in practice, it will be more convenient for end users to rely on specialised trusted flaggers in order to have their notices processed more expeditiously – in other words, whether the regime provides sufficient incentives, at least for some end users, to go the trusted flaggers’ way. On the other hand, it remains unclear to what extent applicant entities will actually be ‘required’ to put in place effective mechanisms allowing end users to report illegal or harmful content to them – in other words, whether the due diligence requirements will imply the trusted flaggers’ review of end users’ notices, within their area of expertise.

From another perspective, in connection with the reporting of illegal content, trusted flaggers may come across infringements by the platforms, as may any recipient of online services. In such cases, Article 53 provides the right to lodge a complaint with the competent DSC, with no distinction being made between complaints lodged by trusted flaggers and by end users. If ‘priority’ is to be understood as the main feature of the privileged status granted to trusted flaggers when flagging illegal content to platforms, a question arises as to whether they should be granted a corresponding priority before the DSCs when they complain about an infringement by online platforms. In this context, one may also wonder whether lodging a complaint with the DSC on behalf of end users might fall within the scope of action of trusted flaggers (for instance, to complain about platforms’ abusive practices such as shadow banning, recital 55).

The role of trusted flaggers vis-à-vis other reporters

The DSA requires online platforms to put in place notice and action mechanisms that are “easy to access and user-friendly” (Article 16) and to provide recipients of the service with an internal complaint-handling system (Article 20). However, as noted above, these provisions concern all recipients, with no difference in treatment for trusted flaggers. Although their notices are granted priority by virtue of Article 22, which leaves platforms free to choose the most suitable mechanisms, the DSA says nothing about ‘how much priority’ should be guaranteed to trusted flaggers with respect to notices filed not only by end users but also – and especially – by other entities or individuals with whom platforms have agreements in place.

In this respect, guidance would be welcome as to the degree of precedence that platforms are expected to give trusted flaggers’ notices compared to those of other trusted reporters, as would a clarification as to whether the nature of the content may influence that precedence. From the trusted flaggers’ perspective, there should be a rewarding incentive to take on a role that comes with the price tag of ongoing obligations.

 

5 Concluding remarks

While the role of trusted flaggers is not new when it comes to tackling illegal content online, the tasks newly entrusted to the DSCs in this context are. This results in a different allocation of responsibilities for the actors involved, with the declared aims of ensuring harmonisation of best practices across sectors and territories in the EU and better protection for users online. Some open issues, such as the ones put forward above, appear at this stage to be relevant, in particular for ensuring that the trusted flaggers’ mechanism works effectively as an expeditious remedy against harmful and illegal content online. It is expected that the awaited Commission guidelines under Article 22(8) DSA will shed light on these issues. In their absence, there is a risk that the cost-benefit analysis – with the costs being certain and the benefits in terms of actual priority uncertain – might make the “trusted flagger project” unattractive for potential applicants.

Wednesday, 18 January 2023

Proposed AI Liability Directive: The EC lending a helping hand

 



 

Ida Varošanec (PhD student, University of Groningen) and Nynke Vellinga (post-doc researcher, University of Groningen)

 

Photo credit: Cryteria, via Wikimedia Commons

 

 

1. Objectives of the proposal

 

On 28 September 2022, the European Commission published a proposal for an AI Liability Directive together with a proposal to update the complementary Product Liability Directive. In the preceding Report on Artificial Intelligence Liability, the Commission acknowledged the immense potential of artificial intelligence (AI). However, it also identified the risks associated with it. For instance, the connectivity of a product incorporating AI can compromise its safety for users, as it may be susceptible to cyber-attacks. Moreover, the outcomes of AI cannot always be predicted, so ex ante risk assessments may be insufficient to address possible wrongs. The opacity inherent in advanced AI-based products and systems makes it difficult to ascertain responsibility for AI systems’ behaviour and choices. It is pivotal that humans are able to understand how algorithmic decisions were reached in order to bring a liability claim. In particular, the opacity of AI systems can hinder victims in proving fault and causality in such cases. Consequently, the AI Liability Directive aims to ensure that victims of damage caused by AI enjoy protection commensurate with that available where damage has been caused by other products. It also aims to increase trust in new technologies, to contribute to the ‘rollout of AI’ and to improve its development in the internal market by preventing fragmentation and increasing legal certainty through harmonisation. Once adopted, these proposals will complement other AI regulation (e.g. the proposed AI Act) and establish liability rules for software and AI systems in the EU.

 

2. The proposed AI Liability Directive: scope

 

Contrary to what the name might suggest, the proposed AI Liability Directive does not provide any new ground of civil liability. That remains a matter for the national legislature, except when it comes to liability for defective products under the regime of the Product Liability Directive. Instead, the proposed AI Liability Directive provides rules on the disclosure of evidence and a rebuttable presumption of a causal link, both bearing on the burden of proof. These rules cannot be invoked in every tort law case: only fault-based liability cases fall within the scope of the AI Liability Directive, that is, cases where liability for damage caused by (the use of) an AI system is based on fault. Fault encompasses wrongful actions and omissions. Due to the characteristics of AI systems, it can be difficult or prohibitively expensive to prove fault. Consequently, those suffering damage caused by an AI system might not be compensated for the damage suffered, whereas those suffering damage from a non-AI system would be able to obtain compensation, as they do not face the same difficulties in proving fault. The proposed AI Liability Directive would address this discrepancy by providing rules on:

 

‘(a) the disclosure of evidence on high-risk artificial intelligence (AI) systems to enable a claimant to substantiate a non-contractual fault-based civil law claim for damages;

(b) the burden of proof in the case of non-contractual fault-based civil law claims brought before national courts for damages caused by an AI system.’ (art. 1 Proposal)

 

The proposed AI Liability Directive does not apply to risk-based liability claims. However, the proposed new Product Liability Directive does provide similar rules on disclosure of evidence and the burden of proof (art. 8 and 9).

 

The scope of applicability of the proposed AI Liability Directive is partially limited to a specific category of AI system: high-risk AI systems. For the definition of a high-risk AI system, the AI Liability Directive refers to the proposed AI Act. The AI Act identifies and lays down rules according to the level of risk associated with AI systems – those that carry (1) unacceptable risk, (2) high risk, and (3) limited risk. The fourth category – systems that pose a minimal risk (e.g. spam filters) – although within the material scope, is not subject to any concrete rules. The first category – (1) unacceptable risk – concerns AI systems that are a clear threat to the safety, livelihoods and rights of persons (e.g. manipulation and social scoring systems). The third category (limited risk) is subject to specific transparency obligations due to its nature (e.g. deep fakes). High-risk AI systems are those which are embedded in products subject to third-party assessment under sectoral legislation, and those which are not components of products but are deemed high-risk when used in certain areas (e.g. transport, education, safety components, etc.). Such systems are subject to a set of requirements (e.g. risk assessments, mitigation systems, data quality, logging and technical documentation) before being placed on the market.

 

The rules on disclosure of evidence laid down in art. 3 of the proposed AI Liability Directive only apply to these high-risk AI systems. The rules on the burden of proof, however, apply to claims relating to all AI systems (art. 4).

 

3. The proposed AI Liability Directive: disclosures and presumptions

 

3.1 Rebuttable presumption of a causal link

 

Article 4 introduces a rebuttable presumption of a causal link in the case of fault. It allows courts to presume a causal connection between the fault of the defendant and the output produced by the AI system (or its failure to produce an output) under three cumulative conditions. Firstly, the fault of the defendant must be established (demonstrated by the claimant or presumed by the court), consisting in non-compliance with a duty of care under EU or national law. Secondly, it must be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output of the AI system. Finally, the claimant must demonstrate that the output (or the failure to produce one) gave rise to the damage. Paragraphs (2) and (3) differentiate between providers and users of AI systems.

 

The causal link in a claim for damages caused by a high-risk AI system shall not be presumed if the defendant demonstrates that sufficient evidence and expertise are reasonably accessible for the claimant to prove that link (art. 4(4)). Where the claim concerns an AI system that is not high-risk, the presumption of the causal link applies only where the national court considers it excessively difficult for the claimant to prove it (art. 4(5)). Moreover, the defendant can always rebut the presumption of the causal link (art. 4(6)).
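To make the interplay of these conditions and carve-outs easier to follow, the decision logic described above can be summarised in a simplified sketch. The following Python snippet is only an illustration of the structure of Article 4 as described in this post; the field names are hypothetical and it deliberately glosses over the legal nuances a court would have to weigh.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """Simplified, hypothetical representation of a fault-based claim under art. 4."""
    fault_established: bool                # duty-of-care breach shown by claimant or presumed by court
    fault_likely_influenced_output: bool   # 'reasonably likely' link between fault and the AI output
    damage_from_output_shown: bool         # the output (or failure to produce one) gave rise to the damage
    high_risk_system: bool
    evidence_reasonably_accessible: bool   # defendant shows the claimant could prove the link (art. 4(4))
    excessively_difficult_to_prove: bool   # relevant for non-high-risk systems (art. 4(5))
    presumption_rebutted: bool             # defendant rebuts the presumption (art. 4(6))

def causal_link_presumed(c: Claim) -> bool:
    # The three cumulative conditions of art. 4(1), in simplified form.
    if not (c.fault_established
            and c.fault_likely_influenced_output
            and c.damage_from_output_shown):
        return False
    # Carve-out for high-risk systems (art. 4(4)).
    if c.high_risk_system and c.evidence_reasonably_accessible:
        return False
    # For non-high-risk systems, the presumption applies only where proof
    # would be excessively difficult for the claimant (art. 4(5)).
    if not c.high_risk_system and not c.excessively_difficult_to_prove:
        return False
    # The presumption is always rebuttable (art. 4(6)).
    return not c.presumption_rebutted
```

Read this way, the presumption operates as a conditional and rebuttable default rather than a general reversal of the burden of proof.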

 

3.2 The disclosure of evidence

 

Article 3 of the proposed AI Liability Directive establishes the conditions regarding the disclosure of evidence and introduces a rebuttable presumption of non-compliance. This applies to high-risk AI systems as defined in the AI Act.

 

Article 3(1) of the Directive allows a court to order the disclosure of relevant evidence about specific high-risk AI systems that are suspected of having caused damage. Recital (16) confirms that this requirement is not provided for in the AI Act proposal. However, the disclosure provided for in the AI Liability Directive does not seem to be absolute. Rather, it appears to be subject to a proportionality assessment, since disclosure is only allowed to the extent necessary to sustain the liability claim. To that end, national courts ought to consider the legitimate interests of all parties, particularly in relation to the preservation of trade secrets and confidential information. The explanatory memorandum conveys that the aim is to strike a balance between ‘the claimant’s rights and the need to ensure that such disclosure would be subject to safeguards to protect the legitimate interests of all parties concerned, such as trade secrets or confidential information’. In other words, the goal is to balance the claimant’s rights against the need for safeguards, imposed by the court, to preserve trade secrets or confidential information. If the defendant refuses to disclose the requested information, the court will presume that the defendant did not comply with the relevant duty of care. In that case, the defendant can still rebut that presumption by providing evidence to the contrary.

 

Recital (20) confirms that national courts should have the power to take specific measures to ensure the confidentiality of trade secrets during and after the proceedings, in a proportionate manner that balances the interests at stake. Such measures could include restricting access to documents containing trade secrets, and restricting access to hearings and to their records or transcripts, to a limited number of people. However, the courts cannot decide on such measures without considering the need to ensure the right to an effective remedy and to a fair trial, as well as the potential harm that could occur.

 

4. Comment

 

It is commendable that the EU is taking steps to address the information asymmetry between AI systems’ developers and individuals harmed by their creations. The (prospect of the) realisation of liability and compensation can provide an important incentive to AI providers and users to ensure the safe and correct functioning of their systems. With the Product Liability Directive, the proposed AI Act and other product safety rules such as the General Product Safety Directive, the European Commission is designing a comprehensive framework addressing the safety of AI systems and liability for damage caused by those systems.

 

Nevertheless, the AI Liability Directive harbours an important flaw that might have been overlooked: it offers defendants a way to avoid having to disclose evidence. As the proposal currently stands, if the defendant refuses to provide trade secret information about an AI system as evidence, it will be presumed not to have complied with the duty of care. A defendant might decide it is strategically wiser simply to pay compensation in exchange for non-disclosure. After all, trade secrets are of great economic importance to such enterprises, and one of the conditions for the legal protection of trade secrets is that continuous efforts are made to keep the information secret. In other words, non-compliance becomes a choice made in order to avoid disclosure.

 

This is at odds with the drive for transparency of high-risk AI systems in the EU AI Act (art. 13). By offering an option to avoid transparency, the AI Liability Directive undermines the transparency requirement set out in the AI Act, creating tension between the two new instruments. The EC could have taken a clearer stance on transparency and its necessity by carrying the transparency requirement of the AI Act through to the AI Liability Directive.

 

There is an additional disadvantage to the route the EC has chosen. By avoiding the disclosure of the information necessary to establish fault in liability claims, a defendant can also prevent any flaws of the AI system from being disclosed. This might take away any motivation to improve an AI system, as sufficient financial means offer the possibility of keeping its shortcomings hidden from the public eye. The lack of transparency could thereby disincentivise the development and improvement of AI systems. Ultimately, this might negatively impact innovation and trust in AI.

 

 



Thursday, 19 January 2017

When is Facebook liable for illegal content under the E-commerce Directive? CG v. Facebook in the Northern Ireland courts



Lorna Woods, Professor of Internet Law, University of Essex

Introduction

The ubiquity of social media platforms and their significance in disseminating information (true or false) to potentially wide groups of people was highly unlikely to have been in the minds of the European legislators when they agreed, in 2000, the e-Commerce Directive (Directive 2000/31/EC) (ECD). Facebook itself was launched only in 2004. Despite the changing times and technological capabilities, the Commission has decided not to revise the ECD, specifically its safe harbour provisions for intermediaries, in its current digital single market programme. Although the ECD seems set to remain unchanged, the application of the safe harbour provisions raises many difficult questions which have not yet been fully answered at EU level by the Court of Justice. CG v. Facebook ([2016] NICA 54), a decision of the Northern Irish Court of Appeal, illustrates some of these difficulties and certainly raises questions about the proper interpretation of the ECD and its relationship with the Data Protection Directive.

Intermediary Immunity - Legal Framework

The ECD provides immunity from liability for certain ‘information society service providers’ (ISS providers) on certain conditions. To gain immunity, the ISS provider must

- be an ISS provider within the terms of the ECD; and
- fall within one of the following categories:
- a ‘mere conduit’ (Art. 12 ECD);
- a provider of caching services (Art. 13 ECD); or
- a provider of hosting services (Art. 14 ECD).

Each one of these three categories provides for a different level of immunity, which seems connected with the level of knowledge the ISS provider is assumed to have of the problematic content. Here Article 14, which deals with hosting, is the relevant provision. It provides:

1. Where an information society service is provided that consists of the storage of information provided by a recipient of the service, Member States shall ensure that the service provider is not liable for the information stored at the request of a recipient of the service, on condition that:
(a) the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or
(b) the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.
2. Paragraph 1 shall not apply when the recipient of the service is acting under the authority or the control of the provider.
3. This Article shall not affect the possibility for a court or administrative authority, in accordance with Member States' legal systems, of requiring the service provider to terminate or prevent an infringement, nor does it affect the possibility for Member States of establishing procedures governing the removal or disabling of access to information.

The recitals to the ECD give more detail as to the scope of services protected by Article 14 and there is a certain amount of case law on this point, notably Google Adwords (Case C-236/08) and the Grand Chamber decision in L’Oreal v. eBay (Case C-324/09). Recital 42 has been pointed to by the Court in these cases as relevant for understanding the sorts of activities protected by the immunity. Recital 42 refers to services of a

mere technical, automatic and passive nature, which implies that the information society service provider has neither knowledge of nor control over the information which is transmitted or stored.

The ECJ in Google Adwords referred to this as the provider being ‘neutral’ (paras 113-114). The Grand Chamber in its subsequent L’Oreal decision suggested that giving advice on optimising the presentation of offers for sale would mean a provider was no longer neutral (para 114).

The provision protects relevant ISS providers from liability in relation to illegal content, provided they have no knowledge (actual or constructive) of the illegal activity or information and that, if they obtain such knowledge, they act expeditiously to remove it. In L'Oreal v eBay the Court of Justice provided a standard or test by which to measure whether a website operator could be said to have acquired an 'awareness' of illegal activity or illegal information in connection with its services, namely whether "a diligent economic operator would have identified the illegality and acted expeditiously". The CJEU also held that awareness of illegal activities or information may arise as the result of an investigation by the operator itself or where the operator receives notification of such activity. Article 14 does not protect ISS providers from injunctions, or the costs associated with any such injunctions (see Recital 45).

Additionally, Article 15 specifies that, for those falling within Articles 12-14, Member States cannot impose a ‘general obligation’ to monitor content in order to determine whether it is illegal. There has been a considerable amount of dispute as to the relationship between this provision and the scope of immunity, especially given the requirements in L’Oreal. Recital 40 notes that ‘service providers have a duty to act, under certain circumstances, with a view to preventing or stopping illegal activities’ and that the immunity provisions ‘should not preclude the development and effective operation, by the different interested parties, of technical systems of protection and identification and of technical surveillance instruments made possible by digital technology’. The Recitals also state:

(47) Member States are prevented from imposing a monitoring obligation on service providers only with respect to obligations of a general nature; this does not concern monitoring obligations in a specific case and, in particular, does not affect orders by national authorities in accordance with national legislation.

(48) This Directive does not affect the possibility for Member States of requiring service providers, who host information provided by recipients of their service, to apply duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities.

The distinction between general monitoring and specific monitoring has yet to be fully elaborated, and is an issue much discussed in the context of intellectual property enforcement, especially as regards keeping pirated copies of materials down after taking them down in the first place.

Facts of CG

McCloskey opened a Facebook page in August 2012 entitled ‘Keeping Our Kids Safe from Predators’, on which he published details of individuals with criminal convictions for sexual offences involving children. The page was not subject to any privacy settings. One individual who was so named brought an action against Facebook, and an interim injunction was issued requiring Facebook to remove the page and related comments, on the basis that the comments responding to the posting were threatening, intimidatory, inflammatory, provocative, reckless and irresponsible. This was the XY litigation. Immediately after the page was removed, McCloskey set up a new page, Predators 2. CG was identified on this page on 22 April 2013; his photograph was published and there were discussions about where he lived. Comments included abusive and violent language, including support for those who would commit violence against CG and calls for his exclusion from the community in which he lived. The disclosure of CG’s residence was contrary to the position taken by the Public Protection Arrangements in Northern Ireland (PPANI), which took the view that such disclosure interferes with the rehabilitation process.

On 26th April 2013, CG’s solicitors wrote to Facebook and its solicitors in Northern Ireland, claiming the material was defamatory and that CG’s life was at risk. A hard copy of the Predators 2 page was enclosed. Facebook’s response was that CG should use the online reporting tool, but CG expressed a desire not to have to engage with Facebook. By 22 May 2013 Facebook had removed all postings on Predators 2, but on 28 May CG issued proceedings. Subsequently, CG’s solicitors wrote to Facebook complaining that the photograph had been shared 1622 times and that other Facebook users had posted comments threatening violence. They identified the main URL, but not all such instances, which Facebook then requested. This information was provided on 3rd and 4th December and the material was removed on 4th or 5th December. A further reposting of the photograph by RS occurred on 23 December, stating that this was what a “pedo” looked like. A letter of claim was sent to Facebook on 8th January 2014, identifying the relevant URLs, and the page was taken down on 22 January 2014. While CG accepted that the defamation claim was without merit, it was accepted that he was extremely concerned about potential violence as well as the effect on his family.

Judgment at First Instance

The trial judge had to deal with claims against McCloskey as well as claims against Facebook. The trial judge, having reviewed the evidence, concluded that McCloskey’s conduct constituted harassment of CG. The case against Facebook was based on the tort of misuse of private information. To find that there had been such misuse, there had to be a reasonable expectation of privacy in relation to the relevant information, an assessment which should take into account all the circumstances (relying on JR38 [2015] UKSC 42 and Murray v. Express Newspapers [2008] EWCA Civ 446). The judge also accepted the submission that the Data Protection Act, and specifically the category of ‘sensitive data’, provided a useful touchstone as to what information could be seen as private (see Green Corns Ltd v. Claverly Group Limited [2005] EWHC 958). The judge concluded that the use of a photograph or name in conjunction with information which could identify where CG lived, and any information about his family members, was private information. The judge considered that Facebook was put on notice of the problematic nature of the material by the XY litigation (which mentioned the Predators 2 page) and that simple searches would reveal the page, as it had an almost identical name and identical purposes. The trial judge concluded that it was apparent on the face of the posts that consideration of their lawfulness was needed. As regards the Electronic Commerce (EC Directive) Regulations 2002, which implement the ECD in the UK, the judge rejected the contention that there was an obligation to give Facebook notice in a particular form. So, neither the ECD nor the 2002 Regulations protected Facebook from the claim of misuse of private information.

A further claim under the Data Protection Act was added late in the day. The judge concluded that – in the absence of relevant discovery – CG had not established this claim. Facebook appealed. CG also appealed on the data protection point, but did not pursue it.

Court of Appeal Judgment

The Court noted that there was agreement that McCloskey’s behaviour was unreasonable conduct sufficient to give rise to criminal liability (R v Curtis [2010] EWCA 123), and that the 2002 Regulations do not cover injunctions. The Court agreed that this was an appropriate case in which to make an order to take down the material in order to protect CG from continued intimidation [para 40]. The Court noted that the tort of misuse of private information and harassment, while complementary, are not the same, and that a finding of harassment did not automatically mean that there had been a misuse of private information.

As regards the tort, the Court noted that there was no dispute between the parties that this case was about an intrusion, but that the tort would come into play only if there was a reasonable expectation of privacy in the information, which is a fact-sensitive determination. The Court of Appeal noted the public interest in knowing about criminal convictions; it also disagreed with the trial judge about reading across the categories of sensitive information from the DPA. It held:

‘The fact that the information is regulated for that purpose does not necessarily make it private’ [para 45].

Reviewing the material, the Court held that the context of harassment was determinative of the finding that CG had a reasonable expectation of privacy in the material [para 49]. By contrast, RS was protected by the principles of open justice, which allow citizens ‘to communicate the decisions of the criminal justice systems to others’, and therefore CG did not have a reasonable expectation of privacy in relation to that posting [para 51].

The Court then considered whether Facebook could rely on the safe harbour provisions of the ECD and the 2002 Regulations. It held that the 2002 Regulations need to be understood in the light of Art 15 ECD, even though that provision is not formally implemented in the UK. According to the Court, Article 15 ‘clearly’ applied to Facebook [para 52]. While not expressly stated, the Court’s approach is based on the assumption that Article 14 (the safe harbour provision for those providing hosting services) and Regulation 19 of the 2002 Regulations, which implements it, also apply.

The Court then considered the issue of notice. Facebook argued that CG had not given proper notice, on the basis that CG had not used Facebook’s online submission process. The Court of Appeal agreed with the trial court’s dismissal of this argument, stating that ‘[a]ctual knowledge is sufficient however acquired’ [para 58]. Facebook challenged the approach taken at first instance, namely that Facebook had the resources to find the material and assess it [High Court, para 61]. It was also argued that the way the High Court approached the question of constructive knowledge implied a monitoring obligation: the trial judge had referred to the XY litigation; to that litigation plus the letters of CG’s solicitors; and to the litigation together with some elementary investigation of the profile. The Court of Appeal agreed with these concerns. It stated the question as being:

Whether Facebook had actual knowledge of the misuse of private information … or knowledge of facts and circumstances which made it apparent that the publication of the information was private

before commenting that

[t]he task would, of course, have been different if there had been a viable claim in harassment made against Facebook [para 62].

It did not elaborate the basis or extent of the difference.

The Court concluded that the XY litigation did not fix Facebook with sufficient notice; it could only do so if Facebook were subject to a monitoring obligation. In any event, knowledge of a propensity to harass did not fix Facebook with notice of the private information. As regards the correspondence, the Court held that this too was insufficient to fix Facebook with notice: while it referred to the problematic content, it did not refer to misuse of private information. ‘The correspondence did not, therefore, provide actual notice of the basis of claim which is now advanced’ [para 64]. The Court also considered that there was nothing in the letters to indicate that the information was private. So, while ‘the omission of the correct form of legal characterisation of the claim ought not to be determinative of the knowledge and facts and circumstances which fix social networking sites such as Facebook with liability’, it is necessary to identify ‘a substantive complaint in respect of which the relevant unlawful activity is apparent’.

Here, since there was no indication in the letter of claim that the address was the issue, the Court did not ‘consider that the correspondence raised any question of privacy in respect of the material published’ [para 69]. By contrast, in the letter of 26th November, CG referred to the general identification of where he was living and the threat from paramilitaries. This was sufficient to establish knowledge of facts and circumstances in relation to that particular post. Referring to the Court of Justice in L’Oreal, the Court noted that Facebook is obliged to act as a diligent economic operator. This point was not argued; Facebook was found to be liable in respect of that post for the period 26th November to 4/5 December.

The burden of proof is in the first instance on the claimant to show knowledge; thereafter the ISS provider must prove that it did not have such knowledge.

As regards the DPA, it was agreed that the Predators page contained personal data and sensitive personal data; the issue was whether Facebook Ireland could be seen as subject to the UK DPA. The ECJ rulings in Google Spain (Case C-131/12) and Weltimmo (Case C-230/14) were argued before the Court. The Court did not accept the submission that Google Spain was limited to its particular facts, noting the concern that the protection offered by the Data Protection Directive would be undermined if data controllers established outside the EU were excluded. The Court noted that Weltimmo in fact built on the approach in Google Spain. It concluded that Facebook is a data controller established in the UK for the purposes of the DPA. Although the Court accepted that the ECD does not cover data protection, and this is reflected in Regulation 3 of the 2002 Regulations, the Court held at para 95:

‘The starting point has to be the matter covered by the e-Commerce Directive which is the exemption for information society services from the liability to pay damages in certain circumstances … We do not consider that this is a question relating to information society services covered by the earlier Data Protection Directive and accordingly do not accept that the scope of the exemption from damages is affected by those Directives.’

Comment

This case is one of a number coming through the Northern Irish court system regarding different types of problematic content and the responsibility of social media platforms to take action against such content. Shortly before this case was handed down, the High Court handed down its decision in J20 v Facebook Ireland Ltd ([2016] NIQB 98). Other cases are working their way through the system: AY v Facebook (Ireland) Ltd ([2016] NIQB 76), concerning naked images of a schoolgirl on a ‘shame page’; MM v BC, RS and Facebook ([2016] NIQB 60), concerning revenge porn; and Galloway v Frazer and Google t/a YouTube ([2016] NIQB 7), concerning defamatory and harassing videos. While this case is rooted in the particular cultural and legal context of Northern Ireland, and raises questions on the meaning of private information, it also raises questions about the interpretation of EU laws, notably the ECD and the DPD.

The first point to note is that the Court does not directly address the question of the applicability of Articles 14 and 15 ECD, beyond stating that Article 15 clearly applies. Article 15 is dependent on the ISS provider providing services that fall within one of Articles 12, 13 or 14 ECD, with Article 14 being the relevant one here. So the question is whether Article 14 ECD (and consequently Regulation 19 of the 2002 Regulations) applies. While the text of Article 14 ECD refers to ‘the storage of information provided by a recipient of the service’, the case law makes it clear that not just any storage will do: the service provider must be neutral as regards the content, and merely technical and passive. In this regard, the services Facebook provides concerning information of interest to Facebook users (the News Feed and content recommendation algorithms, as well as Ad Match services) may mean that the question of neutrality and passivity is at least worthy of investigation here, in that Facebook may promote certain content (in the terms of L’Oreal, para 114). Of course, in Netlog (Case C-360/10) the Court of Justice held that a social media platform could benefit from Article 14, but this does not mean that all will – much will depend on the facts (see e.g. the Commission’s 2012 Working Paper on trust in the digital single market (SEC(2011) 1641 final, accompanying COM(2011) 942 final)).

Assuming Article 14 (and its UK equivalent, Regulation 19) applies, the next question is whether Facebook was on notice. The ECD is silent on the nature of any formalities, leaving it to Member States and industry (via self-regulation per Recital 40) to fill in the detail. In its 2012 Working Paper, the Commission acknowledged that there were diverging views as to what notice requires, ranging from those who argued that nothing less than a court order should be accepted (seemingly focussing on actual knowledge only) to those who suggested that general awareness of the use of the site for illegal content was sufficient (which covers constructive knowledge) (pp. 33-34). There seem to be three main issues here:

- Whether notice has to be given in any particular format;
- Whether notice has to identify the illegality or whether identifying the problematic content will do; and
- The relationship between constructive notice and Article 15, also bearing in mind the obligations of the diligent economic operator.

Facebook argued, of course, that a person complaining about content should use the tools provided by Facebook and provide rather precise information. The Court, rightly, held that to require a particular format to be used would run counter to the aim (particularly with reference to the 2002 Regulations) of facilitating the ability of users to make complaints. The position of the Court with regard to the need to provide URLs is less clear. The need to provide specific URLs makes things difficult for claimants, especially those who seek orders for content to be taken down and to stay down (seen particularly in the field of intellectual property enforcement, for example even in L’Oreal). In this case, where the Court found Facebook liable, CG had provided specific URLs, but the Court is silent on whether the lack of specific URLs was a determinative factor in the other instances. It is submitted that, provided sufficient identifying information about the content is given, precise URLs should not be required, especially for a diligent economic operator (discussed below).

The Court focussed on the question of whether CG sufficiently identified the reason why the content was illegal. In this, the Court observes that the omission of the correct legal characterisation is not determinative; to have held to the contrary would undermine the ability of claimants without lawyers to have material taken down. The Court moves on to suggest that the relevant unlawful activity has to be apparent. It does not consider to whom such unlawfulness must be apparent, or indeed the prior question of whether the ECD requires merely notification of content or activity perceived as illegal by the complainant, rather than a justification of why the complainant thinks it illegal. While on the facts of this case there are concerns that CG referred to causes of action that were clearly wrong (e.g. defamation), it is arguable that the Court’s position needs further refinement. Certainly the Court’s approach on this aspect seems generous to Facebook in terms of what it needs to be told.

In this regard a number of comments can be made. While an operator would need to make an assessment about the legitimacy of a take-down request, that is a separate issue from the fact of being notified that someone thinks some content is problematic. Further, there may be a world of difference between what a man on the street might recognise and what the diligent economic operator should recognise, and the detail required for each. Indeed, in L’Oreal, the ECJ held:

although such a notification admittedly cannot automatically preclude the exemption from liability provided for in Article 14 of Directive 2000/31, given that notifications of allegedly illegal activities or information may turn out to be insufficiently precise or inadequately substantiated, the fact remains that such notification represents, as a general rule, a factor of which the national court must take account when determining, in the light of the information so transmitted to the operator, whether the latter was actually aware of facts or circumstances on the basis of which a diligent economic operator should have identified the illegality (paras 121-122).

This suggests that a diligent economic operator may not just rely on what a complainant said, but may have to take steps to fill in the blanks. As the Commission reported in 2012, it has been suggested by some that the degree to which it is obvious that the activity or information is illegal should play a role in this assessment. Some content is more obviously problematic than other content. This position is not incompatible with the approach of the Court here: the problem for CG is that an address is not usually that problematic in privacy terms; it was the context (not apparent on the face of it) that made it so [para 69]. This distinction may have relevance for the AY litigation, if not the revenge porn case – depending on the nature of the images.

The final point of concern relates to general monitoring. The Court’s rejection of the possibility that becoming aware of a particular type of content (as from the XY litigation) puts the platform on notice as a consequence deserves further examination. This depends on what is meant by ‘general monitoring’ as opposed to a ‘specific’ monitoring obligation, accepted by recital 47 ECD and recognised by the Commission in its 2012 Working Paper (p. 26). It is unfortunate that the Court did not give this more attention. While the case law has made clear that filtering of all content, for example, constitutes general monitoring (SABAM v Scarlet (Case C-70/10)), it has been argued – principally in the context of IP enforcement – that searching for a particular (recurring) instance of content is not. Such a broad view of general monitoring as the Court here adopted also seems to reduce the space in which the diligent economic operator acts, raising questions about the meaning of L’Oreal. Note also that the Commission in its recent review observed that ‘there are important areas such as incitement to terrorism, child sexual abuse and hate speech on which all types of online platforms must be encouraged to take more effective voluntary action to curtail exposure to illegal or harmful content’ (COM/2016/0288 final). This suggests that the Commission may expect such platforms to be proactive and not merely reactive.

Perhaps the most significant point, and one on which a reference should perhaps have been made, is the relationship between the ECD and DPD, a point not yet dealt with in English law (see Mosley v Google [2015] EWHC 59 (QB)). The Court accepted fairly readily that Facebook (Ireland) falls under the UK DPA, but then insisted that, despite the fact that data protection is excluded from the field of application of the ECD, Facebook pages and comments fell within the “matter covered by the e-Commerce Directive”, which provides a “tailored solution for the liability of [ISS providers] in the particular circumstances” set out in the ECD. It did not explain why, beyond asserting that the ECD safe harbour provisions do ‘not interfere with any of the principles in relation to the processing of personal data, the protection individuals ... or the free movement of data’ [para 95]. In this assessment, the Court overlooked the fact that under the DPD a remedy must be provided to individuals so as to make their rights effective, and that the protection awarded to data subjects should not vary depending on the mechanism used for the processing. Furthermore, Recital 14 to the ECD elaborates that

The protection of individuals with regard to the processing of personal data is solely governed by Directive 95/46/EC … the implementation and application of this Directive should be made in full compliance with the principles relating to the protection of personal data.

Whilst a Member State was free to provide more far-reaching protection to intermediaries, this freedom reaches its limit when it conflicts with another harmonised area of EU law, such as data protection. The Court’s position on this point, and especially its reasoning, in the light of the terms of both directives, is not convincing.

In sum, the outcome – liability for Facebook on one aspect of the content posted – looks on the face of it like a narrowing of immunity. The reality points in a different direction. While there are a number of problematic issues with which the court had to deal, the impact of this judgment lies in the statements of general principle which the Court made. Significantly, these fell into areas ultimately governed by EU law, rather than purely domestic matters. It is far from certain that those issues are clearly determined at EU level, nor that the Court’s assessment here is free from doubt.



Saturday, 17 September 2016

Public wi-fi and liability for illegal downloads: the CJEU judgment in McFadden




Lorna Woods, Professor of Internet Law, University of Essex

Is a business which offers free wi-fi to its customers liable if they download content unlawfully? That was the main issue in the recent CJEU judgment in McFadden, a rare case on Article 12 of the EU’s eCommerce Directive, which provides that the provider of an information society service is not liable for the transmission of information if it is a ‘mere conduit’ (as further defined) of that information. While most of the decision seems to fit well into existing law on intermediaries and the provision of services in general, there are a couple of potentially unexpected developments: those relating to costs for injunctive relief, and the requirements placed on those providing access to the Internet.

Facts

McFadden runs a lighting and sound system shop in which he offers free access to a wi-fi network to the general public in order to draw the attention of potential customers to his goods and services.  This network was used to download some content unlawfully. The question was whether McFadden was indirectly liable (for not making his network secure) or whether he could rely on intermediary immunity from liability contained in the e-Commerce Directive.

Judgment

The first requirement for someone to be able to rely on the e-Commerce Directive is to show that that body is an information society service provider within the terms of the Directive. The problem here is that the wi-fi was provided free of charge, so does McFadden provide an economic service so as to bring himself within the scope of the TFEU?

The key requirement is that a service is one which is ‘normally provided for remuneration’ (as found in the case law on Article 57 TFEU, which defines ‘services’ for free movement purposes, as well as Article 1(2) of Directive 98/34, which sets out rules on consulting about new technical barriers to information society services). Referring to Recital 18 of the e-Commerce Directive, the Court confirmed that, on a proper interpretation, the definition of ‘information society service’ in Article 12(1) of the directive covers ‘only those services normally provided for remuneration’ (para 39).

Applying this principle to the present context, the Court noted that “it does not follow that a service of an economic nature performed free of charge may under no circumstances constitute an ‘information society service’” (para 41). For example, the service may be paid for by a third party, as is the case with free-to-air television, which is often financed through advertising revenue rather than viewer subscription. Here McFadden was providing the service as an advertising technique, so the Court concluded that the provision of wi-fi fell within Article 12 (para 43).

The Court then considered the application of Article 12. The national court recognised that a first requirement would be the transmission of content, but questioned whether other conditions – notably the existence of a contractual relationship – would also be required. The Court confirmed that transmission is required and – following the wording of recital 42 – that the activity of transmission as a ‘mere conduit’ is of a mere technical, automatic and passive nature (para 48). While the Court accepted that there might be an interpretation of the German language version that suggested the existence of further requirements, there was no such suggestion in other language versions. In the interests of a uniform interpretation of the directive across the various language versions, the Court concluded that there were no further conditions to be satisfied (para 54).

The next question related to the types of immunity from liability, the provisions in relation to mere conduit, caching and hosting being expressed in slightly different terms. So while Article 14 requires a host to act expeditiously on becoming aware of illegal content in order to continue to benefit from immunity, there is no such requirement in Article 12. The Court suggested (as the Advocate General had done) that this difference may result from the different nature of the services provided. In particular, a host may have greater opportunity to identify illegal content than a mere conduit. On this basis, the Court thought it inappropriate to imply into Article 12 a condition that a mere conduit should be required to act expeditiously (para 65).

The national court further questioned whether Article 12(1), read in conjunction with Article 2(b), was to be interpreted as meaning that there were additional conditions, beyond those in Article 12, to which a service provider providing access to a communication network would be subject. Again, the Court rejected this proposition on the basis that it should not disturb the balance between the various interests achieved by the legislature (paras 68-69).

The Court then considered the type of relief which an injured third party might claim, and particularly whether such a party could seek injunctive relief against a mere conduit. While Article 12 precludes a damages claim, and claims for costs (paras 74-75), according to Article 12(3) a national court may order the mere conduit to take action to end the infringement of copyright (para 76), notably by way of injunctive relief. Costs in respect of such an action may be claimed (para 78).

The next set of questions concerned the measures that might be required of a mere conduit, given the relevance of Article 17(2) of the EU Charter of Fundamental Rights (EUCFR) (right to intellectual property), Article 16 EUCFR (right to conduct a business), and Article 11 EUCFR (freedom of expression).  The Court, adopting a margin of appreciation style approach, recognised that in case of conflict of rights it is for the national authorities to ensure a fair balance of interests (citing the CJEU ruling in Promusicae) (para 83).  The national court envisaged three possible measures McFadden could take:

- examining all communications passing through an internet connection;
- terminating that connection; or
- password-protecting it.

Article 15 precludes requiring intermediaries to monitor traffic, and therefore the first obligation would be contrary to the terms of the eCommerce Directive. Requiring McFadden to terminate the connection would constitute a serious interference with his right to run a business under Article 16 EUCFR and so cannot be considered as reaching a ‘fair balance’ (paras 88-89). The third measure also constitutes a restriction of Article 16, as well as of the Article 11 rights of users. In neither instance, however, is the essence of the right undermined. The Court noted that ‘the measure adopted must be strictly targeted, in the sense that it must serve to bring an end to a third party’s infringement of copyright or of a related right but without thereby affecting the possibility of internet users lawfully accessing information using the provider’s services’, as otherwise the measure would – following the CJEU ruling in UPC Wien – be disproportionate (para 93). Securing the network by password-protecting it (but not blocking access to any particular content)

‘may dissuade the users of that connection from infringing copyright or related rights, provided that those users are required to reveal their identity in order to obtain the required password and may not therefore act anonymously’ (para 96).

Given that this third measure is the only available option, not securing the network would result in the right to intellectual property being deprived of any protection. Requiring password protection therefore strikes a fair balance.

Comment

In following its Advocate General in finding that a free service falls within the scope of the e-Commerce Directive (and indeed the Treaty framework), the Court’s approach is not unexpected. This is a long-standing approach of the Court in relation to services in general – indeed the Court refers to old television jurisprudence as authority – and it also follows its approach in Papasavvas (C-291/13), which related to an online service funded through advertising revenue rather than direct payments by users.

This approach is potentially necessary in the context of the Internet, where many services are provided ‘free’ but are in fact paid for by advertising or other secondary uses of user data. Nonetheless, it is worth noting that in McFadden there was some economic context: McFadden was using the offer of free wi-fi as a form of advertisement. The extent to which some such connection is required has not been addressed. The Advocate General noted that 'there is no need to consider whether the scope of Directive 2000/31 [the e-commerce Directive] might also extend to the operation of such a network in circumstances where there is no other economic context' (para 50). The Court did not address this at all.

The judgment contains little new as far as the conditions for the application of Article 12 are concerned, though it is perhaps useful to have the confirmation that further conditions should not be implied into the Directive. 

The Court determined that Article 12 (and presumably also Articles 13 and 14) gives protection against costs which – as the Advocate General noted – could be as punitive as damages themselves. Note, however, the distinction between an action for damages and an action for injunctive relief. Article 12 does not protect an intermediary from being on the receiving end of an injunction, and the prohibition on costs does not extend to costs incurred in relation to the giving of notice or an action for an injunction. This is a difference in approach from that of the Advocate General and may be significant for those companies which have challenged injunctions. The litigation in England and Wales in regard to blocking injunctions for trademarks in Cartier may be a case in point.

The extent to which intermediaries are under an obligation to put an end to infringing behaviour by end users has been the subject of much discussion, and this judgment is useful in bringing some clarity. The Court locates this discussion within the framework of fundamental rights, and starts off by indicating that this is a matter for national authorities – seemingly taking a leaf out of the Strasbourg court’s book on the margin of appreciation. In practice, however, the Court gives a pretty clear steer to the national court on the ‘right’ outcome, and in this there is a marked difference between its approach and that of the Advocate General. Both agree that the first two measures could not be required. The Advocate General took the view at para 145 of his opinion that imposing security obligations

'would not in itself be effective, and thus its appropriateness and proportionality remain open to question'.

He also thought that forcing password protection could discourage or hinder usage of the wi-fi service, undermining the business model of the operator. In contrast, the Court stated that password-protecting the network is permissible; indeed, it could be seen as suggesting that the enforcement of copyright requires the network to be protected, as the only means of giving the right to intellectual property any protection. Does this suggest that all networks would be required to impose such measures on the chance that someone might use them to infringe copyright? On that basis, this judgment could have worrying implications for anonymity on the Internet, and certainly may have implications for the development of wi-fi/public access services.


Photo credit: www.amazon.co.uk