Claire Walker

With the Heartbleed web vulnerability in the tech headlines, the practical guidance issued recently by EU regulators on when to alert individuals to data breaches (and on preventive steps to reduce the risk of breaches occurring in the first place) is particularly timely. Datonomy highlights some of the key recommendations on when to make the difficult judgement call over notification.

Why the new guidance? Does it apply to your organisation?

The recent Opinion issued by the EU’s Article 29 Working Party (the body made up of national data protection regulators) concerns the ever-topical issue of personal data breach notification. Specifically, it sets out the regulators’ collective view on when data controllers should alert data subjects to a personal data breach which is likely to adversely affect those individuals’ personal data or privacy.

The guidance sets out good practice for “all controllers”. Strictly speaking, the obligation to report data breaches only applies to communications services providers under current rules; in practice, however, handling a data breach is a business-critical issue for all organisations. The illustrations in the guidance are drawn from a wide range of contexts. As well as analysing the triggers for notifying individuals that their data has been compromised, the guidance sets out practical steps to reduce the risk of breaches occurring and/or to mitigate their severity. It is therefore a must-read for all in-house counsel and their colleagues in the IT function – both in devising a data breach response plan, and in designing systems to reduce the risk of vulnerabilities in the first place.

A quick recap on breach notification obligations – current and future

As reported by Carsten on Datonomy last year, communications service providers (CSPs) are already subject to reporting obligations under EU Regulation 611/2013. CSPs’ obligations are twofold:

  • to report all data breaches to the regulator (within 24 hours); and
  • to notify the data subject “without undue delay” when the breach is “likely to adversely affect the personal data or privacy” of that individual.

Notification to the affected individual is not required if the CSP has implemented “appropriate technological protection measures” to render the data unintelligible to any person who is not authorised to access it. The Regulation defines what constitutes “unintelligible” by reference to encryption and hashing. It does not set out specific standards, but it authorises the Commission to publish a separate indicative list of technological protection measures that are sufficient for that purpose.

As Datonomy readers will be aware, these notification obligations are likely to be formally extended to all data controllers, regardless of sector, under the draft EU Data Protection Regulation.

However, notification of data breaches, both to the regulator and to affected individuals, is already an important practical consideration for all organisations from a damage limitation point of view. While not risk-free, voluntary notification to the regulator and to individuals may help to mitigate the sanctions imposed by a regulator where a data controller has suffered a data breach as a result of falling short of its data security obligations under the UK Data Protection Act.

Wide-ranging illustrations of when a breach is “likely to adversely affect” a person’s privacy

The guidance sets out seven different and wide-ranging breach scenarios. These include: loss of laptops containing medical data and financial data; web vulnerabilities exposing life insurance and medical details; unauthorised access to an ISP’s customers’ details including payment details; disclosure of hard copy credit card slips; unauthorised access to subscribers’ account data both through unauthorised disclosure and through coding errors on a website. Whilst not exhaustive, these worked examples do provide useful analysis of the different types of harm which could trigger the obligation to notify individuals.

The guidance breaks the analysis down into three different categories of data breach, and gives illustrations of the adverse privacy effects of each type. These are:

Confidentiality breach: unauthorised disclosure of, or access to, personal data, which can lead to ID theft, phishing attacks, misuse of credit card details, compromise of other accounts or services which use the same log-in details, and a wide range of other detrimental effects on the individual’s family and private life and work prospects. Most of the examples focus on confidentiality breaches.

Availability breach: accidental or unlawful destruction or loss – which can lead to disruption, delay and even financial loss. (The illustrations of financial loss, and the consideration of “secondary effects” in a data availability context, will also be of interest to those negotiating liability provisions in commercial deals which involve the handling of personal data.)

Integrity breach: the alteration of personal data – which can lead to serious consequences for medical and customer data.

The distinction is significant, particularly as the need to notify individuals about confidentiality breach can be mitigated or eliminated by the use of appropriate encryption – see below.

The guidance also stresses the need to consider likely “secondary effects” of a breach which may not appear in itself to adversely affect privacy. The example given here is of the hacking of a music service website. While the direct effect may be limited (leak of names, contact details and users’ musical preferences) it is the secondary effect – the fact that passwords have been compromised, and that users may use the same passwords across other accounts – which creates the need to notify individuals of the breach.

Prevention better than cure: practical steps to avoid the need to report breaches to individuals

In relation to each scenario, the guidance sets out examples of appropriate safeguards to reduce the risk of such breaches occurring in the first place and/or to mitigate their privacy impact. As noted above, notification to individuals is not required if a data controller can satisfy the regulator that the data has been rendered unintelligible. Common themes which run through these practical recommendations include:

  • Encryption: First and foremost, the guidance emphasises the need for appropriate, state-of-the-art encryption with a sufficiently strong and secret key.
  • Secure storage of passwords: salted and using a state-of-the-art cryptographic hash function – simply hashing passwords is unlikely to meet the “unintelligible” data exemption from notification.
  • Password policies: requiring stronger password choices for users, and requiring password resets whenever passwords are compromised.
  • Vulnerability scanning: to reduce the risk of hacks and other breaches.
  • Regular back-ups: to mitigate the effects of an availability breach.
  • Systems and process design: to reduce the risk of breach and/or mitigate its effects – the examples given include dissociation of medical information from individuals’ names.
  • Access controls: limiting global access, and restricting access to databases on a “need to know” and “least privilege” basis – including minimising access given to vendors for system maintenance.
  • Staff training: e.g. on how to delete data securely.
  • Incident management policies: the guidance also highlights the importance of good incident management policies in limiting the duration and effects of data breaches.
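For readers on the IT side, the salted-password recommendation above can be sketched in a few lines. This is an illustrative example, not drawn from the Opinion itself: it uses Python’s standard-library scrypt, one example of a memory-hard, state-of-the-art hash function, with illustrative (not prescribed) cost parameters.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash of a password using scrypt, a memory-hard KDF.

    A unique random salt per password means two users with the same
    password get different stored hashes, defeating precomputed
    rainbow-table attacks - plain unsalted hashing does not.
    """
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, stored):
    # Re-derive with the stored salt and compare in constant time
    # to avoid leaking information through timing differences.
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Only the salt and digest would be stored; the password itself never is. Whether such a scheme renders data “unintelligible” for the notification exemption would still be a judgement call on the specific facts.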

Be proactive and plan!

The new Opinion provides organisations with helpful guidance on making the difficult judgement call over when to notify customers and other individuals about breaches of their personal information. Perhaps even more importantly, it sets out some of the minimum preventive standards that regulators expect data controllers to adopt in order to demonstrate that they have implemented “appropriate” security measures under the current rules. The Opinion urges data controllers to “be proactive and plan appropriately”. The guidance will help organisations decide when they need to alert individuals – but it is having a crisis management team and a (rehearsed) action plan in place that will enable a calm and swift response, should a data breach arise.

Matthias Vierstraete

On 8 April 2014 the European Court of Justice ruled that the Data Retention Directive 2006/24/EC interferes in a particularly serious manner with the fundamental rights to respect for private life and to the protection of personal data. The Directive is declared invalid.

A.    The Directive

Directive 2006/24/EC strives for harmonization of the Member States’ national legislations providing for the retention of data by providers of publicly available electronic communications services or of a public communications network for the prevention, investigation, detection and prosecution of criminal offences. The initial intention was that service and network providers would be freed from legal and technical differences between national provisions.

The Directive and the national laws implementing it were often criticized, the main argument being that massive data retention endangered the right to privacy. Advocates of the rules, however, argued that they were necessary for authorities to investigate and prosecute organized crime and terrorism.

B.    The Court of Justice

By way of preliminary rulings referred to the Court of Justice of the European Union, the Irish High Court and the Austrian Constitutional Court asked the Court of Justice to examine the validity of the Directive, in particular in the light of two fundamental rights under the Charter of Fundamental Rights of the EU, namely the fundamental right to respect for private life and the fundamental right to the protection of personal data.

  • Analysis of the data to be retained

The Court of Justice reviewed the data which providers must retain pursuant to the Directive. This data includes data necessary to trace and identify the source of a communication and its destination, to identify the date, time, duration and type of a communication, to identify the location of mobile equipment, the name and address of the user, the number called, IP addresses, etc. The Court observes that the retention of this data makes it possible to know the identity of the participants in communications, to identify the time of the communication, the place from which the communication took place and the frequency of communications with certain persons (§26).

This data, according to the Court, allows very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained, such as the habits of everyday life, places of residence, movements, social relationships and the social environments frequented.

  • Analysis of the interference with fundamental rights

The Court comes to the conclusion that both requiring the retention of the data and allowing competent national authorities to access those data constitutes in itself interference with the fundamental right to respect for private life and with the fundamental right to the protection of personal data (respectively articles 7 and 8 of the Charter of Fundamental Rights of the European Union) (§ 32 – 36).

The Court agrees with the Advocate General when it states that the interference is “particularly serious”. The Court in this respect holds that “the fact that data are retained and subsequently used without the subscriber or registered user being informed is likely to generate in the minds of the persons concerned the feeling that their private lives are the subject of constant surveillance” (§37).

According to the Court, this interference is not only serious but also unjustified. Although the retention of data required by the Directive does not adversely affect the essence of the rights to respect for private life and protection of personal data (the content of the communications as such may not be reviewed), and although the Directive genuinely satisfies an objective of general interest (public security), the Court is of the opinion that the Directive has exceeded the limits imposed by the proportionality principle (§69):

  • The Directive covers all persons and all means of electronic communications as well as all traffic data without any differentiation, limitation or exception being made in the light of the objective of fighting against serious crime (§57);
  • The Directive fails to lay down any objective criterion by which to determine the limits of the access of the competent national authorities to data and their subsequent use (§60);
  • The data retention period is set at between a minimum of 6 months and a maximum of 24 months without any distinction being made between categories of data and not stating that the determination of the period must be based on objective criteria (§63 – 64);
  • The Directive does not provide for sufficient safeguards to ensure effective protection of data against the risk of abuse and against unlawful access and use (§66);
  • The Directive does not require data to be retained within the EU and thus does not meet the Charter’s requirement that compliance control by an independent authority is ensured.

The Court of Justice thus declares the Directive invalid.

C.    What’s next?

Following the Court’s invalidation of the Directive, one could wonder how this will affect European legislation and national legislation.

  • Europe

The invalidity ruled by the Court applies from the day the Directive entered into force. It is as if the Directive never existed.

The European Commission stated in a first reaction that it “will now carefully assess the verdict and its impacts”. It is not clear whether the Commission will draft new legislation replacing the invalidated Directive. Given that the current Commission’s term only runs until 31 October 2014, new legislation is not expected to be put forward soon.

  • Member States

Member States having transposed the Directive into national laws may now consider the future of these laws.

Where a national law is a literal transposition of the now invalidated Directive, it meets the same fate. One may consider that, in such a situation, Member States should redraft their laws in order to bring them into line with the relevant Directives (95/46/EC and 2002/58/EC) and the Charter of Fundamental Rights of the European Union.

If national law deviates from the Directive, Member States should assess whether the deviations are in line with the relevant Directives (95/46/EC and 2002/58/EC) and the Charter of Fundamental Rights of the European Union.

The Court of Justice’s ruling may also have an impact on national cases concerning the legality of national laws implementing the Directive, as there are several cases pending before the constitutional courts.

  • Austria and Ireland are, of course, at the origin of the European Court of Justice’s ruling, following the references for a preliminary ruling by the Austrian Constitutional Court and the Irish High Court concerning the validity of Directive 2006/24/EC;
  • Belgium: On 24 February 2014, the Belgian “Liga voor Mensenrechten” and “Ligue des droits de l’Homme” together filed a complaint before the constitutional court in order to obtain cancellation of the Belgian law implementing the Directive. The complaint was funded through crowdfunding. Following the Court of Justice’s ruling, some political parties already asked government to take the necessary steps and to amend the current legislation;
  • Bulgaria: In 2008, the Bulgarian Constitutional Court found part of the national law incompatible with the right to privacy;
  • France: In 2006, the French Constitutional Court ruled that French law provisions similar to those provided for in the Directive are not contrary to the constitution. However, in December 2013, the French data protection authority (CNIL) reacted vigorously against a new law enabling certain ministries, including French secret services, access to data retained by telecommunications operators, internet and hosting service providers, without prior approval from a judge. On that occasion, the CNIL called for a national debate on surveillance issues which could be influenced by the recent ECJ’s ruling.
  • Germany: The German Constitutional Court already declared the German implementing act unconstitutional in 2010;
  • Romania: In 2009, the Romanian Constitutional Court declared the national law on data retention unconstitutional as breaching, among others the right to privacy and the secrecy of correspondence;
  • Slovakia: In 2012, a complaint was filed before the constitutional court in order to assess the conformity with the constitution;
  • Spain: The Directive was implemented into national laws in 2007. The Spanish data protection authority (AEPD) had voiced its reservations about the Directive and requested the Government to accompany the implementation of these rules with measures curtailing the impact on data subjects’ privacy;
  • Sweden: In May 2013, Sweden was ordered to pay the European Commission 3 million EUR because it had failed in its obligation to implement the Directive on time;
  • United Kingdom: As yet there has been no official comment from the UK government or the Information Commissioner on the ruling of the Court of Justice. Controversial 2012 proposals for a Communications Data Bill to overhaul and significantly extend the UK’s data retention obligations were already in the political long grass – and the Court of Justice’s ruling means they are likely to stay there as we understand it.

Obviously the current situation creates uncertainties and we understand the issue will be very much discussed in Brussels (and elsewhere) in the coming weeks.

Sylvie Rousseau and Matthias Vierstraete, Olswang Brussels

Jai Nathwani

With cyber-security tipped as one of the top tech trends for 2014, a lot has already been written about the controversial data security breach proposals in Europe. But what is happening elsewhere in the world? We hear from one of Datonomy’s Asia correspondents, Olswang Partner Elle Todd.

Datonomy was pleased to lead a discussion and mock cyber security breach scenario alongside the local chapter of the IAPP in Singapore last week, where such issues are gaining a lot of attention and interest.

The engaging session, attended by a variety of practitioners, followed the unfortunate exploits of a fictitious international e-commerce company faced with an anonymous threat from an individual claiming that they had managed to obtain the customer database and would release it to the blogosphere. As the morning unfolded, more facts and problems emerged for the company and the audience discussed how best to respond to the potential disaster from PR, cyber risk management and legal perspectives.

Security breaches on the rise

Singapore has not been immune to cyber and other security breaches in recent years. The most high-profile incident occurred in November last year, when the Singapore Prime Minister’s website was hacked by hacktivist group Anonymous. Another concerned the theft of wealthy client data from Standard Chartered Bank in December 2013.

From this July there will be some new legal changes that companies will need to navigate in thinking through their response if they fall victim to such an incident.

New rules and regulations

As Datonomy has reported, Singapore will get its first ever comprehensive data protection legislation with effect from 2 July this year. Containing many provisions and principles which will be familiar to those versed in European law, the new Act requires organisations to “protect personal data in its possession or under its control by making reasonable security arrangements to prevent unauthorised access, collection, use, disclosure, copying, modification, disposal or similar risks”. Beyond that, however, there are no notification obligations in the event of a cyber or other security breach. A company will therefore need to ensure that its systems are adequate so that a breach does not give rise to regulatory fines or other censure, but it does not have a legal obligation to notify the regulator or customers.

The financial services industry in Singapore does not get off so lightly, however, since the Monetary Authority of Singapore (the financial services regulator) has published new rules which come into force just the day before the new personal data protection act. The MAS rules will require banks to “make all reasonable effort to maintain high availability for critical systems” and to notify the Authority “as soon as possible but not later than 1 hour, upon the discovery of a relevant incident”, being a system malfunction or IT security incident which has a severe and widespread impact on the bank’s operations or materially impacts the bank’s service to its customers.

A five year masterplan

Developments in Singapore are of interest internationally not just because of new legislation, but also because of the significant resources Singapore is investing to support organisations in fighting cyber threats.

Last year the Singapore Government announced a ‘Five Year National Cyber-Security Masterplan’ which will focus on three key areas:

-       Enhancing security of critical infocomm infrastructure;

-       Increasing efforts to promote infocomm security adoption among end-users and businesses; and

-       Growing Singapore’s pool of security experts in collaboration with educational institutes and industry partners.

Another key related and exciting development is the scheduled opening of Interpol’s Global Complex for Innovation in Singapore later this year. This impressive new building will act as a cutting-edge research and development facility as well as providing innovative training, operational support and partnerships. This is Interpol’s first HQ outside of Lyon, France.

Datonomy will be watching the emergence of the results of such innovation with interest since they could well prove useful in Europe as the debates continue there.

Posted on behalf of Elle Todd, partner, Olswang Asia

A budget for Big Data

Claire Walker - March 20th, 2014
Claire Walker

Even Datonomy is jumping on the Budget news bandwagon this morning, having spotted the Chancellor’s announcement that the Government is to invest £42m in Big Data and algorithm research. The new institute will be named in honour of wartime code breaker Alan Turing. Meanwhile, Datonomy’s tax colleagues have been analysing the wider business impact of the Chancellor’s other announcements – you can find their Budget Blog at this link, and see this link for the BBC’s coverage of the new Alan Turing Institute.

UK ICO’s new approach to public complaints

Katharine Alexander - January 29th, 2014
Katharine Alexander

The ICO recently announced “subtle but significant” changes in its approach to data protection complaints about businesses made by the public. Consumer facing brands will want to stay on the right side of the law anyway – what will the changes mean in practice, and when does a business run the risk of enforcement action?

The ICO has launched a Consultation entitled ‘our new approach to data protection concerns’, running from 18 December 2013 to 31 January 2014, seeking to collect the views of ICO-regulated organisations. The proposed changes are planned to take effect from 1 April 2014.

Why is the ICO’s approach changing?

In 2012/13 the ICO received 40,000 written enquiries or complaints, and 214,000 phone calls, from members of the public. In only 35% of these instances had data protection legislation actually been breached. The ICO is therefore encouraging individuals to address their concerns to the organisation complained about. Its approach to data protection concerns is being streamlined in a bid to allow the regulator to focus on serious contraventions and on repeat offenders who breach the legislation.

When will the ICO take action in response to a complaint?

Businesses still need to take care. Once an individual has raised a complaint with an organisation, if they are not satisfied with the outcome, they may still send their complaint, and the organisation’s response, to the ICO. The ICO will keep a record of complaints in order to identify emerging patterns and act on them. If the organisation complained of is a repeat offender, or the breach is serious, enforcement action will still be taken.

What does this mean for responsible brands?

This is therefore good news for compliant organisations, with existing systems in place to respond to queries and resolve complaints, as not much will have to change. In addition, any positive initiative or strategy used by an organisation may be published on the ICO website.

However, this does not mean that businesses can be blasé. Subject access requests and data protection complaints are often a symptom of wider customer dissatisfaction. It must not be forgotten that, in today’s world, enforcement comes not only from the ICO but also in the reputational damage caused to brands by individuals complaining through social media. In some instances, this could be more far-reaching than formal enforcement action by the ICO. Reputational damage will be further cemented by the ICO publishing the number of complaints made about an organisation on its website.

Organisations with an opinion on this matter have until 31 January to respond to the ICO’s consultation. Following the consultation, the ICO’s new approach will take effect on 1 April 2014.

Recap on UK enforcement powers and enforcement policy

Even though the changes to complaint handling may not be big news for the majority of companies, it may be helpful to recap on the circumstances when the risk of enforcement could arise. The ICO has no powers to award compensation to the public, but can take a range of enforcement actions against organisations.

Details of ICO enforcement can be found here, and Datonomy previously highlighted the changes to the ICO’s policy last year. According to the ICO, it has served over 5,000 decision notices since 2005 and published 27 undertakings in 2013. It may also impose fines of up to £500,000 in the most serious cases, to act as ‘both a sanction and a deterrent’ (according to its enforcement policy).

In order to impose a monetary penalty, the ICO must be satisfied that:

  1. there has been a serious contravention of section 4(4) of the Act (i.e. of one of the data protection principles) by the organisation,
  2. the contravention was of a kind likely to cause substantial damage or substantial distress, and
  3. the contravention was either deliberate, or
  4. the organisation knew or ought to have known that there was a risk that the contravention would occur, and that it would be of a kind likely to cause substantial damage or substantial distress, but failed to take reasonable steps to prevent it (i.e. it was reckless).

The ICO enforcement policy can be found here. It explains that although action is sometimes taken as a result of an individual’s complaint, the initial drivers also include issues of general public concern, concerns of a novel or intrusive nature, and concerns which become apparent through the ICO’s other activities. It also sets out criteria for when the ICO is likely to take action. These factors include:

  • the scale of detriment the breach is likely to cause to an individual,
  • the number of individuals adversely affected,
  • whether enforcement action could stop an ongoing adverse impact, and
  • whether the attitude and conduct of the organisation in question suggests a deliberate, wilful or cavalier approach to data protection issues,

amongst others.

As indicated by the enforcement section of its website, the ICO is transparent about its enforcement action, in line with the first Guiding Principle of its enforcement policy. The threat is therefore not only a potential pecuniary penalty but, in some cases more crucially, reputational damage to a company. In addition to the enforcement notices and undertakings detailed above, the ICO will further ‘name and shame’ organisations with poor data protection practices by publishing the number of complaints made about an organisation.

The wider context

The Consultation comes at a time when data protection is a hot topic. 2013 saw cyber security continue to be a global concern, with many hacks (for example at Target and Adobe) obtaining personal data including individuals’ credit and debit card details. Both cyber-security and data protection legislation are being discussed at EU level. In recent weeks we have seen regulators in France, Spain and the Netherlands impose the maximum fines available on Google over its 2012 privacy policy changes – but these are a drop in the ocean for a company like Google. Datonomy waits with interest to see what formal action the UK’s ICO takes against Google.

Larger fines are on the horizon. The new EU privacy laws, which have now been delayed, could enable data protection authorities to fine companies the greater of €100 million or two (or even five) per cent of global revenue.

With its new approach to complaints from data subjects, far from loosening its grip on data protection enforcement, the ICO is simply targeting its action at breaches by bigger players. The moral of the story? Ensure your organisation has good data protection and information rights practices, and keep your customers happy.

Ross McKean

The EU’s ambitious plans for a radical overhaul of data protection laws have suffered a serious setback. EU justice commissioner Viviane Reding finally conceded, in a speech at a meeting of EU justice and home affairs ministers in Athens last week, that the draft General Data Protection Regulation will not be agreed during the current term of the EU Parliament.

The most recent delay has been caused by the EU Council of Ministers failing to reach agreement before starting negotiations with the EU Parliament and the Commission, with several Member States demanding significant changes to the proposals.

New timetables have been proposed and optimistic statements made that there will still be a new data law by the end of this year.  However, the reality is that any prediction about the substance or process to agree the draft Regulation post this May’s parliamentary election season is guesswork at best.   Fundamental differences remain among Member States as evidenced by the failure of the Council to reach consensus to start formal negotiations.  In addition, new MEPs will have their own political agendas and priorities and may be wary of becoming embroiled in the long running saga of the draft Regulation. 

The proposals have proved to be some of the most controversial to come out of the Brussels legislative machine in years, with over 4,500 amendments proposed to the original text. The negotiations to date have consumed vast resources and exhausted goodwill. Even in Member States which have historically been seen as backers of the proposals, support is waning. Poland’s Inspector General for the Protection of Personal Data, Wojciech Wiewiorowski, said last week that support in Poland is dropping because the Regulation, announced two years ago by the Commission, is taking too long.

Germany, regarded by many as setting the high water mark for data protection, although supportive of the proposals, wants to see a broad carve-out for the public sector to ensure that German authorities can continue to collect and process personal data without having to comply with the uniform standards.  Germany’s stance has been criticised by German MEP Jan Albrecht and rapporteur for the draft Regulation who said “obviously the German government is against European-wide common rules.  This behaviour is irresponsible against the EU citizens.” 

Irresponsible or not, Germany’s position demonstrates the challenge that the EU Council and a newly constituted Parliament will face in reaching agreement on the text of the Regulation. If data protection friendly Member States such as Germany cannot be persuaded to support the proposals, what prospect is there of building consensus among all 28 Member States? The UK, Denmark, Hungary and Slovenia are calling for the proposals to be watered down and recast as a Directive, which would afford individual Member States more discretion in interpreting the new requirements.

Albrecht must be concerned that his magnum opus may never become law.

Google loses crucial jurisdiction battle in the UK

Ross McKean - January 17th, 2014

As Datonomy reported (see below), Google has been fined by the French and Spanish data protection authorities following almost two years of toing and froing with European data protection regulators over its consolidated privacy policy. The tiny fines are unlikely to change Google’s privacy practices.

However, Google now has a larger headache to deal with following the judgment of Mr Justice Tugendhat in the English High Court, handed down yesterday (Judith Vidal-Hall and Others v Google Inc in the Queen’s Bench Division, Case number: HQ13X03128). The claimants, represented by Olswang, the law firm behind Datonomy, are a group of users of Apple’s Safari internet browser.

The Safari users group claim that Google Inc illegally tracked and gathered information about their browsing activities by implementing a workaround to the default Safari browser block on third party cookies. 

Under the Civil Procedure Rules (the procedural rules for parties to civil litigation in the English courts), the claimants needed the permission of the High Court to serve proceedings out of the jurisdiction on Google Inc, a Delaware-incorporated company. Google Inc applied in August 2013 for an order declaring that the English courts had no jurisdiction to try the claims and to set aside the service of the claim form.

The High Court disagreed, finding in favour of the UK claimants: it held that there was a serious issue to be tried in relation to claims for misuse of private information and various breaches of the Data Protection Act 1998, and that the claimants had established that the UK is the appropriate jurisdiction in which to try the claims.

Although no conclusive view was given on the merits, Google Inc now faces the prospect of defending a claim in the English High Court which, in contrast to data protection regulators, enjoys considerably more firepower to impose remedies and ensure that judgments are complied with. Any award of damages, even if relatively small, could result in a significant liability for Google when multiplied across the millions of Safari users in the UK.

Embattled Google continues to defend its privacy policy

Blanca Escribano - January 17th, 2014


Almost two years have passed since Google introduced controversial changes to its privacy policy in March 2012, merging more than 60 separate policies for Google’s numerous services into a single privacy policy. Since then European data protection regulators, initially through the Article 29 Working Party and more recently through a task force of data protection authorities from six Member States (the UK, France, Germany, Italy, Spain and the Netherlands), have demanded that Google take steps to bring its new policy into line with European data protection laws. There has been much rattling of regulatory sabres and, for the most part, nonchalant shrugs from the Mountain View based tech giant, which has responded to the coordinated regulatory offensive by saying that its new policy “respects European law and allows us to create simpler, more effective services.”

The Spanish and French data protection watchdogs have now taken matters further by imposing formal sanctions on Google Inc, fining the company EUR 900,000 and EUR 150,000 respectively for breaching Spanish and French data protection laws.

For an organisation that reported revenues of 50 billion dollars in 2012, these fines are minuscule and highly unlikely to have any effect on Google’s privacy practices. The CNIL also required Google to publish a notice of the CNIL’s decision on its French search landing page for 48 hours. This may have been a rather more effective deterrent to dissuade Google from continued non-compliance with French data protection laws, given the sanctity of its search landing page and its prominence to French Google users. However, Google announced on Monday this week that it has appealed the decision of the CNIL, which means that the requirement to publish the notice is likely to be suspended pending the outcome of that appeal.

The UK’s data protection authority, the ICO, after saying in July last year that “failure to take the necessary action to improve the policies’ compliance with the [UK] Data Protection Act by 20 September will leave [Google] open to the possibility of formal enforcement action”, has yet to make any further announcement. Requiring Google to post a notice on the UK search landing page would be a first for the ICO, and would almost certainly be appealed by Google. However, fines alone are unlikely to change Google’s behaviour, so regulators will need to think more creatively about effective remedies.

Datonomy will be keeping a close eye on the next moves in this game of regulatory cat and mouse.

New UK guidance on making mobile apps privacy compliant

Katharine Alexander - January 6th, 2014

With privacy and security concerns about apps regularly in the headlines, developers and brands commissioning mobile apps should factor in the important new guidance issued recently by the ICO. The guidance and practical illustrations are also relevant to other online platforms e.g. smart TVs and games consoles.

The Information Commissioner’s Office (ICO) has recently released guidelines for app developers to help them ensure apps comply with data protection laws. The guidance was released in the run-up to Christmas – when app sales soar (the ICO cites the statistic of 328 million apps downloaded in the UK on Christmas Day 2012). The guidance is timely, with privacy a worldwide concern: in the US, the SpongeBob Squarepants app and Jay-Z’s Magna Carta app are two recent examples which have attracted adverse attention over alleged lack of privacy compliance, while in the UK security vulnerabilities in the SnapChat app have been in the news. With app-based games aimed at children currently under the scrutiny of the UK consumer enforcement authorities (see this article), the regulation of apps looks set to continue to be a hot topic in 2014.

Why the new guidance? Who needs to comply?

Launching the new guidance, the ICO cites a survey (of 2,275 people) by YouGov, which showed that 62% of app users are concerned about data security, and 49% have decided not to download an app due to data privacy worries. As described by Simon Rice (Group Manager at the ICO for the technology team) in his blog, these statistics demonstrate that compliance with the guidance makes commercial sense for app developers, as well as reducing legal risk.

The ICO’s guidance emphasises the need for privacy to be factored in at the design stage – and not just an afterthought addressed in a privacy policy. The Data Protection Act 1998 is technology neutral, and applies just as much to online activities such as apps as to offline data collection. What is valuable about the ICO’s very practical new guidance – and the numerous worked illustrations which run through it – is that it applies the principles of the DPA very specifically to the mobile app context. The document seeks to address some of the particular challenges of privacy compliance for apps – including space constraints, and the range of personal data to which apps typically have access which makes privacy such a concern, such as access to a user’s location, the microphone, emails, SMS and contacts.

Datonomy readers may recognise that the ICO’s guidance is a more user-friendly version of the 30-page Opinion published in February 2013 by the EU’s Article 29 Working Party (the body made up of national data protection regulators). That Opinion looked not only at compliance issues for developers, but also for OS and device manufacturers, app stores and other parties in the app ecosystem.

As Datonomy readers will be aware, the ICO guidance does not have the force of law, but is in effect a benchmark for compliance with existing rules in a particular context. With such targeted guidance available, it will be more difficult for organisations deploying apps to plead ignorance of their legal obligations.

All organisations and individuals involved in the development and use of apps should review current and new apps for privacy compliance in the light of the new guidance. Aspects of the guidance – particularly in relation to providing information and gaining consent – will also resonate with other online services, such as games consoles and smart TVs.

As with all data protection issues, a party’s exact compliance obligations will depend on understanding exactly what personal data is involved, who is the data controller and what the risks to individuals’ privacy are. Developers and commissioners therefore need to consider these issues at the design stage in order to minimise and manage their legal risk – and preserve the commercial value of customer data collected.

Basic DP concepts applied to the app ecosystem: personal data; data controllers

The most fundamental question is what – if any – personal data the app is processing. Personal data is anything which can identify, or together with another piece of information to hand can identify, a living individual. In the mobile environment, this can extend from something obvious such as a name or address, to something more specific such as a device IMEI number. The guidance gives useful illustrations and suggestions for data minimisation in order to reduce risk.

The next key issue is to identify the data controller (or data controllers) in the particular app scenario, since legal liability rests with them. This is the person or organisation who decides how personal data is dealt with. The guidance provides useful analyses of who may be the data controller in various scenarios, including social media apps, reviews and ad funded games. This will always be fact dependent. The guidance includes a reminder that the data controller(s) will be subject to the full range of normal DPA obligations e.g. registration with the ICO; transparency information; and the requirement to respond to data subject access requests. Where personal data is shared with another entity which processes it on the controller’s behalf, the normal requirements for minimum contractual protections apply. They must also be careful to demonstrate adequate protection when transferring data outside of the EEA.

What data to collect

The guidance on collecting data via apps includes:

  • only collect the minimum data necessary for the app to perform its function;
  • never store data for longer than necessary;
  • pay extra attention if the app is aimed at children not old enough to understand the significance of providing personal data;
  • allow users to permanently delete their personal data and account; and
  • ensure you have informed consent to collect usage or bug report data, or otherwise use anonymised data – and even when anonymising, first collect only the minimum data necessary, then anonymise from there.

The ICO recommends data controllers use a privacy impact assessment to ensure compliance.
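The advice above on usage and bug-report data – collect the minimum, then anonymise – can be sketched in code. This is only an illustration: the field names below are hypothetical and not taken from the ICO document. The idea is an allow-list that keeps only what is needed to diagnose a problem:

```python
def anonymise_bug_report(report: dict) -> dict:
    """Keep only the fields needed to diagnose a problem; drop identifiers."""
    allowed = {"app_version", "os_version", "stack_trace"}  # minimum necessary
    return {key: value for key, value in report.items() if key in allowed}

raw = {
    "app_version": "2.1.0",
    "os_version": "7.0",
    "stack_trace": "NullPointerException in PaymentView",
    "device_imei": "356938035643809",  # personal data - must not be retained
    "user_email": "user@example.com",  # personal data - must not be retained
}
clean = anonymise_bug_report(raw)
assert "device_imei" not in clean and "user_email" not in clean
```

An allow-list (rather than a block-list) fails safe: any new field added to the report is dropped by default until someone consciously decides it is necessary.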

Informing users and gaining consent – good practice for privacy notices

Complying with the DPA Principles on information and consent poses particular challenges in the mobile environment, where space constraints and consumers’ expectations of convenience and user friendliness make it impracticable to provide detailed privacy notices. To meet these challenges, app developers should:

  • use plain English;
  • use language appropriate for the audience (e.g. children);
  • clearly explain the purpose of collecting the personal data;
  • make privacy information available as soon as possible before the app begins to process personal data; and
  • use a layered approach – detailing the key points in summary, with access to more detail if the user wants it. A privacy policy contained in one large document can be difficult for a user to read on a small mobile screen.

The guidance provides a number of very useful, short, privacy notices which illustrate how information and consent requirements can be complied with, despite the challenges.

The guidance also gives more specific advice, such as:

  • use colour and symbols;
  • highlight unexpected or onerous actions and highlight differences between platforms;
  • make use of just-in-time notifications, which are provided immediately before the data is processed, for example when requesting to use someone’s location for GPS, or when using new features of an app for the first time; and
  • ensure consent is obtained if the app passes data on to any other organisations, make it clear if the app is supported by advertising, and give information on any analytics used.

It is always important to be as clear and transparent as possible. However, there is no need to state the obvious to a reasonably informed user. The ICO uses the example of an app used to deliver orders – the need for a delivery address is obvious. The ICO also states that if the information is given in the app store, there is no need to repeat it at a later stage (unless onerous or unexpected, as above).

Users should also be given an element of control over the use of their data – with granular options, and ensuring it is easy to review and change personal data settings from one obvious settings location.

Good data security practice for apps

The two pages devoted specifically to security highlight that developers should adhere to up-to-date good security practice both in the design of the app and of the central servers the app communicates with, and include the following recommendations:

  • ensure passwords are appropriately salted and hashed on any central server;
  • use encrypted connections for usernames, passwords, sensitive information;
  • use tried and tested cryptographic methods;
  • avoid writing new code where well-established implementations can be used instead; and
  • take particular care where the app accesses data from other apps or locations.

The guidance also highlights examples of vulnerabilities specific to mobile apps, for example inter-app injection flaws, and failing to check or misconfiguring SSL/TLS.
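The ICO guidance does not prescribe a particular algorithm, so the following is only an illustrative sketch of the “salted and hashed” recommendation using Python’s standard library, with PBKDF2 standing in for whichever tried-and-tested scheme is actually adopted:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt; store both values server-side."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes,
                    iterations: int = 200_000) -> bool:
    """Recompute the hash and compare in constant time to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

A fresh random salt per password defeats precomputed lookup tables, and the constant-time comparison avoids leaking information through response timing – both standard ingredients of the “tried and tested cryptographic methods” the guidance calls for.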

Other important legal compliance issues

In addition to compliance with data protection principles, the guidance provides a helpful checklist of the consumer protection rules with which app developers must also comply.

Datonomy comment

As the ICO’s press release reminds us, “compliance is not a bolt-on included in the final phase of a product’s development, but is an aspect of an app’s design that should be considered at the start of the process”.

Datonomy agrees – and the ICO’s targeted guidance and illustrations are certainly a step in the right direction. Datonomy readers may also be interested in this recent article by our colleague Matt Pollins which looks at the wider legal landscape for the growth of the app.

Yesterday (12/12/2013), a serious blow was dealt to one of the fundamental building blocks establishing the legal framework for retention of data for law enforcement across Europe.  Advocate General Pedro Cruz Villalón (AG) at the Court of Justice of the European Union (ECJ) delivered an opinion stating that the Data Retention Directive (DRD) is, as a whole, incompatible with the individual’s right to privacy in the Charter of Fundamental Rights of the European Union. The opinion has potentially profound implications for law enforcement agencies and for service providers subject to the retention requirements across Europe. The opinion is here.


The DRD requires Member States to implement laws requiring telephone or electronic communications service providers to collect and retain traffic data, location data and the related data necessary to identify the subscriber or user of the services “in order to ensure that the data is available for the purposes of the investigation, detection and prosecution of serious crime” (Article 1(1) of the DRD). Providers are not required to collect and retain content data, i.e. the content of the communications themselves. Member States are required to ensure that the data is held for periods of not less than six months and not more than two years from the date of the communication. Only competent national authorities are to be permitted access to the data. For more information about data retention requirements, go here.
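The retention window itself (not less than six months, not more than two years) is a simple numeric constraint; as a purely hypothetical sketch, a provider’s compliance tooling might validate a configured retention period like this:

```python
def retention_period_valid(months: int) -> bool:
    """DRD window: not less than six months, not more than two years (24 months)."""
    return 6 <= months <= 24

assert retention_period_valid(12)       # within the window
assert not retention_period_valid(3)    # shorter than the six-month minimum
assert not retention_period_valid(36)   # longer than the two-year maximum
```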

Key takeaway for service providers

Service providers should watch this space and keep their own compliance programmes under review. For service providers wrestling with retention requirements, the opinion means that doubt will remain about the correct way to build a compliance programme. If the ECJ agrees with the AG, new legislation would need to be developed though the practical impact on service providers with respect to the types of data to be collected and any reduction in retention periods is unclear.

What did the AG say?

  • The AG considers that the purposes of the DRD are legitimate.
  • However, the AG is concerned that the retained data will include a great deal of information about an individual’s private life and identity, and that there is a risk the data may be used for unlawful purposes. The risk may be greater because the data is retained and controlled not by the competent national authorities but by the providers, and the providers are not required to retain the data within the relevant Member States.
  • The AG said that the DRD does not provide minimum guarantees for access to the data and its use by the competent national authorities. (i) A more precise definition of “serious crime” would help to define when competent authorities are able to access the data. (ii) Access should be limited to judicial or independent authorities; any other access requests should be subject to review by judicial or independent authorities so that access is limited to only the data that is strictly necessary. (iii) Member States should be allowed to prevent access to data in certain circumstances, e.g. to protect individuals’ medical confidentiality. (iv) Authorities should be required to delete the data once used for the relevant purposes. (v) Authorities should be required to notify individuals of the access, at least after the event once there is no risk that the purpose for accessing the data would be compromised.
  • Finally, the AG said that he could not find sufficient justification for not limiting the data retention period to one year or less.

What does this all mean?

  • For now the existing requirements remain, though they may be subject to review. The AG’s opinion is not binding on the ECJ or indeed on any Member State. Nevertheless, the opinion carries weight, and in many cases the ECJ has gone on to follow opinions delivered by the AG. The Judges of the ECJ are still deliberating and judgment will be given at a later date.
  • The AG also proposed that the effects of declaring the DRD invalid should be postponed, so even if the ECJ agrees with the AG, it could allow the EU legislature a reasonable period to adopt remedying measures so that the DRD is no longer incompatible with the Charter of Fundamental Rights.

Anne Brandenburg

After lengthy discussions, the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) agreed this Monday (22 October 2013) on a compromise text of the draft General Data Protection Regulation (“GDPR”). The proposal still has a mountain to climb as opinions between the different EU institutions remain deeply divided. However, Monday’s vote is significant as it gives the European Parliament (“EP”) a mandate to start the next phase of negotiations with Member States.

The GDPR was published by the European Commission 21 months ago in January 2012 and has proved to be one of the most controversial proposals ever to come out of Brussels, with lobbyists proposing over 4,000 amendments to the Commission’s text.


The compromise text was adopted by the LIBE Committee on a 49-1 vote with three abstentions. The EP’s press release is here and includes some radical proposed changes to the Commission’s draft.

Datonomy has taken a look at some of the key proposed changes which include the following:

  • Territorial Scope: Under the draft GDPR, the Regulation applies to all processing of personal data “in the context of the activities of an establishment of a controller or a processor in the Union” and to the activities of controllers not established in the Union where the processing activities relate to offering goods or services to data subjects in the EU or to monitoring their behaviour. The EP’s compromise text seeks to extend the reach of this provision in two ways: it adds the passage “whether the processing takes place in the Union or not”, and makes clear that the targeting of EU citizens is caught even where no payment is required for the goods or services offered. With this amendment, the Parliament seeks in particular to cover data processing activities that take place in the cloud and/or overseas.
  • Fines: The Parliament harmonised the fines for violations of the GDPR. Under the Commission’s draft, fines could amount to between 0.5% of an enterprise’s annual worldwide turnover or EUR 250,000 and 2% of annual worldwide turnover or EUR 1,000,000, depending on the provisions breached. The compromise text harmonises those categories and increases the maximum fine for GDPR breaches to 5% of an enterprise’s annual worldwide turnover or EUR 100,000,000 – whichever is greater.
  • Right to be forgotten and erasure: The controversial right to be forgotten is endorsed, and reinforced with an obligation on data controllers to take all reasonable steps to have that data erased by third parties. A right to have data erased following a court order is also added.
  • One-stop-shop: The compromise text confirms the one-stop-shop principle of the GDPR which provides that only the data protection authority of the country in which the business is located is competent for supervising such businesses’ data protection activities. On the other hand, data subjects have the right to lodge a complaint with a supervisory authority in any Member State if they consider the processing of their personal data is not in compliance with the GDPR.
  • Certification: According to the compromise text, controllers and processors within and outside the EU may ask any supervisory authority within the EU to certify that their processing of personal data complies with the GDPR. If this is the case they will be granted a “European Data Protection Seal” which allows for data transfers between businesses with such a seal even if one of them is based in a country that does not have an adequate level of data protection.
  • Data protection officer: The compromise text changes the requirements for appointing a data protection officer (“DPO”). While the draft GDPR required a DPO if an enterprise has 250 or more employees that carry out processing of personal data, the compromise text only relates to the number of data subjects concerned and requires the appointment of a DPO if personal data of more than 5,000 data subjects are processed in any consecutive 12-month period. Furthermore, the compromise text requires a DPO where the core activities of the controller or processor consist of the processing of sensitive personal data, location data or data on children or employees in large scale filing systems.
  • Breach notification: The good news is that the compromise text widens the time frame within which a personal data breach must be reported to the supervisory authority, from 24 hours to “without undue delay”.
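The fines formula in the compromise text – the greater of 5% of annual worldwide turnover or EUR 100,000,000 as the maximum possible fine – reduces to a one-line calculation. The turnover figures below are hypothetical illustrations:

```python
def maximum_fine_eur(annual_worldwide_turnover_eur: int) -> float:
    """Cap under the compromise text: the greater of 5% of turnover or EUR 100m."""
    return max(0.05 * annual_worldwide_turnover_eur, 100_000_000)

# Hypothetical enterprise with EUR 50bn turnover: 5% (EUR 2.5bn) exceeds the floor.
assert maximum_fine_eur(50_000_000_000) == 2_500_000_000
# Hypothetical enterprise with EUR 1bn turnover: 5% is only EUR 50m, so the
# EUR 100m figure applies instead.
assert maximum_fine_eur(1_000_000_000) == 100_000_000
```

The “whichever is greater” wording matters: for large multinationals the percentage dominates, while for smaller enterprises the fixed EUR 100m figure sets the ceiling.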

Procedure and what is next

The LIBE Committee’s vote gives lead Rapporteur Jan Philipp Albrecht a mandate for negotiations with the Council to reach agreement on the final wording of the GDPR, negotiations which are intended to conclude before the EU Parliament elections in May 2014. The next meeting of the Council’s Justice Ministers on the data protection reform will take place on 6 December 2013, and an indicative plenary sitting of the Parliament is scheduled for 11 March 2014.

It is expected that during the inter-institutional negotiations, the compromise text will be further amended, as certain aspects of the current version seem too radical to be supported by the Council (fines, for example, will probably be one of the parameters to be adjusted).

Accordingly, despite the LIBE Committee’s vote, there is still a long way to go before the new GDPR is formally adopted, and it remains to be seen what the final detail of the reforms will look like, and whether the Commission will achieve its aim of getting the measure adopted at EU level before the European elections in the spring. Datonomy will continue to monitor progress and keep its readers updated on the future development of data protection in Europe.

Katharine Alexander

With recent reports of ever more daring cyber-attacks on the banking system, and claims that cyber criminals are exploiting weaknesses in the supply chain to hack major corporations, Datonomy looks at the current EU proposals on reporting security incidents which are aimed at tackling the problem – and the concerns and flaws identified by industry and by legislators.

What’s new? Some recent developments on the NISD

Datonomy readers will be familiar with the proposal for a new EU Directive on Network and Information Security (NISD) unveiled by the Commission in February, and set for its first reading in the European Parliament in early 2014. The aim of the new measures is to boost security by imposing new standards, and auditing and reporting requirements on market operators – including key infrastructure providers (e.g. energy companies) and, more controversially, ecommerce platforms and social networks.

Our earlier summary of those proposals can be found here. But what do organisations in those sectors think of the proposals? To inform its negotiating stance in Brussels, the UK Government has been taking soundings: from May to June, the Department for Business, Innovation & Skills (BIS) held a Call for Evidence seeking information about the impact the NISD could have on UK stakeholders, and on 6th September BIS published a summary of the responses.

The consultation drew responses from 88 organisations in the various sectors targeted by the new rules, including Media, Banking, Transport, Energy, Health, Telecommunications, Providers of Information Society Services, Aerospace and Defence. Their concerns and comments make interesting reading for other organisations in those sectors who are keen to future-proof their systems (and supply chain arrangements) for potential new obligations and sanctions – and, of course, for compliance and cyber security professionals, whose services will be in even greater demand if the proposed regime comes into force.

Over the summer the draft Directive has also come under scrutiny from the EU institutions, with many of the same concerns echoed in the draft report by Andreas Schwab, the proposal’s Rapporteur, and during a debate in the European Council in June.

In addition, earlier this month, a major briefing note was published by one of the European Commission’s DGs. Datonomy readers with an appetite for all 172 pages of this report will find analysis of security breach trends, as well as further critique of the NISD’s proposals.

What are the key issues and concerns?

The overarching theme in all these documents is scepticism about whether the proposed breach notification requirements are proportionate or indeed effective in terms of encouraging information sharing and/or reducing organisations’ vulnerability to attack.

The BIS Summary of Responses categorised evidence into 14 key aspects of the Directive. To spare Datonomy’s busy readers having to read all 49 pages, some of the main themes stemming from the responses are as follows:

EU harmonisation – but what about the global picture?

Whilst there was overall support for a harmonised and non-fragmented regime for cyber-attack reporting, many respondents stressed the need for the measures to stretch further than the EU, as cyber security is a global issue.

Which businesses would be caught?

The proposed reach of the new obligations is one of the most controversial dimensions of the NISD. Stakeholders sought clarification of the ‘extremely broad’ definition of ‘market operators’ in Annex II of the Directive, and of why these sectors have been targeted. This was foreseen by the European Council in June 2013, which perceived the need for ‘detailed discussions’ on the definition of ‘market operators’. In general, stakeholders wanted the scope to be narrowed, so that businesses that do not in fact have an impact on critical infrastructure are not unintentionally (and unnecessarily) caught. Schwab agreed, suggesting the scope should be ‘limited to infrastructures that are critical in a stricter sense’, and consequently proposed removing providers of ‘information society services’ (ISS) from the obligations; the focus should remain on energy, transport, health and finance.

The current draft of the NISD includes all the following players within a non-exhaustive indicative list of ISS: ecommerce platforms; internet payment gateways; social networks; cloud computing services; and app stores. It is the inclusion of this potentially diverse range of businesses that has attracted the most criticism. Objections include the complexity of Internet and cloud value chains; the risk of generating data which is disproportionate to the benefits to be gained; and the stifling of innovation. One stakeholder, however, argued that the ambit should be wider – and that software developers and hardware manufacturers should not escape the new obligations.

Mandatory vs. voluntary reporting of cyber incidents

The strongest recurring theme was hostility to mandatory reporting. The Explanatory Memorandum for the proposed Directive argues that ‘the current situation in the EU reflecting the purely voluntary approach followed so far, does not provide sufficient protection against NIS incidents and risks’. However, stakeholders object to the idea of mandatory reporting for a number of reasons.

Firstly, many organisations already have reporting mechanisms in place, so insisting on further mandatory reports was perceived as creating unnecessary work and the potential for duplication in reporting, and would therefore be inefficient. Comments included:

‘within the UK there are already a number of effective information sharing forums, both formal and informal, which should be encouraged and not subject to greater regulatory pressure’.

The report from the Economic and Scientific Policy Department of the European Parliament, on behalf of the Committee for Industry, Research and Energy, states that the obligations burden those ‘already talking to regulators and perhaps already sharing certain types of cyber security information as part of their obligations towards sector-specific regulators’.

Schwab’s report also addressed this point, stating that the proposal for National Competent Authorities ‘does not adequately take into account already existing structures’, and that the designation of more than one competent authority per Member State should therefore be allowed.

Moreover, stakeholders would prefer a voluntary, trust-based approach to reporting NIS incidents. They fear that a mandatory obligation would actually decrease the number of notifications and encourage a ‘tick-box’ mind-set and a ‘compliance culture’. One stakeholder said:

‘…it’s vital that the companies do not adopt a ‘tick-box’ approach to security and understand that truly effective cyber security is a combination of having the right people, processes and technologies in place’. 

The Debate in Council also addressed ‘why a legislative, rather than a voluntary approach’ was being used, and the fact that Member States required further justification of this.

Another criticism levelled at the Directive is that mandatory reporting would penalise and disincentivise organisations with more advanced NIS systems, which by definition will detect, and therefore need to report, more attacks.

Schwab also commented on this, stating that ‘potential sanctions should not disincentivise the notification of incidents and create adverse effects’; therefore, where a market operator has failed to comply with the Directive, but neither intentionally nor through gross negligence, there should be no sanction.

When should notification be triggered? The meaning of ‘significant’ – a sectoral test?

Stakeholders identified the threshold for triggering the reporting obligation as another key measure in the Directive, and sought clarification of the meaning of ‘significant’. Without such clarification, stakeholders could not assess the impact the Directive would have on their businesses.

‘Significant’ is too broad a term; one stakeholder suggested narrowing the definition to ensure a breach would have to be ‘an incident that is not a routine or accidental breach of information technology compliance management policies but is anomalous and has the ability to create significant harm’. However, to exclude accidents would be to invalidate the aim of the proposed Directive given in the explanatory memorandum, which references the increase in the number and severity of incidents, including human mistakes. Schwab suggests adding a clear criterion for incidents which must be reported, which, if taken into account, and depending on the definition, may help resolve some concerns.

In addition, stakeholders thought that the definition of a ‘significant impact’ should be determined sector by sector, in order to ensure that ‘thresholds to trigger reporting of incidents are appropriate to the sector’.  

Yet more new regulatory bodies?

Particularly with regard to developing a Computer Emergency Response Team (CERT) and a National Competent Authority (NCA), stakeholders were concerned about the framework being too slow, especially considering that it took three years for the US CERT to have effect. There are also concerns that introducing another regulator could add more ‘confusion and complexity’ to the reporting process.

Stakeholders were also concerned that the NCA could publicise security incidents which had been reported, without the permission of the reporting organisation. Comments reflected concerns about loss of reputation, and the lack of an opportunity to remedy their systems. This may, again, act as a disincentive to voluntarily report breaches, alongside the ability of the NCA to impose sanctions.

What are the next steps?

The Call for Evidence has certainly given the UK Government plenty of food for thought as it prepares to negotiate in Brussels. BIS states that it may require further evidence from stakeholders in the future, in order to negotiate an instrument that ‘does not overburden business…; that encourages economic growth and innovation; and that fosters positive and sustainable behaviour change’. Therefore, businesses in the affected sectors should look out for further opportunities to inform and influence these proposals. The UK already has a number of voluntary initiatives up and running as part of its 2011 Cyber Security Strategy.

The first reading for the Directive is scheduled for 4th February 2014, according to the Procedure File found on the European Parliament website here. If the initial responses from businesses and Parliament and the Council (the institutions with power to determine the fate of the proposals) are anything to go by, the Directive has a long way to go before it is adopted.

There is no denying that cyber security is an issue. In the last few days alone, this Datonomist has been reading coverage of a cyber-plot to ‘steal millions of pounds by hijacking London high street bank’s computers’ (four men are appearing in court on 27th September as a result), and a report by the insurer Allianz about how hackers are accessing the computer systems of large corporations via access to their smaller suppliers.

Will the mandatory auditing and reporting requirements in the Directive ever become law, and if so who will they apply to? It is too early to say for sure. But in the meantime, security incidents which are getting ‘bigger, more frequent, and more complex’ will surely focus minds on improving information security throughout the supply chain – won’t they?


Carsten Kociok

Datonomy considers the German authorities’ reaction to the PRISM affair, and the wider practical consequences this could have for international transfers being made under the auspices of U.S. Safe Harbor and model contracts.

After reports about extensive surveillance activities by foreign and European intelligence services, especially the American National Security Agency (NSA) and the UK Government Communications Headquarters (GCHQ), and possible transfers of personal data to them by American companies, European data protection authorities are raising their voices. In a letter dated 13 August 2013, the chairman of the Article 29 Working Party expressed his deep concern to the Vice-President of the European Commission, Viviane Reding, urging her to seek more clarification from the U.S. and announcing the intention of the European data protection authorities to conduct their own investigations into the compliance of foreign and European intelligence programmes with EU data protection principles. Concrete actions have, however, not yet been taken at a European level.

Germany – USA transfer: no new authorisations under Safe Harbor

In Germany, the data protection authorities have already gone a step further, or at least announced their intention to do so. “Data protection supervisory authorities will not issue any new permission for data transfer to non-EU countries (including for the use of certain cloud services) and will examine whether such data transfers should be suspended.” With this statement in a press release of 24 July 2013, the Conference of Federal and State Data Protection Commissioners in Germany attracted attention all over Europe. Rumours spread that the German authorities even wanted to “suspend” the EU – U.S. Safe Harbor agreement, which serves as a vital basis for transatlantic flows of personal data.

In their press release, the Federal and State Data Protection Commissioners call on the German government to provide a plausible explanation of how the unlimited access of foreign intelligence services to the personal data of persons in Germany is effectively limited in accordance with the principles of Safe Harbor and the standard contractual clauses for data transferred to countries outside the European Union. They also address the European Commission and demand a suspension of its decisions on Safe Harbor until further information and notice is provided.

A recap on adequacy – transfer of personal data to the USA

Under the Data Protection Directive 95/46/EC, the transfer of personal data by a European-based controller to a third country which does not ensure an adequate level of protection is prohibited. In order to ensure an adequate level of protection for personal data transferred to the U.S., the U.S. government and the European Commission developed the Safe Harbor principles (2000/520/EC) in 2000, which allow U.S.-based companies to take part in a self-certification programme supervised by the Federal Trade Commission (FTC). Participating companies must comply with several requirements regarding the processing of personal data; data transfers to these companies are then automatically deemed covered by an adequate level of protection. As an alternative to the Safe Harbor regime, the European data exporter and the U.S. data importer can agree on standard contractual clauses (annexed to Decision 2010/87/EU) previously published by the European Commission. By using these clauses, an adequate level of protection is also assumed. Permissions from national data protection authorities are generally not required in these cases.

Suspension of data transfers by national authorities

According to their press release, the German authorities will not “issue new permissions” for data transfers to non-EU countries and will examine whether such data transfers should be suspended on the basis of the Safe Harbor framework and the standard contractual clauses. This announcement deserves some clarification:

Firstly, it must be emphasised that the national data protection authorities may not suspend the Safe Harbor principles as a whole, or the underlying decision of the European Commission; that falls within the European Commission’s area of responsibility. Indeed, the Commission is currently undertaking an assessment of the Safe Harbor principles, and Ms. Reding expects a result by the end of this year.

However, the competent authorities within the Member States may exercise their existing powers to suspend data flows to a particular organisation that has self-certified its adherence to the principles, in order to protect individuals with regard to the processing of their personal data. Further requirements must also be fulfilled, such as a determination that there is a substantial likelihood that the principles are being violated and that the continuing transfer would create an imminent risk of grave harm to data subjects. The same basically applies to the standard contractual clauses.

Several uncertainties

The German national authorities regard these requirements as fulfilled. Hence, from their point of view, they may use their existing powers to suspend data flows to the U.S. However, whether the principles of Safe Harbor have really been violated is highly questionable (as the full details of the surveillance activities still remain hidden) and would have to be examined closely, especially by the European Commission and a special committee formed by representatives of the Member States.

Other European data protection authorities do not share the view of their German counterparts. The Data Protection Commissioner of Ireland, for example, does not believe that there are grounds for an investigation. In the UK, the Information Commissioner’s Office (ICO) commented on the Article 29 Working Party’s letter to the European Commission by saying that it is “taking a keen interest” in the issue, but has so far taken no concrete action. In Belgium, the Commission de la Protection de la Vie Privée (CPVP) has not yet published any statement in this respect.

German authorities practise what they preach

According to a statement of 25 July 2013 by the data protection commissioner of Berlin, Alexander Dix, the Commissioner is currently either not processing applications to authorise transfers to the U.S., or is requesting information from applicants as to the measures they take to prevent foreign intelligence services from accessing the information. The Commissioner hints, however, at the possibility that if a “U.S. provider offers encrypted means of storing data in a cloud, that would be a technical alternative to increase security”.

It must be kept in mind that a suspension of data transfers to a U.S. company could result in commercial disadvantages, and perhaps economic damage, for German-based companies which rely on transatlantic transfers of personal data to the U.S. The statement from the data protection commissioner of Berlin shows that data transfers may still be legally possible, but companies will have to make more effort than before to convince the authorities of an adequate level of protection.


German data protection authorities are living up to their reputation as tough privacy watchdogs in the European Union. Nevertheless, uncertainties remain as to whether suspensions of data transfers will in fact be imposed and whether they can legally be justified.

On another level, it will be interesting to observe if and how the German government and the European Commission respond to the demands of the European data protection authorities, and whether they will adapt or even suspend the existing rules of Safe Harbor or the standard contractual clauses. Finally, the ongoing examinations by the Art. 29 Working Party should be watched carefully, as their conclusions may well have an impact on international data transfers from the EU to the U.S. Whatever happens next, Datonomy readers should follow these developments closely, as the impact on international business must not be underestimated.

Rebecca Davis

On 3rd September, the new EU Directive 2013/40 on attacks against information systems came into force, requiring Member States to beef up national cybercrime laws and sentencing. The Directive updates and replaces the previous Framework Decision in this area and introduces new measures including criminal offences for attacks using malicious software, and increased sentencing of up to 5 years’ imprisonment for the most serious offences. The new measures are illustrative of the EU’s increasingly aggressive stance in tackling cyber-crime – but how different is the new legislation to that already in force? Datonomy explores.

Why the new Directive?

Last week on 3 September, the new EU Directive 2013/40 on attacks against information systems came into force. The Directive was proposed in 2010 as a replacement to the previous Framework Decision 2005/222/JHA, which criminalised various activities in relation to attacks on information systems, including illegal access to information systems, and illegal interference with systems and data. Following various high-profile cyber-attacks since the passage of the Decision, including a 2009 botnet attack that successfully infiltrated the computer systems of the UK, German and French defence forces, the EU was concerned that such existing legislation was inadequate to prevent cyber-crime and so considered further measures were required. 

What’s new?

The text of the new Directive is similar to the previous Decision, and contains almost identical offences in relation to illegal access to information systems and interference with systems or data. As in the Decision, there is again an offence for involvement in incitement, aiding, abetting or attempting such offences. “Information systems” is broadly defined to include any device or group of devices which automatically process computer data by means of a programme, as well as any data stored, processed, retrieved or transmitted by such device(s). The new Directive however now introduces new offences for “illegal interception” of non-public transmissions of computer data (Article 6), and for the production, sale, procurement for use, import or distribution of tools intended to commit cyber-crime offences (Article 7). The latter is primarily targeted at the use of botnets and malicious software, which the European Parliament highlighted as a particular concern in the Directive’s Preamble, citing the potential use of such tools to gain remote access to large numbers of computer systems and potentially cause significant disruption and damage. To support this, new penalties of up to 5 years’ imprisonment are introduced for the most serious systems or data interference offences, either where carried out within the framework of a criminal organisation, or where such attacks cause significant damage or affect key infrastructure. A new penalty of up to 3 years’ imprisonment is also introduced for such offences where carried out through the use of tools specifically designed for such purpose.   

In addition to more harshly penalising cyber-crime, the Directive also focuses on improving and strengthening police and judicial co-operation across the Union to counter such attacks. Citing both the frequently cross-border nature of cyber-crime, and the “significant gaps and differences in Member States’ laws and criminal procedures” in this area, the European Parliament has implemented a number of measures designed to facilitate more wide-scale reporting and enforcement. In addition to the pre-existing requirement that Member States implement national contact points in relation to cyber-security, Member States are therefore now also required to use the existing G8 and Council of Europe network of 24/7 contact points to help combat cyber-crime, and must respond within 8 hours to any urgent requests for assistance. They must further collect statistics and data on cyber-attacks, which will be transmitted to the European Commission for consolidated review and to help prevent such attacks in the future. 

How will UK law need to change?

Whilst many of the new measures have already been implemented in the UK through previous amendment to the Computer Misuse Act 1990 (“CMA”) in 2005, it is likely that the new Directive will require further changes to UK legislation before its deadline for transposition on 4 September 2015. Although the CMA already contains an offence in relation to the use of tools for the commission of computer misuse offences (under a new section 3A inserted under the Police and Justice Act 2006), for example, its current sentencing provisions run to a maximum of 2 years, and will likely need increasing to take into account the new penalties. Similarly, although the offence of illegal interception of telecommunications data is already provided for under section 1 of the Regulation of Investigatory Powers Act 2000 (“RIPA”), this only concerns data transmitted over a public information network and may therefore need amending to include transmissions over private networks. Despite this, it is unlikely that the Directive will require fundamental changes to existing UK legislation; its amendments to the previous Framework Decision are ultimately more supplementary and enhancing in nature than representative of fundamental change.

Alice Donzelot

Aberdeen City Council (“ACC”) has been fined £100,000 by the Information Commissioner’s Office (the “ICO”) for failing to implement an adequate home working policy, after one of its employees posted sensitive information online whilst working from home.

There has been a rash of fines for security breaches imposed on public sector data controllers. Datonomy was particularly interested in this fine because of the wider implications for the private sector. Home working, remote working and “bring your own device” security are currently in the regulatory spotlight, and in the notice announcing the fine, the ICO has reiterated the importance of organisations ensuring that personal data is fully secure when accessed remotely. It is time to revisit your BYOD and remote working policies and procedures if you haven’t already done so.

In November 2011, an ACC employee unintentionally uploaded 39 pages of highly personal and confidential information relating to her job (caring for vulnerable children), including sensitive personal data, to a website whilst working from home on her home computer. Once uploaded, the information was accessible to all internet users simply by inputting relevant search terms into a search engine.  At the time, the ACC had no home working policy in place addressing data security.

The uploaded data was later discovered by a work colleague and reported to the ACC in February 2012. The ACC removed the source documents from the website and reported the data protection breach to the ICO shortly after.

Following an investigation, the ICO found that, as the relevant data controller, the ACC had failed to take appropriate technical and organisational measures against unauthorised processing of personal data, and had thereby committed a breach of the seventh data protection principle. In particular, the ICO highlighted the lack of policies and technical procedures in place in relation to data security generally and, more specifically, home working. The ICO listed the following examples of ways in which organisations might seek to comply with the seventh principle:

-       introducing a secure home working policy;

-       providing employees with the necessary equipment to ensure secure home working;

-       providing employees with adequate training;

-       management checks on the efficacy of the home working policy; and

-       taking subsequent steps to ensure that the policy was sufficiently adhered to.

On 27 August 2013, the ICO served a monetary penalty notice against ACC for £100,000 to reflect the severity of the data protection contravention, and its view that in the circumstances the ACC ought to have known there was a risk of a contravention occurring and that any such contravention was likely to cause substantial damage and distress due to the confidential and sensitive nature of the information disclosed. The ICO reiterated the importance of organisations and employers taking adequate measures to ensure that all personal data accessed by home workers is kept safe and secure at all times.

Rebecca Davis

The ICO recently updated its Data Protection Enforcement Policy in the light of recommendations from the Leveson Report. The policy remains largely the same as the ICO’s earlier 2010 policy, but contains new sections specifically clarifying the regulation of the press and incorporating the ICO’s recent Information Rights Strategy. The policy again stresses that market forces and proportionality will play a key role in the ICO’s decisions whether to take enforcement action.

The ICO last week published its updated Data Protection Enforcement Policy and Datonomy has been comparing this new improved version to the last version of the policy published in 2010. The policy sets out how the ICO intends to implement its regulatory powers under the Data Protection Act 1998, Privacy and Electronic Communications Regulations 2003 and associated legislation. The updated policy follows recommendations in the Leveson Report that the ICO publish clearer practice guidelines to ensure compliance with information rights legislation by the press and adopt an enforcement policy with specific press-related guidelines.

The new policy is substantially the same as the 2010 policy and again outlines the powers available to the ICO, including criminal prosecution, monetary penalties, the service of enforcement notices and audit. The driving factors behind enforcement continue to be complaints, matters of general public concern, and a new factor of concerns raised by the new or intrusive nature of particular activities. The ICO has again stressed that it will strive to ensure any actions taken are proportionate, taking into account market forces and the public interest. Action will therefore be less likely where there are commercial incentives encouraging compliance with the legislation, and where market forces are themselves likely to regulate the non-compliance. Enforcement will also be less likely where non-compliance has been due to ignorance of the requirements, has not caused significant detriment, or where the data controller has taken reasonable steps to prevent the breach.

Whilst the new policy generally clarifies and updates the 2010 policy, it also implements various key changes, many of which are designed to implement the Leveson recommendations. These include:

  • A new section on the processing of personal data for special purposes, including by the press, media organisations, or for literary or artistic purposes. The ICO’s powers are significantly reduced in this area, and it may only serve enforcement notices with permission from the court, where the processing of personal data is not for reason of a special purpose alone, or is not being processed with a view to publication. The ICO may not serve enforcement notices at all where to do so would prevent publication of material that has not previously been published. To allow enforcement, the court must be satisfied that the contravention is of substantial public importance.
  • Details of enforcement powers specifically related to the press, notably the power to issue Special Information Notices requiring the supply of information necessary to determine whether personal data is being processed for a special purpose.
  • Incorporation of the ICO’s Information Rights Strategy, published in December 2011. The new policy emphasises the priority sectors identified for particular attention in the ICO’s Information Rights Strategy, including healthcare, criminal justice, local government and online and mobile services. These remain key sector areas in which compliance will be more keenly monitored. The ICO will focus primarily on the public sector in taking enforcement action, and will target cases where data processing is hidden from view or where the individuals concerned have a reduced choice over how their personal data is used.
  • Greater focus on the Privacy and Electronic Communications Regulations (“PECR”) 2003. The 2010 policy implicitly covered the PECR; however, the new policy now explicitly stresses the ICO’s role in monitoring and enforcing the PECR as well as the Data Protection Act 1998. Penalties specific to the PECR have now been added to the policy, including fixed monetary penalty notices providing for a set payment of £1,000 in relation to failure to comply with the personal data breach notification requirements under the PECR, and audit and notice powers specific to the PECR.

Google vs The Right To Be Forgotten

Blanca Escribano - July 12th, 2013

The recent AG’s Opinion in the Google case referred by the Spanish courts raises three issues of wide interest: the territorial scope of EU data protection law, liability of search engines and the Right To Be Forgotten. The ECJ will have the final say in the matter later this year. In the meantime, Datonomy flags the key issues – which are bound to influence debate on the new General Data Protection Regulation.

Datonomy’s correspondents in Spain have been following this case right from the start: back in March 2011 we reported that the Spanish Audiencia Nacional was considering requesting a preliminary ruling from the Court of Justice of the European Union (ECJ) on several matters regarding the position of search engines in relation to the European Data Protection Directive. That referral was made in March 2012, and the Advocate General in the case delivered his Opinion at the end of June. As we’re sure Datonomy readers are aware, an AG’s opinion provides guidance for the judges of the ECJ on interpreting EU legislation, but it does not bind their final decision (although in practice AG’s opinions tend to be followed).

The Advocate General’s (AG) Opinion in Case C-131/12 stated that search engine service providers are not responsible for personal data appearing on the web pages they process. In the opinion of AG Jääskinen, current data protection law (Directive 95/46/EC) does not establish a general “right to be forgotten” in the EU. Thus, individuals cannot invoke such a right against search engine providers.

The Spanish Audiencia Nacional requested a preliminary ruling from the Court of Justice of the European Union (ECJ) on several matters regarding the position of search engines in relation to the European Data Protection Directive (mentioned above). The headline conclusions in the Opinion are as follows:

1. Territorial scope of the application of national data protection legislation. In the view of AG Jääskinen, EU data protection legislation also applies to search engine providers when, for the purpose of promoting and selling advertising space on the search engine, they establish an office or subsidiary in a Member State which orientates its activity towards the inhabitants of that State.

2. Search engine providers cannot be considered “data controllers” in relation to the information on source web pages hosted on third party websites. In the opinion of the AG, the provision of an information location tool does not imply any control over the content included on third party web pages.

3. Directive 95/46/EC does not establish a general right to be forgotten. Under the Directive, individuals have rights to access, rectify, erase and object at any time to the processing of their personal data on certain legal grounds. However, there is no right that allows citizens to block the dissemination of their personal data through the tools provided by search engine providers (with the exception of content declared illegal under national legislation, e.g. intellectual property infringements). Otherwise, other fundamental rights, such as the freedom of expression and information, would be seriously compromised.

The AG’s conclusion is not binding on the Court of Justice, so we must now wait for the final ruling of the ECJ, which is expected by the end of this year. We should emphasise that the ECJ’s ruling will be crucial, since the new Data Protection Regulation, which is expected to be passed before the end of the EP legislature (i.e. before May 2014), proclaims this controversial new right. It is far from certain that the Right To Be Forgotten proposed in the first draft of the Regulation will make it into law, and it will be interesting to see how the Google ruling influences that debate.

Carsten Kociok

Draft rules coming into effect next month for communications service providers on when and how to notify data security breaches are the clearest indication yet of the obligations proposed for all data controllers under the draft General Data Protection Regulation. The new telco-specific regime includes some welcome concessions on when the deadline for notifying regulators starts, and on the circumstances in which individuals need to be notified. Datonomy analyses the new rules.

Who is the new regulation aimed at?

Last week, the European Commission presented a new draft Commission Regulation on the measures applicable to the notification of personal data breaches under the E-Privacy Directive 2002/58/EC. This Regulation (like the notification requirements under the 2002 Directive) applies only to “providers of publicly available telecommunications services” and will come into force in August 2013.

According to the E-Privacy Directive, telecom companies, internet service providers and other providers of publicly available electronic communications services (“CSPs”) are already obliged to notify competent national authorities and, in certain cases, also affected individuals if there is a personal data breach. The Directive however stays silent on the details of how and when such notifications must be made.  

The draft Commission Regulation aims to clarify this issue and ensure consistency by setting out EU wide technical implementing measures concerning the circumstances, format and procedures applicable to the notification requirements, thereby allowing companies that operate in more than one European country to take a pan-EU approach in case of a data breach.

Main Obligations of CSPs under the draft Commission Regulation

The main obligations of CSPs under the draft Commission Regulation are as follows:

  • Notification to the competent national authority within 24 hours after detection of the personal data breach
  • Notification to affected individuals without undue delay if the data breach is likely to adversely affect the individuals’ personal data or privacy

Notification to the competent national authority: 24 hour deadline

On the face of it, the notification deadline is unfeasibly strict: the draft Commission Regulation requires CSPs to notify all personal data breaches to the competent national authority “no later than 24 hours after the detection of the personal data breach, where feasible”.

However, there are a number of concessions to the 24 hour deadline to make the obligation more workable.

If not all of the necessary information is available within 24 hours, the CSP must follow a two-step approach: an initial notification with a limited set of information must be made to the competent national authority within 24 hours, and a second notification with the remaining information must follow within three days of the initial notification.
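Purely as an illustration of the two-step timetable, the deadline arithmetic can be sketched as follows (the function name is our own, and the worst-case assumption that the initial notification is made exactly at the 24-hour limit is an illustrative choice, not a requirement of the Regulation):

```python
from datetime import datetime, timedelta

def notification_deadlines(detected_at: datetime) -> dict:
    """Illustrative sketch of the two-step notification timetable.

    Assumes (worst case) that the initial notification is made at the
    24-hour limit, so the three-day follow-up window runs from there.
    """
    initial_by = detected_at + timedelta(hours=24)   # step 1: within 24h of detection
    follow_up_by = initial_by + timedelta(days=3)    # step 2: within 3 days of step 1
    return {"initial_by": initial_by, "follow_up_by": follow_up_by}
```

For example, a breach detected at noon on 25 August would, on these assumptions, require an initial notification by noon on 26 August and the remaining information by noon on 29 August.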

Content of and process for notification

The notification must include a specific list of information as set out in Annex 1 of the draft Commission Regulation, including the name of the provider, the date and time of the incident, the circumstances of the data breach and the nature and content of the personal data concerned.

Regulators must establish secure electronic means for notification.

Feasibility of notification and “sufficient awareness” of the breach

Despite its strict timeframe requirements, the notification requirement vis-à-vis national authorities does not necessarily require CSPs to act immediately upon becoming aware of a data breach.

Firstly, notification to the competent national authority must only be made “where feasible”. Secondly, the notification obligation only applies after a personal data breach has been detected. Under the draft Commission Regulation this requires that the provider “has acquired sufficient awareness that a security incident has occurred which led to personal data being compromised in order to make a meaningful notification”. The fact that a provider should have acquired sufficient awareness had it made diligent enquiries does not fall within the definition of “detection of a personal data breach”.

Accordingly, neither a mere suspicion that a personal data breach has occurred, nor the mere detection of an incident without sufficient information on its scope, will be sufficient to trigger an obligation to notify the competent national authority – which in practice gives providers welcome breathing space to investigate an incident fully.

Notification to individuals: factors to consider

The second main obligation under the draft Commission Regulation relates to individuals affected by the data breach, who must be notified by the CSP if the breach is “likely to adversely affect the individuals’ personal data or privacy”.

If this is the case, the individuals must be notified without undue delay after the personal data breach has been detected and provided with a specific set of information as laid out in Annex 2 of the Regulation, including a summary of the incident that caused the data breach, an estimated date of the incident and information about the measures taken by the provider to address the data breach.

When determining whether a personal data breach is “likely to adversely affect the personal data or privacy” of an individual, specific circumstances shall be taken into account. These include:

  • the nature and content of the personal data concerned (particularly financial information, sensitive personal data, location data, internet log files, browsing history, email data and itemised call lists);
  • the likely consequences of the data breach for the individual – particularly where the breach puts individuals at risk of ID theft, fraud, physical harm, psychological distress, humiliation or reputational damage; and
  • the circumstances of the breach – particularly where the data has been stolen or when the provider knows the data are in the possession of an unauthorised third party.


However, even if it is determined that the personal data breach is likely to adversely affect the personal data or privacy of an individual, notification to the affected individual will not be necessary if the CSP has implemented “appropriate technological protection measures” to render the data unintelligible to any person who is not authorised to access it.

The Regulation defines what constitutes “unintelligible” by reference to encryption and hashing. It does not set out specific standards, but it authorises the Commission to publish a separate indicative list of technological protection measures that are sufficient for that purpose. Accordingly, once this list has been published, CSPs will be able to avoid notification obligations vis-à-vis individuals by implementing the technological measures suggested by the Commission.
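The Commission’s indicative list has not yet been published, but to illustrate the hashing technique the Regulation refers to, here is a minimal sketch using Python’s standard library (the function names and the choice of salted PBKDF2-SHA256 are our own assumptions, not requirements of the Regulation):

```python
import hashlib
import secrets

def make_unintelligible(value: str) -> tuple:
    """Return (salt, digest): a salted, key-stretched hash of the value.

    Without the original value, the digest alone reveals nothing useful
    to an unauthorised party.
    """
    salt = secrets.token_bytes(16)  # random per-record salt
    digest = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 600_000)
    return salt, digest

def matches(value: str, salt: bytes, digest: bytes) -> bool:
    """Check a candidate value against a stored (salt, digest) pair."""
    candidate = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)
```

The design point is simply that, with a strong salted hash (or strong encryption with properly managed keys), a stolen copy of the stored data is unintelligible on its own, which is the property the Regulation’s exemption turns on.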

Wider perspective: proposed notification requirements under the General Data Protection Regulation – how do they compare?

As Datonomy readers will be aware, the future General Data Protection Regulation (“GDPR”) will also include notification requirements, applicable to all data controllers. The Commission’s original draft of the GDPR proposed that data controllers must notify a personal data breach to the supervisory authority without undue delay and, where feasible, no later than 24 hours after having become aware of it; that 24 hour deadline prompted much controversy and has been extended to 72 hours in more recent drafts.

The draft Commission Regulation specifically points out in its recitals that it “is fully consistent” with the proposed notification measures under the draft GDPR. It is likely that the important concessions in the telco obligations over “feasibility”, “meaningful notification” and “awareness” will influence the wider new obligation under the GDPR. What is unclear at this stage is whether the 24/72 hour notification windows would be aligned. Certainly, many telcos argue that there is little justification for imposing stricter requirements on the sector. 


The draft Commission Regulation will enter into force two months after its publication in the EU Official Journal, meaning that the notification obligations will be fully binding and directly applicable to providers in all EU member states from 25 August 2013 without the need for any additional implementation measures by the member states.

Claire Walker

The latest development in the complex procedural journey of the draft Regulation is the publication of a (mostly business-friendly) compromise text by the Presidency of the EU Council of Ministers.  Datonomy takes stock of the current state of play, and highlights the Council’s “direction of travel” on some key practical issues.

What’s the latest news on the regulation?

Last week the EU Council’s Justice and Home Affairs Committee published a draft compromise text of the General Data Protection Regulation. This note from the Presidency to the Council summarises the key points. The Presidency’s marked up text will inform the Council’s negotiating stance with other EU institutions – notably LIBE, the lead European Parliamentary Committee, in the weeks and months ahead. The Presidency’s aim is to “secure broad support for its approach”. The text is significant because although it is by no means the final word, it  “reflects the Presidency’s view of the state of play of negotiations at this stage”.

That the Presidency’s amendments reflect a more pragmatic, risk-based stance on the new rules is no surprise, given its statements earlier in the year which Datonomy reported on here. However, as the Presidency’s note states, “no part of the draft Regulation can at this stage be finally agreed until the whole Regulation is agreed”.

The Regulation has also been in the news over the past week because of comments by  the proposal’s Rapporteur, German MEP Jan Albrecht, voicing concerns about the extent to which the proposals are at risk of being  watered down by the business lobby – to the extent that the new rules could end up being “weaker than the old ones”.

What’s the prognosis for the reforms now, and what’s the timeline?

So, what are the next steps in the process and what is the outlook for the reforms? Should businesses still be gearing up for new rules adopted during 2014 and effective by mid 2016 as per the Commission’s original timetable?

It’s true that, despite the political momentum, the timetable has slipped. A key orientation vote by LIBE has been postponed twice, owing to the massive number of amendments proposed by lobbyists (3,000-4,000, depending on which source you read). The date of this vote is not clear, although some sources have suggested it could still happen at the start of July. Formal negotiations between the Council and Parliament are not expected to kick off until the autumn.

Opinion is divided on the outlook for the proposals. In a Commission press release Vice President Reding welcomed the text as representing “solid progress”. Mixing her seasonal  sporting metaphors, she went on: “Despite the data protection sprint we have seen under the Irish presidency, we have not yet reached the finish line. The ball is now in two courts. The ball is in the Member States’ court to continue progress in the Council, and the ball is in the European Parliament’s court, to reach its own position on the proposals. They will need to move up a gear if they want this reform to happen sooner rather than later. The clock is ticking for international competitiveness.”

There are at least two ticking clocks. The most immediate (informal) deadline is 30 June 2013 when the Irish Presidency of the Council of Ministers ends. Those in favour of the reforms have been keen to see as much progress as possible before the supportive Presidency ends its tenure. The more significant deadline is Summer 2014 when the terms of the current Commission and Parliament expire. Mid 2014 is therefore seen as a make or break point for the adoption of the Regulation. 

Key issues for businesses – what’s the Council’s stance?

The Council’s draft compromise text runs to 95 pages and covers Chapters 1 to 4 of the draft Regulation.  Key areas of concern for businesses include the following.

  • Sanctions: the Council’s draft does not cover enforcement aspects of the proposal, so does nothing to challenge the proposed fines of up to 2% of annual revenues for enterprises.
  • General approach: overall, a more pragmatic, business-friendly, risk-based regime is proposed. In particular, a new Recital 3(a) makes it explicit that data protection rights need to be proportionate and balanced against the freedom to conduct a business. The obligations on controllers and processors take account of the nature, scope, context and purposes of processing and the level of risk posed.
  • DPOs: the designation of a DPO should be optional (unless required by other EU or national law as is currently the case, for example, in Germany).
  • Extraterritorial reach: amendments to Recital 20 and Article 3 would limit the extraterritorial reach of the regime; mere accessibility of a website to EU citizens would not suffice for the Regulation to apply to a data controller based outside the EU. Factors such as the language and currency used on a website would come into play in determining whether the test of “offering of goods or services” to EU data subjects is met for the Regulation to apply. Similarly, the “monitoring of data subjects’ behaviour” trigger would be narrowed to behaviour taking place within the EU.
  • Data breach notification: the Council’s amendments introduce a seriousness threshold and a longer (72 hour) deadline for notification of security breaches to the regulator. The threshold for notifying affected individuals would also be raised, to breaches “severely” affecting the individual’s rights, with a number of other mitigating get-outs.
  • Consent: for non-sensitive personal data, the Council proposes a shift back from the “unrealistic” requirement for “explicit consent” across the board to a less stringent requirement for “unambiguous consent”. The criteria for valid consent have also been relaxed. (Recital 25 and Article 7)
  • Legitimate interests condition: the Council proposes widening the legitimate interests ground, with fraud prevention, the anonymising or pseudonymising of data and direct marketing falling within the scope of “legitimate interests” (Recital 39).
  • Scope of personal data: the scope of personal data and the dividing line with unregulated, anonymous data would be clarified (Recital 23, Article 4).
  • Regulation or Directive? The Council acknowledges that 8 Member States (including the UK) oppose a directly effective regulation and therefore the text does not rule out the possibility of the new instrument being a Directive.  

The Council’s amendments only deal with Chapters 1 to 4 of the draft Regulation; the Presidency acknowledges that further adjustments will be needed throughout the rest of the proposal.

The Presidential baton passes: will Lithuania keep up the “Irish Sprint”?

So, to recap – and to add to Madame Reding’s sporting metaphors – the ultimate finish line is still a long way off, with many hurdles still littering the track. All eyes will be on the passing of the baton from Ireland to Lithuania. What practical difference will it make? This press release by the incoming Lithuanian Presidency assures us that data protection reform is high on its priorities too. However, Datonomy notes this comment from the Minister of Justice of Lithuania, Juozas Bernatonis: “Perhaps everybody agrees that the EU data protection reform is necessary; however, the search for solutions and appropriate balance between the protection of the rights of citizens and administrative burden for businesses should not be hasty and considered insufficiently.” Will the pace of the reforms keep up the “sprint” set by the Irish Presidency – or could it slow to a legislative marathon? Datonomy will provide further commentary as the race progresses.

Carlo Piltz

Datonomy readers may have had to grapple with the tricky issue of which national data protection law to apply in the context of an online service with a cross border dimension. They are not alone – the German courts have recently considered the issue in relation to Facebook’s operations.

In April, the German Higher Administrative Court of Schleswig-Holstein ruled that German data protection law does not apply to Facebook’s collection and processing of personal data of users in Germany. Instead only Irish data protection law would be applicable.

The case

The internet giant faced an order from the Independent Data Protection Authority of Schleswig-Holstein, which sought to force Facebook to allow German users to use pseudonyms, instead of their real names, for registration and for their profile names. German data protection law obliges website providers to enable this feature to the extent that it is technically possible and reasonable.

The decision

According to the Higher Administrative Court, however, German data protection law is not applicable here, as it is Facebook’s Irish affiliate, Facebook Ireland Ltd., that is to be regarded as the relevant establishment for the processing of personal data of users in Germany, as regards the registration and management of their accounts.

According to Article 4(1)(a) of Directive 95/46/EC, the applicable data protection law is that of the Member State in which is located the establishment of the controller that carries out the relevant processing of personal data in the context of its activities.

The court furthermore stated that Facebook’s German subsidiary in Hamburg, Facebook Germany GmbH, operates exclusively in the fields of marketing and advertising acquisition, without having any actual influence on the German user accounts.

Since the requirements of Article 4(1)(a) of Directive 95/46/EC were fulfilled by Facebook Ireland Ltd. and its processing of personal data of German users, the court consequently did not examine whether German data protection law could apply pursuant to Article 4(1)(c) of the Directive, as the two provisions are mutually exclusive.

The Higher Administrative Court completed its ruling with an additional statement saying that German data protection law only insufficiently implements Article 4(1)(a) of Directive 95/46/EC. It further emphasised that if personal data is processed by an establishment that is not located in an EU/EEA member state, Article 4(1)(c) of the Directive applies and determines the applicable national law.

Finding the applicable law

It is important to highlight that determining the applicable law under Article 4(1) of Directive 95/46/EC is anything but easy. The Directive provides for two distinct situations in which the national data protection law of a member state will apply:

  • Article 4(1)(a): if the processing is carried out in the context of the activities of an establishment of the controller on the territory of a member state, the national provisions of that member state apply, regardless of where the controller itself is established – which can even be outside the EU/EEA.
  • Article 4(1)(c): if the controller is not established on EU/EEA territory, no relevant establishment in the EU/EEA is involved in the processing of personal data, and the controller makes use of equipment, automated or otherwise, situated on the territory of a member state for the purposes of processing personal data, the data protection law of that member state applies.
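The two rules above form a simple decision procedure: limb (a) is tested first, and only if it fails does limb (c) come into play. Purely as an illustration (the function and parameter names are our own, and real cases turn on factual nuances this sketch ignores, not least the contested meaning of “equipment” discussed below):

```python
from typing import Optional

def applicable_law(establishment_state: Optional[str],
                   equipment_state: Optional[str]) -> Optional[str]:
    """Illustrative sketch of Article 4(1)(a)/(c) of Directive 95/46/EC.

    establishment_state: EU/EEA member state in which an establishment of
        the controller carries out the relevant processing, or None.
    equipment_state: member state in which a controller with no relevant
        EU/EEA establishment makes use of equipment for processing, or None.
    """
    if establishment_state is not None:
        # Article 4(1)(a): law of the member state of the relevant
        # establishment, regardless of where the controller itself is based.
        return establishment_state
    if equipment_state is not None:
        # Article 4(1)(c): non-EU/EEA controller using equipment situated
        # in a member state.
        return equipment_state
    return None  # neither limb applies
```

On the facts of the Facebook case as the court saw them, the relevant establishment was in Ireland, so Article 4(1)(a) pointed to Irish law and Article 4(1)(c) was never reached.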

National data protection authorities in the EU, however, take different approaches when determining the meaning of the term “equipment”. While cookies or other software placed on a user’s PC or smartphone are widely recognised as equipment, views differ in other scenarios. The Article 29 Working Party, for example, interprets the term broadly, stating that the activities of a processor in a member state could also constitute “making use of equipment”. Other data protection authorities take the view that a non-relevant establishment of a controller can be regarded as equipment.

Conclusion and comment

In each case, the applicable national data protection regime depends on how personal data are processed and on the particularities of the relevant establishment responsible for the processing. Since different national rules impose different rights and obligations on data controllers, companies should structure their data processing activities carefully in order to avoid legal uncertainty.

The Working Party sought to bring some clarity and consistency of interpretation to this difficult area in its 2010 Opinion here. Datonomy and its colleagues at Olswang commented on the Opinion here and here. Could applicable law conundrums become a thing of the past for companies with multinational operations? That is certainly one of the drivers behind the draft General Data Protection Regulation, which seeks to harmonise substantive data protection rules across Europe, and introduce “one stop” regulation by the Member State where the organisation is headquartered. In practice, will differences over substantive rules and local enforcement approaches ever be eradicated? Datonomy readers will have to wait and see!

View All Posts