Mel Shefford

With the awareness that future cyber-attacks could have very serious consequences, the Government has proposed amendments to the Computer Misuse Act 1990. In this post we look at the current offences under the Act as well as recent amendments proposed by the Serious Crime Bill.

In August 2013, the outgoing US Secretary of Homeland Security Janet Napolitano gave a farewell speech in which she warned: “Our country will, at some point, face a major cyber event that will have a serious effect on our lives, our economy and the everyday functioning of our society.”

Her message vocalised what governments, businesses and organisations around the world are well aware of: as we become increasingly reliant on technology, and as systems become even more interconnected and complex, the risk of a serious cyber-attack increases. And whilst we currently associate cyber-attacks with access to personal data and damage to commercial interests, in the future the impact could be even more serious. For example, future attacks could result in major damage to the economy, national security, the environment and/or human welfare.

With this in mind, the British Government has been ramping up efforts over the past few years to tackle cyber-crime. For example, in 2011 it launched the National Cyber Security Strategy; in 2013 the National Cyber Crime Unit started operations; and £860 million has been committed until 2016 to boost the UK’s cyber capabilities. More recently, BIS announced the Cyber Essentials scheme to help businesses protect themselves against cyber-attacks.

Most Datonomy readers will be well aware of how important it is for organisations to be proactive about preventing data breaches, and how devastating the consequences can be if a breach does occur. But what are the consequences for hackers who are caught?

Offences under the Computer Misuse Act 1990

In the UK, the hacker might be guilty of one or more of the following offences under the Computer Misuse Act 1990:

  • Obtaining unauthorised access to computer material (for example, using another person’s ID and password to log onto a computer and access data). The maximum penalty is a 2-year prison sentence and/or an uncapped fine (Section 1).
  • Obtaining such access in order to commit or facilitate the commission of another offence, such as theft of funds or data. The maximum penalty here is a 5-year prison sentence and/or an uncapped fine (Section 2).
  • Obtaining such access in order to intentionally or recklessly impair the operation of any computer, a program or the reliability of data held on a computer; prevent or hinder access to any program or such data; or enable such impairment, prevention or hindrance. This offence carries a maximum penalty of 10 years in prison and/or an uncapped fine (Section 3).
  • Making, supplying or obtaining articles for use in any of the above offences. This carries a maximum 2-year prison sentence and/or an uncapped fine (Section 3A).

The Serious Crime Bill

In June, the Queen announced the Serious Crime Bill which (among other aims) seeks to amend the Computer Misuse Act so that serious cyber-attacks are properly punished. In particular, there is a concern that the current custodial penalties – which have been described as “woefully inadequate” by a member of the House of Lords – are not sufficient for serious cyber-attacks. The two main changes proposed by the Bill are as follows:

(1)   The creation of a new offence to cover serious cyber-attacks

This new offence would be committed where a person knowingly, and intentionally or recklessly, commits any unauthorised act in relation to a computer which causes or creates a significant risk of serious damage to human welfare, the economy, the environment or national security in any country.

An act causing damage to “human welfare” would be one causing loss of human life; human illness or injury; disruption of a supply of money, food, water, energy or fuel; disruption of a system of communication; disruption of facilities for transport; or disruption of facilities relating to health.

Commission of this offence would be punishable by up to 14 years’ imprisonment and/or a fine, except where the act causes loss of human life, human illness or injury, or serious damage to national security, in which case the penalty is life imprisonment and/or a fine.

The Home Office has acknowledged that no cyber-attack has occurred to date which would engage this new offence. However, the idea is to ensure that there are substantive penalties if a serious attack were to occur in the future. Indeed, the Home Office anticipates – and no doubt hopes – that the number of prosecutions for this offence will be minimal.

(2)   Implementation of the EU Directive on Attacks Against Information Systems (2013/40/EU)

This Directive is designed to ensure that the EU has minimum rules on cyber offences and sanctions, and to ensure co-operation between EU member states in relation to cyber-attacks. The UK is already compliant with the Directive, except for the following two aspects:

  • Tools for the commission of an offence

The existing Section 3A offence of making, supplying or obtaining articles for use in another offence under the Act requires the prosecution to prove that the defendant obtained the tool with a view to it being supplied for use to commit or assist in the commission of the other offence. The Bill seeks to amend this offence so that it covers circumstances where an individual obtained a tool with the intention to use it themselves to commit or assist in the commission of a separate offence. Given the increasing ease with which individuals can now obtain malware, the Home Office hopes that this amendment will be instrumental in helping to avoid cyber-attacks in the first place.

  • Extension of the extra-territorial jurisdiction of the Act

The Directive requires EU member states to establish their jurisdiction over cyber offences which are committed by their nationals. The Act currently requires the prosecution to demonstrate a “significant link” to the UK for the section 1 and 3 offences, essentially being that the defendant or computer was in the UK at the time of the offence. To conform to the Directive, the Bill extends the list of possible significant links to the UK to include the defendant’s nationality. This would mean that a UK national could be prosecuted for an offence where the only link to the UK is his/her nationality, provided that the offence is also an offence in the jurisdiction where it took place.

The legislative timetable and process

The Bill started in the House of Lords, and at the time of writing, the House of Lords report stage – where the Bill will be examined in more detail and the Lords will vote on proposed amendments – is scheduled to commence on 14 October. After a third reading at the House of Lords, it will then be considered by the House of Commons. The EU implementation aspects will need to be in force on or before 4 September 2015 in order to meet the EU transposition deadline, but the rest of the Bill will no doubt be subject to more scrutiny.

Handbook on European data protection law

Andreas Splittgerber - June 6th, 2014
Andreas Splittgerber

The European Union Agency for Fundamental Rights has published a Handbook on European data protection law, to which I was a contributor.

This handbook is designed to familiarise legal practitioners who are not specialised in the field of data protection with this area of law. It provides an overview of the European Union’s and the Council of Europe’s applicable legal frameworks.

The Handbook can be found here.

Olswang publishes first edition of Cyber Alert

Jai Nathwani - June 6th, 2014
Jai Nathwani

The first edition of Olswang’s Cyber Alert, a regular round-up of regulation, best practice and news from our international cyber breach and crisis management team, has been published.

Please click here for a printable PDF version. In this first edition we cover:

In the last few months we have seen news headlines ranging from the international operation against the GameOver Zeus botnet, to state-sponsored hacking, arrests over the BlackShades malware, and the release of the latest Information Security Breaches Survey, not to mention continued concern over the Heartbleed vulnerability, so there is much for businesses to consider. Click here for a summary of some of the latest headlines.

It is also worth mentioning the European Court of Justice’s Google Spain ruling in May, which is arguably the most profound internet case of this decade and which continues to send shockwaves through the tech sector. Whilst Google Spain does not relate to cybersecurity specifically, it does establish that in some circumstances a non-European company is answerable to the European courts and accountable under European data protection laws, including the requirement for appropriate technical and organisational measures to be in place to protect personal data. Read Olswang’s analysis of Google Spain here.

Claire Walker

Last week’s seismic decision in the Google Spain case continues to generate many column inches of comment and will no doubt continue to do so for some time. Datonomy’s colleagues in Olswang’s international privacy team have just published a paper considering the practical implications of this decision in the round. You can access it at this link. The paper considers:

  • Google’s practical options in terms of next steps
  • the implications for individuals’ rights
  • the implications for online publishers
  • what it means for the Right To Be Forgotten under the new EU Regulation
  • the impact on wider “data debates” over other technologies such as email scanning and Google Glass
  • what it tells us about the workings of Europe’s highest commercial court, and tactical tips for bringing referrals on points of EU law.

The paper is also available in PDF here.

Google loses historic case on right to be forgotten

Marcos García-Gasco - May 14th, 2014
Marcos García-Gasco

The Court of Justice of the European Union (“CJEU”) has made a historic ruling in the case of Google v Spain [Case C‑131/12]. The CJEU ruled that Google is responsible for the processing that it carries out of personal data which appear on web pages published by third parties.

The decision is something of a surprise given that it goes against the Advocate General’s Opinion delivered last year, and indeed is quite a bold statement by the CJEU on what it sees as the future of data protection in the internet age and the legal responsibilities of search engines.

Background

The case arose after a complaint was brought against Google by a Spanish individual, Mario Costeja González, to the Spanish Data Protection Authority (AEPD). Mr González had been the subject of an auction notice for unpaid debts that was published in a widely-read newspaper in Spain around a decade ago. Despite the time that had elapsed since this initial publication, the notice still featured prominently in a Google search for Mr González’s name. Mr González argued that this was in breach of the EU Data Protection Directive (the “DPD”), as the data was no longer current, and that in such circumstances there should essentially be a “right to be forgotten”.

The AEPD agreed, and Google subsequently appealed to the Spanish National High Court which in turn referred questions on the meaning of the DPD to the CJEU.

The decision

Despite the Opinion of Advocate General Jääskinen, who considered last year that search engine service providers should not be held responsible under the DPD for personal data appearing on the third-party web pages they process, the Grand Chamber of the CJEU, in the judgment published today, concluded as follows:

  • The activities carried out by Google, namely ‘finding information published, indexing it automatically, storing it temporarily and making it available to the public’, must be classified as ‘processing of personal data’ for DPD purposes. Furthermore, the operator of the search engine must be regarded as the ‘data controller’, regardless of the fact that it has no control over the underlying data itself.
  • Google falls within the territorial scope of the DPD as its Spanish subsidiary is intended to promote and sell advertising space directed at the citizens of that Member State, which is sufficient for Google to be considered ‘established’ in that Member State.
  • Google must remove links to third-party websites displayed in a search on an individual’s name, where those websites contain personal data relating to the individual concerned. This is subject to certain exceptions, such as public figures, and to achieving a proper balance between the data subject’s fundamental rights and the right to information.

Conclusion

This decision has far-reaching consequences for Google in Europe. The bar for a public interest in search engines processing data relating to individuals in the form of search returns seems very high, and where the data relates to an individual who is not a public figure, it is rather doubtful that such processing could ever be permitted.

There is no clear view of how Google will respond to the judgment, but there must be a significant possibility that it will have to establish an elaborate administrative system to deal with individuals who complain about its use of their data, together with sophisticated technical means to ensure that such data is blocked.

It is also unclear how the ruling will affect the ongoing negotiations of the General Data Protection Regulation. Early drafts of the Regulation included a broad “right to be forgotten”, though the latest draft has watered this down somewhat. Commissioner Viviane Reding, who is championing the draft Regulation, welcomed today’s ruling, saying it was a “strong tailwind” to the proposed data protection reforms in Europe. The reality is that the Regulation still faces a rocky road before it is passed, and some commentators are already questioning why we need new rules for a right to be forgotten at all given today’s ruling.

What is clearer is that the ruling is a sign of the CJEU’s reluctance to allow non-EU-based multinationals to evade European laws where they are clearly otherwise established here.

Jai Nathwani

The ISO is developing specific new security standards for cloud services, which are expected to be published in 2015. This is another welcome step towards ensuring compliance with the principles in the Data Protection Act and further boosting customer confidence in cloud computing technologies.

Why the new standard?

The development of the new standard is a direct response to one of the key goals announced in the 2012 European Cloud Computing Strategy (the “Strategy”). The Strategy was published by the European Commission with the aim of promoting the rapid adoption of cloud computing in all sectors of the economy in order to boost productivity. The Commission’s own Cloud Standards Roadmap notes that concerns over security are often cited as a barrier to migrating data to the cloud.

Under the current rules, liability for breach of data protection rules rests with the data controller. An auditable standard for cloud service providers who process personal data is therefore crucial to demonstrate a supplier’s resilience and hence enable a customer to meet its own regulatory obligations on data security.

The need for a recognised benchmark was further endorsed by the Information Commissioner’s guidance on cloud computing, published in 2012. The guidance states that when selecting a cloud service provider, the data controller must choose a processor providing sufficient guarantees about the technical and organisational security measures governing the processing to be carried out, and must take reasonable steps to ensure compliance with those measures. Audited compliance with a standard would be an appropriate way for a data controller to meet its data protection obligations, and could be written into the contract between a cloud services supplier and a customer.

The new ISO 27017 and 27018

In response to the need for a cloud computing security standard, the International Organisation for Standardisation (“ISO”), which is already responsible for benchmark standards for due diligence on data processors, is developing two cloud-specific standards, ISO 27017 and ISO 27018. The two standards are due for official release in 2015.

The new standards are based on the familiar standards of ISO 27001 and 27002. ISO 27001 provides a framework of security controls that can be adapted and applied to an organisation of any size to create a security standards framework. ISO 27002 provides for the practical implementation of the ISO 27001 framework in an organisation. The 27001 and 27002 standards apply generally to the operation of ICT systems. The two new standards under development apply 27002 specifically to cloud computing.

ISO 27017 deals with the application of the ISO 27002 specification to the use of cloud services and to the provision of cloud services. It will recommend cloud-specific information security controls to supplement those recommended by ISO 27002.

ISO 27018 deals with the application of 27002 to the handling of Personally Identifiable Information (“PII”) and will serve as a code of practice for PII protection in public clouds acting as PII processors.

For more detail see this link to the ISO’s website.

Claire Walker

With the Heartbleed web vulnerability in the tech headlines, the practical guidance issued recently by EU regulators on when to alert individuals to data breaches (and on preventive steps to reduce the risk of breaches occurring in the first place) is particularly timely. Datonomy highlights some of the key recommendations on when to make the difficult judgement call over notification.

Why the new guidance? Does it apply to your organisation?

The recent Opinion issued by the EU’s Article 29 Working Party (the body made up of national data protection regulators) concerns the ever-topical issue of personal data breach notification. Specifically, it sets out the regulators’ collective view on when data controllers should alert data subjects to a personal data breach which is likely to adversely affect those individuals’ personal data or privacy.

The guidance sets out good practice for “all controllers”. Strictly speaking, the obligation to report data breaches currently applies only to communications service providers; in practice, however, handling a data breach is a business-critical issue for all organisations. The illustrations in the guidance are drawn from a wide range of contexts. As well as analysing the triggers for notifying individuals that their data has been compromised, the guidance sets out practical steps to reduce the risk of breaches occurring and/or to mitigate their severity. It is therefore a must-read for all in-house counsel and their colleagues in the IT function – both in devising a data breach response plan, and in designing systems to reduce the risk of vulnerabilities in the first place.

A quick recap on breach notification obligations – current and future

As reported by Carsten on Datonomy last year, communications service providers (CSPs) are already subject to reporting obligations under EU Regulation 611/2013. CSPs’ obligations are twofold:

  • to report all data breaches to the regulator (within 24 hours); and
  • to notify the data subject “without undue delay” when the breach is “likely to adversely affect the personal data or privacy” of that individual.

Notification to the affected individual is not required if the CSP has implemented “appropriate technological protection measures” to render the data unintelligible to any person who is not authorised to access it. The Regulation defines what constitutes “unintelligible” by reference to encryption and hashing. It does not set out specific standards, but it authorises the Commission to publish a separate indicative list of technological protection measures that are sufficient for that purpose.

As Datonomy readers will be aware, these notification obligations are likely to be formally extended to all data controllers, regardless of sector, under the draft EU Data Protection Regulation.

However, notification of data breaches, both to the regulator and to affected individuals, is already an important practical consideration for all organisations from a damage-limitation point of view. While not risk-free, voluntary notification to the regulator and to individuals may help to mitigate the sanctions imposed by a regulator where a data controller has suffered a data breach as a result of falling short of its data security obligations under the UK Data Protection Act.

Wide-ranging illustrations of when a breach is “likely to adversely affect” a person’s privacy

The guidance sets out seven different and wide-ranging breach scenarios. These include: loss of laptops containing medical data and financial data; web vulnerabilities exposing life insurance and medical details; unauthorised access to an ISP’s customers’ details including payment details; disclosure of hard copy credit card slips; unauthorised access to subscribers’ account data both through unauthorised disclosure and through coding errors on a website. Whilst not exhaustive, these worked examples do provide useful analysis of the different types of harm which could trigger the obligation to notify individuals.

The guidance breaks the analysis down into three different categories of data breach, and gives illustrations of the adverse privacy effects of each type. These are:

Confidentiality breach: unauthorised disclosure of or access to personal data, which can lead to ID theft, phishing attacks, misuse of credit card details, compromise of other accounts or services which use the same log-in details, and a wide range of other detrimental effects on the individual’s family and private life and work prospects. Most of the examples focus on confidentiality breaches.

Availability breach: accidental/unlawful destruction or loss, which can lead to disruption, delay and even financial loss. (The illustrations of financial loss, and the consideration of “secondary effects” in a data availability context, will also be of interest to those negotiating liability provisions in commercial deals which involve the handling of personal data.)

Integrity breach: the alteration of personal data – which can lead to serious consequences for medical and customer data.

The distinction is significant, particularly as the need to notify individuals about a confidentiality breach can be mitigated or eliminated by the use of appropriate encryption – see below.

The guidance also stresses the need to consider likely “secondary effects” of a breach which may not appear in itself to adversely affect privacy. The example given here is of the hacking of a music service website. While the direct effect may be limited (leak of names, contact details and users’ musical preferences) it is the secondary effect – the fact that passwords have been compromised, and that users may use the same passwords across other accounts – which creates the need to notify individuals of the breach.

Prevention better than cure: practical steps to avoid the need to report breaches to individuals

In relation to each scenario, the guidance sets out examples of appropriate safeguards to reduce the risk of such breaches occurring in the first place and/or to mitigate their privacy impact. As noted above, notification to individuals is not required if a data controller can satisfy the regulator that the data has been rendered unintelligible. Common themes running through these practical recommendations include:

  • Encryption: first and foremost, the guidance emphasises the need for appropriate, state-of-the-art encryption with a sufficiently strong and secret key.
  • Secure storage of passwords: salted and using a state-of-the-art cryptographic hash function – simply hashing passwords is unlikely to meet the “unintelligible” data exemption from notification.
  • Password policies: requiring stronger password choices from users, and requiring password resets whenever passwords are compromised.
  • Vulnerability scanning: to reduce the risk of hacks and other breaches.
  • Regular back-ups: to mitigate the effects of an availability breach.
  • Systems and process design: to reduce the risk of breach and/or mitigate its effects – the examples given include dissociating medical information from individuals’ names.
  • Access controls: limiting global access, and restricting access to databases on a “need to know” and “least privilege” basis, including minimising the access given to vendors for system maintenance.
  • Staff training: e.g. on how to delete data securely.
  • Incident management policies: the guidance also highlights the importance of good incident management policies in limiting the duration and effects of data breaches.
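For readers on the IT side, the difference between “simply hashing” a password and the salted, iterated hashing the regulators expect can be illustrated in a few lines of Python. This is a minimal sketch using the standard library only; the function names and iteration count are illustrative choices, not drawn from the Opinion:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash using PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, expected)

# A plain, unsalted hash: identical passwords always yield identical
# digests, so a leaked database can be attacked with precomputed tables.
plain = hashlib.sha256(b"correct horse").hexdigest()

# Salted, iterated hashing: two users with the same password get
# different digests, and each attacker guess is deliberately expensive.
salt1, h1 = hash_password("correct horse")
salt2, h2 = hash_password("correct horse")
assert h1 != h2
assert verify_password("correct horse", salt1, h1)
assert not verify_password("wrong guess", salt1, h1)
```

The salt defeats precomputed lookup tables, and the high iteration count slows brute-force attempts; both properties are what lift salted hashing above the bare hash that, per the guidance, is unlikely to qualify as rendering data “unintelligible”.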

Be proactive and plan!

The new Opinion provides organisations with helpful guidance on making the difficult judgement call over when to notify customers and other individuals about breaches of their personal information. Perhaps even more importantly, it sets out some of the minimum preventive standards that regulators expect data controllers to adopt in order to demonstrate that they have implemented “appropriate” security measures under the current rules. The Opinion urges data controllers to “be proactive and plan appropriately”. The guidance will help organisations decide when they need to alert individuals – but it is having a crisis management team and a (rehearsed) action plan in place that will enable a calm and swift response, should a data breach arise.

Matthias Vierstraete

On 8 April 2014 the European Court of Justice ruled that the Data Retention Directive 2006/24/EC interferes in a particularly serious manner with the fundamental rights to respect for private life and to the protection of personal data. The Directive is declared invalid.

A.    The Directive

Directive 2006/24/EC strives for harmonization of the Member States’ national legislations providing for the retention of data by providers of publicly available electronic communications services or of a public communications network for the prevention, investigation, detection and prosecution of criminal offences. The initial intention was that service and network providers would be freed from legal and technical differences between national provisions.

The Directive and the national laws implementing it were often criticised, the main argument being that massive data retention endangered the right to privacy. Advocates of the rules, however, argued that they were necessary for authorities to investigate and prosecute organised crime and terrorism.

B.    The Court of Justice

By way of preliminary rulings referred to the Court of Justice of the European Union, the Irish High Court and the Austrian Constitutional Court asked the Court of Justice to examine the validity of the Directive, in particular in the light of two fundamental rights under the Charter of Fundamental Rights of the EU, namely the fundamental right to respect for private life and the fundamental right to the protection of personal data.

  • Analysis of the data to be retained

The Court of Justice examined the data which providers must retain pursuant to the Directive. This includes data necessary to trace and identify the source of a communication and its destination, to identify the date, time, duration and type of a communication, to identify the location of mobile equipment, the name and address of the user, the number called, IP addresses, etc. The Court observes that the retention of this data makes it possible to know the identity of the participants in communications, to identify the time of the communication, the place from where the communication took place and the frequency of communications with certain persons (§26).

This data, according to the Court, allows very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained, such as the habits of everyday life, places of residence, movements, social relationships and the social environments frequented.

  • Analysis of the interference with fundamental rights

The Court comes to the conclusion that both requiring the retention of the data and allowing competent national authorities to access those data constitutes in itself interference with the fundamental right to respect for private life and with the fundamental right to the protection of personal data (respectively articles 7 and 8 of the Charter of Fundamental Rights of the European Union) (§ 32 – 36).

The Court agrees with the Advocate General that the interference is “particularly serious”. The Court in this respect holds that “the fact that data are retained and subsequently used without the subscriber or registered user being informed is likely to generate in the minds of the persons concerned the feeling that their private lives are the subject of constant surveillance” (§37).

This interference is, according to the Court, not only serious but also unjustified. Although the retention of data as required by the Directive does not as such adversely affect the essence of the rights to respect for private life and to protection of personal data (the content of communications as such may not be reviewed), and although the Directive genuinely satisfies an objective of general interest (public security), the Court is of the opinion that the Directive has exceeded the limits imposed by the proportionality principle (§69):

  • The Directive covers all persons and all means of electronic communications as well as all traffic data without any differentiation, limitation or exception being made in the light of the objective of fighting against serious crime (§57);
  • The Directive fails to lay down any objective criterion by which to determine the limits of the access of the competent national authorities to data and their subsequent use (§60);
  • The data retention period is set at between a minimum of 6 months and a maximum of 24 months without any distinction being made between categories of data and not stating that the determination of the period must be based on objective criteria (§63 – 64);
  • The Directive does not provide for sufficient safeguards to ensure effective protection of data against the risk of abuse and against unlawful access and use (§66);
  • The Directive does not require data to be retained within the EU and thus does not meet the Charter’s requirement that compliance control by an independent authority is ensured.

The Court of Justice thus declares the Directive invalid.

C.    What’s next?

Following the Court’s invalidation of the Directive, one could wonder how this will affect European legislation and national legislation.

  • Europe

The invalidity ruled by the Court applies from the day on which the Directive entered into force. It is as if the Directive never existed.

The European Commission stated in a first reaction that it “will now carefully assess the verdict and its impacts”. It is not clear whether the Commission will draft new legislation to replace the invalidated Directive. Given that the current Commission’s term runs only until 31 October 2014, new legislation is not expected to be put forward soon.

  • Member States

Member States having transposed the Directive into national laws may now consider the future of these laws.

Where a national law is a literal transposition of the now-invalidated Directive, it meets the same fate. One may consider that in such a situation Member States should redraft their laws in order to bring them into line with the relevant Directives (95/46/EC and 2002/58/EC) and the Charter of Fundamental Rights of the European Union.

If national law deviates from the Directive, Member States should assess whether the deviations are in line with the relevant Directives (95/46/EC and 2002/58/EC) and the Charter of Fundamental Rights of the European Union.

The Court of Justice’s ruling may also have an impact on national cases concerning the legality of national laws implementing the Directive, as there are several cases pending before the constitutional courts.

  • Austria and Ireland are at the origin of the European Court of Justice’s ruling, their constitutional courts having requested a preliminary ruling on the validity of Directive 2006/24/EC;
  • Belgium: On 24 February 2014, the Belgian “Liga voor Mensenrechten” and “Ligue des droits de l’Homme” jointly filed a complaint before the constitutional court seeking annulment of the Belgian law implementing the Directive. The complaint was funded through crowdfunding. Following the Court of Justice’s ruling, some political parties have already asked the government to take the necessary steps and amend the current legislation;
  • Bulgaria: In 2008, the Bulgarian Constitutional Court found part of the national law incompatible with the right to privacy;
  • France: In 2006, the French Constitutional Court ruled that French law provisions similar to those provided for in the Directive are not contrary to the constitution. However, in December 2013, the French data protection authority (CNIL) reacted vigorously against a new law giving certain ministries, including the French secret services, access to data retained by telecommunications operators, internet and hosting service providers without prior approval from a judge. On that occasion, the CNIL called for a national debate on surveillance issues, which could be influenced by the ECJ’s recent ruling.
  • Germany: The German Constitutional Court already declared the German implementing act unconstitutional in 2010;
  • Romania: In 2009, the Romanian Constitutional Court declared the national law on data retention unconstitutional as breaching, among others, the right to privacy and the secrecy of correspondence;
  • Slovakia: In 2012, a complaint was filed before the constitutional court in order to assess the conformity with the constitution;
  • Spain: The Directive was implemented into national laws in 2007. The Spanish data protection authority (AEPD) had voiced its reservations about the Directive and requested the Government to accompany the implementation of these rules with measures curtailing the impact on data subjects’ privacy;
  • Sweden: In May 2013, Sweden was ordered to pay the European Commission EUR 3 million for failing to implement the Directive on time;
  • United Kingdom: As yet there has been no official comment from the UK government or the Information Commissioner on the ruling of the Court of Justice. Controversial 2012 proposals for a Communications Data Bill to overhaul and significantly extend the UK’s data retention obligations were already in the political long grass – and the Court of Justice’s ruling means they are likely to stay there as we understand it.

Obviously the current situation creates uncertainties and we understand the issue will be very much discussed in Brussels (and elsewhere) in the coming weeks.

Sylvie Rousseau and Matthias Vierstraete, Olswang Brussels

Jai Nathwani

With cyber-security tipped as one of the top tech trends for 2014, a lot has already been written about the controversial data security breach proposals in Europe. But what is happening elsewhere in the world? We hear from one of Datonomy’s Asia correspondents, Olswang Partner Elle Todd.

Datonomy was pleased to lead a discussion and mock cyber security breach scenario alongside the local chapter of the IAPP in Singapore last week where such issues are gaining a lot of attention and interest.

The engaging session, attended by a variety of practitioners, followed the unfortunate exploits of a fictitious international e-commerce company faced with an anonymous threat from an individual claiming that they had managed to obtain the customer database and would release it to the blogosphere. As the morning unfolded, more facts and problems emerged for the company and the audience discussed how best to respond to the potential disaster from PR, cyber risk management and legal perspectives.

Security breaches on the rise

Singapore has not been immune to cyber and other security breaches in real life in recent years. The most high profile recent incident occurred in November last year, when the Singapore Prime Minister’s website was hacked by hacktivist group Anonymous. Another concerned the theft of wealthy client data from Standard Chartered Bank in December 2013.

From this July, there will be some new legal changes that companies will need to navigate in thinking through their response if they fall victim to such an incident.

New rules and regulations

As Datonomy has reported, Singapore will get its first comprehensive data protection legislation with effect from 2 July this year. Containing many provisions and principles that will be familiar to those versed in European law, the new Act requires organisations to “protect personal data in its possession or under its control by making reasonable security arrangements to prevent unauthorized access, collection, use, disclosure, copying, modification, disposal or similar risks”. Beyond that, however, it imposes no notification obligations in the event of a cyber or other security breach. A company will therefore need to ensure that its systems are adequate, so that a breach does not give rise to regulatory fines or other censure, but it has no legal obligation to notify the regulator or customers.

The financial services industry in Singapore does not get off so lightly, however: the Monetary Authority of Singapore (the financial services regulator) has published new rules which come into force just the day before the new data protection legislation. The MAS rules will require banks to “make all reasonable effort to maintain high availability for critical systems” and to notify the Authority “as soon as possible but not later than 1 hour, upon the discovery of a relevant incident” – a relevant incident being a system malfunction or IT security incident which has a severe and widespread impact on the bank’s operations or materially impacts the bank’s service to its customers.

A five year masterplan

Developments in Singapore are of international interest, however, not just because of the new legislation but because of the significant resources the country is investing to support organisations in fighting cyber threats.

Last year the Singapore Government announced a ‘Five Year National Cyber-Security Masterplan’ which will focus on three key areas:

-       Enhancing security of critical infocomm infrastructure;

-       Increasing efforts to promote infocomm security adoption among end-users and businesses; and

-       Growing Singapore’s pool of security experts in collaboration with educational institutes and industry partners.

Another key related and exciting development is the scheduled opening of Interpol’s Global Complex for Innovation in Singapore later this year. This impressive new building will act as a cutting-edge research and development facility as well as providing innovative training, operational support and partnerships. This is Interpol’s first HQ outside of Lyon, France.

Datonomy will be watching the emergence of the results of such innovation with interest since they could well prove useful in Europe as the debates continue there.

Posted on behalf of Elle Todd, partner, Olswang Asia

A budget for Big Data

Claire Walker - March 20th, 2014
Claire Walker

Even Datonomy is jumping on the Budget news bandwagon this morning, having spotted the Chancellor’s announcement that the Government is to invest £42m in Big Data and algorithm research. The new institute will be named in honour of wartime code breaker Alan Turing. Meanwhile, Datonomy’s tax colleagues have been analysing the wider business impact of the Chancellor’s other announcements – you can find their Budget Blog at this link http://blogs.olswang.com/budgetblog/ and see this link for the BBC’s coverage of the new Alan Turing Institute http://www.bbc.co.uk/news/technology-26651179

UK ICO’s new approach to public complaints

Katharine Alexander - January 29th, 2014
Katharine Alexander

The ICO recently announced “subtle but significant” changes in its approach to data protection complaints about businesses made by the public. Consumer facing brands will want to stay on the right side of the law anyway – what will the changes mean in practice, and when does a business run the risk of enforcement action?

The ICO has launched a consultation entitled ‘our new approach to data protection concerns’, running from 18 December 2013 to 31 January 2014, to gather the views of the organisations it regulates. The proposed changes are planned to take effect from 1 April 2014.

Why is the ICO’s approach changing?

The ICO received 40,000 written enquiries or complaints and 214,000 phone calls from members of the public in 2012/13. Data protection legislation had actually been breached in only 35% of these instances. The ICO is therefore encouraging individuals to address their concerns to the organisation complained about, and is streamlining its approach to data protection concerns in a bid to focus on serious contraventions and on repeat offenders.

When will the ICO take action in response to a complaint?

Businesses still need to take care. Once an individual has raised a complaint with the organisation, if they are not satisfied with the outcome, they may still send their complaint, and the organisation’s response, to the ICO. The ICO will keep a record of complaints in order to identify emerging patterns and take action against them. If the organisation complained of is a repeat offender, or the breach is serious, enforcement action will still be taken.

What does this mean for responsible brands?

This is therefore good news for compliant organisations with existing systems in place to respond to queries and resolve complaints, as not much will have to change. In addition, any positive initiative or strategy used by an organisation may be published on the ICO website.

However, this does not mean that businesses can be blasé. Subject access requests and data protection complaints are often a symptom of wider customer dissatisfaction. It must not be forgotten that in today’s world, enforcement comes not only in the form of the ICO, but in the reputational damage caused to brands by individuals complaining through social media. In some instances, this could be more far reaching than enforcement action by the ICO. Reputational damage will be further cemented by the ICO publishing the number of breaches by an organisation on its website.

Organisations with an opinion on this matter have until 31 January to respond to the ICO’s consultation. Following the consultation, the ICO’s new approach will take effect on 1 April 2014.

Recap on UK enforcement powers and enforcement policy

Even though the changes to complaint handling may not be big news for the majority of companies, it may be helpful to recap on the circumstances when the risk of enforcement could arise. The ICO has no powers to award compensation to the public, but can take a range of enforcement actions against organisations.

Details of ICO enforcement can be found here, and Datonomy previously highlighted the changes to the ICO’s policy last year. According to the ICO, it has served over 5,000 decision notices since 2005 and published 27 undertakings in 2013. It may also impose fines of up to £500,000 in the most serious cases, to act as ‘both a sanction and a deterrent’ (according to its enforcement policy).

In order to impose a monetary penalty, the ICO must be satisfied that:

  1. there has been a serious contravention of section 4(4) of the Act (i.e. of one of the data protection principles) by the organisation;
  2. the contravention was of a kind likely to cause substantial damage or substantial distress; and
  3. the contravention was either deliberate, or the organisation knew or ought to have known that there was a risk that a contravention of that kind would occur and would be likely to cause substantial damage or substantial distress, but failed to take reasonable steps to prevent it.
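The conjunctive and disjunctive structure of this statutory test is easy to misread, so here is an illustrative sketch of it as boolean logic. The function and parameter names are our own invention; this is a reading aid for the three limbs of the test, not legal advice or an official formulation.

```python
# Hypothetical sketch of the ICO monetary penalty test (DPA 1998, s55A)
# as boolean logic. All names are our own; this is illustrative only.

def penalty_available(serious_contravention: bool,
                      likely_substantial_damage_or_distress: bool,
                      deliberate: bool,
                      knew_or_ought_to_have_known_risk: bool,
                      failed_reasonable_steps: bool) -> bool:
    # Limb 3: culpability is satisfied either by deliberateness, or by
    # known (or constructive) risk combined with a failure to take
    # reasonable preventative steps.
    culpable = deliberate or (knew_or_ought_to_have_known_risk
                              and failed_reasonable_steps)
    # All three limbs must be satisfied before a penalty can be imposed.
    return (serious_contravention
            and likely_substantial_damage_or_distress
            and culpable)
```

For example, a serious, distress-causing but merely careless breach where the organisation did take reasonable steps would fail the third limb, so no monetary penalty would be available on this reading.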

The ICO enforcement policy can be found here. It explains that although action is sometimes taken as a result of a complaint by an individual, the initial drivers also include issues of general public concern, concerns of a novel or intrusive nature, and concerns which become apparent through the ICO’s other activities. It also sets out criteria for when the ICO is likely to take action. These factors include:

  • the scale of the detriment the breach is likely to cause to individuals,
  • the number of individuals adversely affected,
  • whether enforcement action could stop an ongoing adverse impact, and
  • whether the attitude and conduct of the organisation in question suggest a deliberate, wilful or cavalier approach to data protection issues,

amongst others.

As indicated by the enforcement section of its website, the ICO is transparent about its enforcement action, in line with the first Guiding Principle of its enforcement policy. The threat is therefore not only a potential pecuniary penalty but, in some cases more crucially, reputational damage to a company. In addition to the enforcement notices and undertakings detailed above, the ICO will ‘name and shame’ organisations with poor data protection practices by publishing the number of complaints made about them.

The wider context

The Consultation has been broached at a time when data protection is a hot topic. 2013 saw cyber security continue to be a global concern, with many hacks (for example at Target and Adobe) obtaining personal data including individuals’ credit and debit card details. Both cyber-security and data protection legislation are being discussed at EU level. In recent weeks we have seen regulators in France, Spain and the Netherlands impose the maximum fines available on Google over its 2012 privacy policy changes – but these are a drop in the ocean for a company like Google. Datonomy waits with interest to see what formal action the UK’s ICO takes against Google.

Larger fines are on the horizon. New EU privacy laws, which have now been delayed, could enable data protection authorities to fine companies the greater of €100 million or two, or even five, per cent of global revenue.

With its new approach to complaints from data subjects, far from loosening its grip on data protection enforcement, the ICO is simply targeting its action at serious breaches and bigger players. The moral of the story? Ensure your organisation has good data protection and information rights practices, and keep your customers happy.

Ross McKean

The EU’s ambitious plans for a radical overhaul of data protection laws have suffered a serious setback. EU justice commissioner Viviane Reding finally conceded, in a speech at a meeting of EU justice and home affairs ministers in Athens last week, that the draft General Data Protection Regulation will not be agreed during the current term of the EU Parliament.

The most recent delay has been caused by the EU Council of Ministers failing to reach agreement before starting negotiations with the EU Parliament and the Commission, with several Member States demanding significant changes to the proposals.

New timetables have been proposed, and optimistic statements made that there will still be a new data law by the end of this year. However, the reality is that any prediction about the substance of the Regulation, or the process to agree it, after this May’s parliamentary elections is guesswork at best. Fundamental differences remain among Member States, as evidenced by the failure of the Council to reach consensus to start formal negotiations. In addition, new MEPs will have their own political agendas and priorities, and may be wary of becoming embroiled in the long-running saga of the draft Regulation.

The proposals have proved to be some of the most controversial to come out of the Brussels legislative machine in years, with over 4,500 amendments proposed to the original text. The negotiations to date have consumed vast resources and exhausted goodwill. Even in Member States which have historically backed the proposals, support is waning. Poland’s Inspector General for the Protection of Personal Data, Wojciech Wiewiorowski, said last week that support in Poland is dropping because the Regulation, announced by the Commission two years ago, is taking too long.

Germany, regarded by many as setting the high water mark for data protection, although supportive of the proposals, wants a broad carve-out for the public sector to ensure that German authorities can continue to collect and process personal data without having to comply with the uniform standards. Germany’s stance has been criticised by German MEP Jan Albrecht, rapporteur for the draft Regulation, who said: “obviously the German government is against European-wide common rules. This behaviour is irresponsible against the EU citizens.”

Irresponsible or not, Germany’s position demonstrates the challenge that the EU Council and a newly constituted Parliament will face in reaching agreement on the text of the Regulation. If data protection friendly Member States such as Germany cannot be persuaded to support the proposals, what prospects are there of building consensus among all 28 Member States? The UK, Denmark, Hungary and Slovenia are calling for the proposals to be watered down and recast as a Directive, which would afford individual Member States more discretion in interpreting the new requirements.

Albrecht must be concerned that his magnum opus may never become law.

Google loses crucial jurisdiction battle in the UK

Ross McKean - January 17th, 2014
Ross McKean

As Datonomy reported (see below), Google has been fined by the French and Spanish data protection authorities following almost two years of toing and froing with European data protection regulators over its consolidated privacy policy. The tiny fines are unlikely to change Google’s privacy practices.

However, Google now has a larger headache to deal with following the judgment of Mr Justice Tugendhat in the English High Court, handed down yesterday (Judith Vidal-Hall and Others v Google Inc in the Queen’s Bench Division, Case number: HQ13X03128). The claimants, represented by Olswang, the law firm behind Datonomy, are a group of users of Apple’s Safari internet browser.

The Safari users group claim that Google Inc illegally tracked and gathered information about their browsing activities by implementing a workaround to the default Safari browser block on third party cookies. 

Under the Civil Procedure Rules (the procedural rules for parties to civil litigation in the English courts), the claimants needed the permission of the High Court to serve proceedings out of the jurisdiction on Google Inc, a Delaware-incorporated company. Google Inc applied in August 2013 for an order declaring that the English courts had no jurisdiction to try the claims and setting aside service of the claim form.

The High Court disagreed, finding in favour of the UK claimants: it held that there was a serious issue to be tried in relation to the claims for misuse of private information and various breaches of the Data Protection Act 1998, and that the claimants had established that the UK is the appropriate jurisdiction in which to try the claims.

Although no conclusive view was given on the merits, Google Inc now faces the prospect of defending a claim in the English High Court, which, in contrast to the data protection regulators, enjoys considerably more firepower to impose remedies and ensure that judgments are complied with. Any award of damages, even if relatively small, could result in a significant liability for Google when multiplied by the millions of Safari users in the UK.

Embattled Google continues to defend its privacy policy

Blanca Escribano - January 17th, 2014

 

Almost two years have passed since Google introduced controversial changes to its privacy policy in March 2012, merging more than 60 separate policies for Google’s numerous services into a single privacy policy. Since then European data protection regulators, initially through the Article 29 Working Party and more recently through a task force of data protection authorities from six Member States (the UK, France, Germany, Italy, Spain and the Netherlands), have demanded that Google take steps to bring its new policy into line with European data protection laws. There has been much rattling of regulatory sabres and for the most part nonchalant shrugs from the Mountain View-based tech giant, which has responded to the coordinated regulatory offensive by saying that its new policy “respects European law and allows us to create simpler, more effective services.”

The Spanish and French data protection watchdogs have now taken matters further by imposing formal sanctions on Google Inc, fining the company €900,000 and €150,000 respectively for breaching Spanish and French data protection laws.

For an organisation that reported revenues of 50 billion dollars in 2012, these fines are minuscule and highly unlikely to have any effect on Google’s privacy practices. The CNIL also required Google to publish a notice of the CNIL’s decision on its French search landing page www.google.fr for 48 hours. This may have been a rather more effective deterrent against continued non-compliance with French data protection laws, given the sanctity of Google’s search landing page and its prominence to French users. However, Google announced on Monday this week that it has appealed the CNIL’s decision, which means that the requirement to publish the notice is likely to be suspended pending the outcome of that appeal.

The UK’s data protection authority, the ICO, after saying in July last year that “failure to take the necessary action to improve the policies’ compliance with the [UK] Data Protection Act by 20 September will leave [Google] open to the possibility of formal enforcement action”, has yet to make any further announcement. Requiring Google to post a notice on the UK search landing page would be a first for the ICO, and would almost certainly be appealed by Google. However, fines alone are unlikely to change Google’s behaviour, so regulators will need to think more creatively about effective remedies.

Datonomy will be keeping a close eye on the next moves in this game of regulatory cat and mouse.

New UK guidance on making mobile apps privacy compliant

Katharine Alexander - January 6th, 2014
Katharine Alexander

With privacy and security concerns about apps regularly in the headlines, developers and brands commissioning mobile apps should factor in the important new guidance issued recently by the ICO. The guidance and practical illustrations are also relevant to other online platforms e.g. smart TVs and games consoles.

The Information Commissioner’s Office (ICO) has recently released guidelines for app developers to help them ensure apps comply with data protection laws. The guidance was released in the run-up to Christmas – when app sales soar (the ICO cites the statistic of 328 million apps downloaded in the UK on Christmas Day 2012). The guidance is timely, with privacy a worldwide concern: in the US, the SpongeBob Squarepants app and Jay-Z’s Magna Carta app are two recent examples which have attracted adverse attention over alleged lack of privacy compliance, while in the UK security vulnerabilities in the SnapChat app have been in the news. With app-based games aimed at children currently under the scrutiny of the UK consumer enforcement authorities (see this article), the regulation of apps looks set to continue to be a hot topic in 2014.

Why the new guidance? Who needs to comply?

Launching the new guidance, the ICO’s office cites a survey (of 2,275 people) by YouGov, which has shown that 62% of app users are concerned about data security, and 49% have decided not to download an app due to data privacy worries. As described by Simon Rice (Group Manager at the ICO for the technology team) in his blog, this statistic demonstrates that compliance with the guidance makes commercial sense for app developers, as well as reducing legal risk.

The ICO’s guidance emphasises the need for privacy to be factored in at the design stage, rather than as an afterthought addressed in a privacy policy. The Data Protection Act 1998 is technology neutral, and applies just as much to online activities such as apps as to offline data collection. What is valuable about the ICO’s very practical new guidance – and the numerous worked illustrations which run through it – is that it applies the principles of the DPA very specifically to the mobile app context. The document seeks to address some of the particular challenges of privacy compliance for apps – including space constraints, and the range of personal data to which apps typically have access, such as a user’s location, the microphone, emails, SMS and contacts, which makes privacy such a concern.

Datonomy readers may recognise that the ICO’s guidance is a more user-friendly version of the 30-page opinion published in February 2013 by the EU’s Article 29 Working Party (the body made up of national data protection regulators). That Opinion looked at compliance issues not only for developers, but also for OS and device manufacturers, app stores and other parties in the app ecosystem.

As Datonomy readers will be aware, the ICO guidance does not have the force of law, but is in effect a benchmark for compliance with existing rules in a particular context. With such targeted guidance available, it will be more difficult for organisations deploying apps to plead ignorance of their legal obligations.

All organisations and individuals involved in the development and use of apps should review current and new apps for privacy compliance in the light of the new guidance. Aspects of the guidance – particularly in relation to providing information and gaining consent – will also resonate with other online services, such as games consoles and smart TVs.

As with all data protection issues, a party’s exact compliance obligations will depend on understanding exactly what personal data is involved, who is the data controller and what the risks to individuals’ privacy are. Developers and commissioners therefore need to consider these issues at the design stage in order to minimise and manage their legal risk – and preserve the commercial value of customer data collected.

Basic DP concepts applied to the app ecosystem: personal data; data controllers

The most fundamental question is what – if any – personal data the app is processing. Personal data is anything which can identify, or together with another piece of information to hand can identify, a living individual. In the mobile environment, this can extend from something obvious such as a name or address, to something more specific such as a device IMEI number. The guidance gives useful illustrations and suggestions for data minimisation in order to reduce risk.

The next key issue is to identify the data controller (or data controllers) in the particular app scenario, since legal liability rests with them. This is the person or organisation who decides how personal data is dealt with. The guidance provides useful analyses of who may be the data controller in various scenarios, including social media apps, reviews and ad funded games. This will always be fact dependent. The guidance includes a reminder that the data controller(s) will be subject to the full range of normal DPA obligations e.g. registration with the ICO; transparency information; and the requirement to respond to data subject access requests. Where personal data is shared with another entity which processes it on the controller’s behalf, the normal requirements for minimum contractual protections apply. They must also be careful to demonstrate adequate protection when transferring data outside of the EEA.

What data to collect

The guidance on collecting data via apps includes:

  • only collect the minimum data necessary for the app to perform its function;
  • never store data for longer than necessary;
  • pay extra attention if the app is aimed at children not old enough to understand the significance of providing personal data;
  • allow users to permanently delete their personal data and account; and
  • ensure you have informed consent to collect usage or bug report data; otherwise use anonymised data. Even when anonymising, collecting the minimum data necessary is still the first step – anonymise from there.

The ICO recommends data controllers use a privacy impact assessment to ensure compliance.
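To make the minimisation and anonymisation points above concrete, here is a small illustrative sketch of filtering a bug report on the device before it is sent. The field names, the `minimise_bug_report` function and the keyed-hash pseudonymisation scheme are all our own assumptions for illustration, not anything prescribed by the ICO guidance.

```python
import hashlib
import secrets

# Illustrative sketch only: data minimisation applied to a bug report.
# All field names and the pseudonymisation scheme are hypothetical.

PEPPER = secrets.token_bytes(16)  # secret key, kept apart from the data

def minimise_bug_report(raw: dict) -> dict:
    # Keep only the fields needed to diagnose the crash; everything else
    # (contacts, messages, etc.) is simply never collected.
    needed = {k: raw[k] for k in ("app_version", "os_version", "stack_trace")
              if k in raw}
    # Replace the direct device identifier with a keyed hash, so reports
    # from the same device can be grouped without storing the raw IMEI.
    if "imei" in raw:
        needed["device_ref"] = hashlib.sha256(
            PEPPER + raw["imei"].encode()).hexdigest()[:16]
    return needed

report = minimise_bug_report({
    "imei": "356938035643809", "contacts": ["alice", "bob"],
    "app_version": "2.1", "os_version": "7.0", "stack_trace": "...",
})
# The contacts list and raw IMEI are dropped before anything leaves the device.
```

Whether a keyed hash of a device identifier is sufficiently anonymised is itself a legal judgment; the point of the sketch is simply that minimising first and transforming identifiers second reduces both risk and compliance burden.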

Informing users and gaining consent – good practice for privacy notices

Complying with the DPA principles on information and consent poses particular challenges in the mobile environment, where space constraints and consumers’ expectations of convenience and user friendliness make detailed privacy notices impracticable. To meet these requirements despite the constraints, app developers should:

  • use plain English;
  • use language appropriate for the audience (e.g. children);
  • clearly explain the purpose of collecting the personal data;
  • make privacy information available as soon as possible before the app begins to process personal data; and
  • use a layered approach – detailing the key points in summary, with access to more detail if the user wants it. A single large privacy policy document can be difficult for a user to read on a small mobile screen.

The guidance provides a number of very useful, short, privacy notices which illustrate how information and consent requirements can be complied with, despite the challenges.

The guidance also gives more specific advice, such as:

  • use colour and symbols;
  • highlight unexpected or onerous actions and highlight differences between platforms;
  • make use of just-in-time notifications, which are provided immediately before the data is processed, for example when requesting to use someone’s location for GPS, or when using new features of an app for the first time; and
  • ensure consent is obtained if the app passes data on to any other organisations, make clear if the app is supported by advertising, and give information on any analytics used.

It is always important to be as clear and transparent as possible. However, there is no need to state the obvious to a reasonably-informed user. The ICO uses the example of an app used to deliver orders – the need for a delivery address is obvious. It also states that if the information is given in the app store, there is no need to repeat it at a later stage (unless onerous or unexpected, as above).

Users should also be given an element of control over the use of their data – with granular options, and an easy way to review and change personal data settings from one obvious settings location.

Good data security practice for apps

The two pages devoted specifically to security include the following recommendations, highlighting that developers should adhere to up-to-date good security practice both in the design of the app and of the central servers the app communicates with:

  • ensure passwords are appropriately salted and hashed on any central server;
  • use encrypted connections for usernames, passwords and other sensitive information;
  • use tried and tested cryptographic methods;
  • avoid writing new code where well-established implementations can be used instead; and
  • take particular care where the app accesses data from other apps or locations.

The guidance also highlights examples of vulnerabilities specific to mobile apps, for example inter-app injection flaws, and failure to check, or misconfiguration of, SSL/TLS.
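By way of illustration only (this sketch is not taken from the ICO guidance), the first recommendation above – salting and hashing passwords using tried and tested cryptographic methods rather than new code – might look like the following in Python, using PBKDF2 from the standard library:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # work factor; tune to available server hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a (salt, digest) pair using a fresh random salt per password."""
    salt = os.urandom(16)  # a unique salt per user defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong password", salt, digest)
```

The point of the sketch is the design choice the guidance urges: the salting, the slow key-derivation function and the constant-time comparison are all well-established primitives, not bespoke code.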

Other important legal compliance issues

In addition to compliance with data protection principles, the guidance provides a helpful checklist of the consumer protection rules with which app developers must also comply.

Datonomy comment

As the ICO’s press release reminds us, “compliance is not a bolt-on included in the final phase of a product’s development, but is an aspect of an app’s design that should be considered at the start of the process”.

Datonomy agrees – and the ICO’s targeted guidance and illustrations are certainly a step in the right direction. Datonomy readers may also be interested in this recent article by our colleague Matt Pollins, which looks at the wider legal landscape for the growth of apps.

Yesterday (12/12/2013), a serious blow was dealt to one of the fundamental building blocks establishing the legal framework for retention of data for law enforcement across Europe.  Advocate General Pedro Cruz Villalón (AG) at the Court of Justice of the European Union (ECJ) delivered an opinion stating that the Data Retention Directive (DRD) is, as a whole, incompatible with the individual’s right to privacy in the Charter of Fundamental Rights of the European Union. The opinion has potentially profound implications for law enforcement agencies and for service providers subject to the retention requirements across Europe. The opinion is here.

Background

The DRD requires Member States to implement laws requiring telephone or electronic communications service providers to collect and retain traffic data, location data and the related data necessary to identify the subscriber or user of the services “in order to ensure that the data is available for the purposes of the investigation, detection and prosecution of serious crime” (Article 1(1) of the DRD). Providers are not required to collect and retain content data, i.e. the content of the communications themselves. Member States are required to ensure that the data is held for periods of not less than six months and not more than two years from the date of the communication. Only competent national authorities are permitted access to the data. For more information about data retention requirements, go here.

Key takeaway for service providers

Service providers should watch this space and keep their own compliance programmes under review. For service providers wrestling with retention requirements, the opinion means that doubt will remain about the correct way to build a compliance programme. If the ECJ agrees with the AG, new legislation would need to be developed, though the practical impact on service providers with respect to the types of data to be collected and any reduction in retention periods is unclear.

What did the AG say?

-       The AG considers that the purposes of the DRD are legitimate.

-       However, the AG is concerned that the retained data will include a lot of information about an individual’s private life and identity. There is a risk that the data may be used for unlawful purposes. The risk may be greater because the data is not retained or controlled by the competent national authorities but by the providers and the providers do not have to retain the data within the relevant Member States.

-       The AG said that the DRD does not provide minimum guarantees for access to the data and its use by the competent national authorities. (i) A more precise definition of “serious crime” would help to define when competent authorities are able to access the data. (ii) Access should be limited to judicial authorities or independent authorities. Any other access requests should be subject to review by judicial authorities or independent authorities so that access is limited to only the data that is strictly necessary. (iii) Member States should be allowed to prevent access to data in certain circumstances e.g. to protect individuals’ medical confidentiality. (iv) Authorities should be required to delete the data once used for the relevant purposes. (v) Authorities should be required to notify individuals of the access, at least after the event when there is no risk that the purpose for accessing the data would be compromised.

-       Finally, the AG said that he could not find sufficient justification for not limiting the data retention period to one year or less.

What does this all mean?

-       For now the existing requirements remain but may be subject to review. The AG’s opinion is not binding on the ECJ or indeed on any Member State.  Nevertheless, the opinion carries weight and in many cases the ECJ has gone on to follow opinions delivered by the AG.  The Judges of the ECJ are still deliberating and judgment will be given at a later date.

-       The AG also proposed that the effects of a declaration that the DRD is invalid should be postponed: even if the ECJ agrees with the AG, it could allow the EU legislature a reasonable period to adopt remedying measures, so that the DRD is no longer incompatible with the Charter of Fundamental Rights.

Anne Brandenburg

After lengthy discussions, the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) agreed this Monday (22 October 2013) on a compromise text of the draft General Data Protection Regulation (“GDPR”). The proposal still has a mountain to climb as opinions between the different EU institutions remain deeply divided. However, Monday’s vote is significant as it gives the European Parliament (“EP”) a mandate to start the next phase of negotiations with Member States.

The GDPR was published by the European Commission 21 months ago in January 2012 and has proved to be one of the most controversial proposals ever to come out of Brussels, with lobbyists proposing over 4000 amendments to the Commission’s text.

Background

The compromise text was adopted by the LIBE Committee on a 49-1 vote with three abstentions. The EP’s press release is here and includes some radical proposed changes to the Commission’s draft.

Datonomy has taken a look at some of the key proposed changes which include the following:

  • Territorial scope: Under the draft GDPR, the Regulation applies to all processing of personal data “in the context of the activities of an establishment of a controller or a processor in the Union” and to the activities of controllers not established in the Union where the processing relates to offering goods or services to data subjects in the EU or monitoring their behaviour. The EP’s compromise text seeks to extend the reach of this provision in two ways: it adds the passage “whether the processing takes place in the Union or not”, and makes clear that the targeting of EU citizens is caught even where no payment is required for the goods or services offered. With this amendment, the Parliament tries to cover, in particular, data processing activities that take place in the cloud and/or overseas.
  • Fines: The Parliament harmonized the fines for violations of the GDPR. Under the Commission’s draft, fines ranged, depending on the provisions breached, from 0.5% of an enterprise’s annual worldwide turnover or EUR 250,000 up to 2% of annual worldwide turnover or EUR 1,000,000. The compromise text harmonizes those categories and increases the maximum fine for GDPR breaches to 5% of an enterprise’s annual worldwide turnover or EUR 100,000,000, whichever is greater.
  • Right to be forgotten and erasure: The controversial right to be forgotten is endorsed, and reinforced with an obligation on data controllers to take all reasonable steps to have that data erased by third parties. A right to have data erased following a court order is also added.
  • One-stop-shop: The compromise text confirms the one-stop-shop principle of the GDPR which provides that only the data protection authority of the country in which the business is located is competent for supervising such businesses’ data protection activities. On the other hand, data subjects have the right to lodge a complaint with a supervisory authority in any Member State if they consider the processing of their personal data is not in compliance with the GDPR.
  • Certification: According to the compromise text, controllers and processors within and outside the EU may ask any supervisory authority within the EU to certify that their processing of personal data complies with the GDPR. If this is the case they will be granted a “European Data Protection Seal” which allows for data transfers between businesses with such a seal even if one of them is based in a country that does not have an adequate level of data protection.
  • Data protection officer: The compromise text changes the requirements for appointing a data protection officer (“DPO”). While the draft GDPR required a DPO if an enterprise has 250 or more employees that carry out processing of personal data, the compromise text only relates to the number of data subjects concerned and requires the appointment of a DPO if personal data of more than 5,000 data subjects are processed in any consecutive 12-month period. Furthermore, the compromise text requires a DPO where the core activities of the controller or processor consist of the processing of sensitive personal data, location data or data on children or employees in large scale filing systems.
  • Breach notification: The good news is that the compromise text relaxes the time frame within which a personal data breach must be reported to the supervisory authority, from 24 hours to “without undue delay”.
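The “whichever is greater” ceiling in the fines bullet above works out as follows – a hypothetical illustration, not part of the compromise text itself:

```python
# Sketch of the compromise text's maximum fine: the greater of
# 5% of annual worldwide turnover or EUR 100,000,000.

def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Ceiling on a GDPR fine under the LIBE compromise text (illustrative)."""
    return max(0.05 * annual_worldwide_turnover_eur, 100_000_000)

# For an enterprise with EUR 3bn turnover, 5% (EUR 150m) exceeds the floor:
assert max_fine_eur(3_000_000_000) == 150_000_000
# For EUR 500m turnover, the fixed EUR 100m figure is the greater:
assert max_fine_eur(500_000_000) == 100_000_000
```

In other words, the fixed EUR 100m figure acts as a floor on the ceiling: only enterprises with turnover above EUR 2bn would face a percentage-based maximum.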

Procedure and what is next

The LIBE Committee’s vote gives lead Rapporteur Jan Philipp Albrecht a mandate for negotiations with the Council in order to reach a common agreement on the final wording of the GDPR; these negotiations should preferably be concluded before the EU Parliament elections in May 2014. The next meeting of the Council’s Justice Ministers on the data protection reform will take place on 6 December 2013, and an indicative plenary sitting of the Parliament is scheduled for 11 March 2014.

It is expected that during the inter-institutional negotiations the compromise text will be further amended, as certain aspects of the current version seem too radical to be supported by the Council (fines, for example, will probably be one of the parameters to be adjusted).

Accordingly, despite the LIBE Committee’s vote, there is still a long way to go before the new GDPR will formally be adopted, and it remains to be seen what the final detail of the reforms will look like, and whether the Commission will achieve its aim of getting the measure adopted at EU level before the European elections in the Spring. Datonomy will continue to monitor the progress and keep its readers updated on the future development of data protection in Europe.

Katharine Alexander

With recent reports of ever more daring cyber-attacks on the banking system, and claims that cyber criminals are exploiting weaknesses in the supply chain to hack major corporations, Datonomy looks at the current EU proposals on reporting security incidents which are aimed at tackling the problem – and the concerns and flaws identified by industry and by legislators.

What’s new? Some recent developments on the NISD

Datonomy readers will be familiar with the proposal for a new EU Directive on Network and Information Security (NISD) unveiled by the Commission in February, and set for its first reading in the European Parliament in early 2014. The aim of the new measures is to boost security by imposing new standards, and auditing and reporting requirements on market operators – including key infrastructure providers (e.g. energy companies) and, more controversially, ecommerce platforms and social networks.

Our earlier summary of those proposals can be found here. But what do organisations in those sectors think of the proposals? To inform its negotiating stance in Brussels, the UK Government has been taking soundings: from May to June, the Department for Business, Innovation & Skills (BIS) held a Call for Evidence seeking information about the impact the NISD could have on UK stakeholders, and on 6th September BIS published a summary of the responses.

The consultation drew responses from 88 organisations in the various sectors targeted by the new rules, including media, banking, transport, energy, health, telecommunications, providers of information society services, aerospace and defence. Their concerns and comments make interesting reading for other organisations in those sectors who are keen to future-proof their systems (and supply chain arrangements) against potential new obligations and sanctions – and, of course, for compliance and cyber security professionals, whose services will be in even greater demand if the proposed regime comes into force.

Over the summer the draft Directive has also come under scrutiny from the EU institutions, with many of the same concerns, interestingly, echoed in the draft report by Andreas Schwab, the proposal’s Rapporteur, and during a debate in the European Council in June.

In addition, earlier this month, a major briefing note was published by one of the European Commission’s DGs. Datonomy readers with an appetite for all 172 pages of this report will find analysis of security breach trends, as well as further critique of the NISD’s proposals.

What are the key issues and concerns?

The overarching theme in all these documents is scepticism about whether the proposed breach notification requirements are proportionate or indeed effective in terms of encouraging information sharing and/or reducing organisations’ vulnerability to attack.

The BIS Summary of Responses categorised the evidence into 14 key aspects of the Directive. To spare Datonomy’s busy readers having to read all 49 pages, some of the main themes emerging from the responses are as follows:

EU harmonisation – but what about the global picture?

Whilst there was overall support for a harmonised and non-fragmented regime for cyber-attack reporting, many respondents stressed the need for the measures to stretch further than the EU, as cyber security is a global issue.

Which businesses would be caught?

The proposed reach of the new obligations is one of the most controversial dimensions of the NISD. Stakeholders sought clarification of the ‘extremely broad’ definition of ‘Market Operators’ in Annex II of the Directive, and of why these sectors have been targeted. This was foreseen by the European Council in June 2013, which perceived the need for ‘detailed discussions’ on the definition of ‘market operators’. In general, stakeholders wanted the scope to be narrowed, so that businesses that in fact have no impact on critical infrastructure are not unintentionally (and unnecessarily) caught. Schwab agrees: he suggested the scope should be ‘limited to infrastructures that are critical in a stricter sense’, and consequently proposed removing providers of ‘information society services’ (ISS) from the obligations, with the focus remaining on energy, transport, health and finance.

The current draft of the NISD includes all of the following players within a non-exhaustive indicative list of ISS: ecommerce platforms; internet payment gateways; social networks; cloud computing services; and app stores. It is the inclusion of this potentially diverse range of businesses that has attracted the most criticism. Objections include the complexity of Internet and cloud value chains; the risk of generating data which is disproportionate to the benefits to be gained; and the stifling of innovation. One stakeholder, however, argued that the ambit should be wider – and that software developers and hardware manufacturers should not escape the new obligations.

Mandatory vs. voluntary reporting of cyber incidents

The strongest recurring theme was hostility to mandatory reporting. The Explanatory Memorandum for the proposed Directive argues that ‘the current situation in the EU reflecting the purely voluntary approach followed so far, does not provide sufficient protection against NIS incidents and risks’. However, stakeholders object to the idea of mandatory reporting for a number of reasons.

Firstly, many organisations already have reporting mechanisms in place, so insisting on further mandatory reports would create what respondents saw as unnecessary work and a potential for duplication in reporting, and would therefore be inefficient. Comments included:

‘within the UK there are already a number of effective information sharing forums, both formal and informal, which should be encouraged and not subject to greater regulatory pressure’.

The Report from the Economic and Scientific Policy Department of the European Parliament, on behalf of the Committee for Industry, Research and Energy states that the obligations burden those ‘already talking to regulators and perhaps already sharing certain types of cyber security information as part of their obligations towards sector-specific regulators’.

Schwab’s report also addressed this, stating that the proposal for National Competent Authorities ‘does not adequately take into account already existing structures’, and that the designation of more than one competent authority per Member State should therefore be allowed.

Moreover, stakeholders would prefer a voluntary, trust-based approach to reporting NIS incidents. They fear that a mandatory obligation would actually decrease the number of notifications, encouraging a ‘tick-box’ mind-set and a ‘compliance culture’. One stakeholder said:

‘…it’s vital that the companies do not adopt a ‘tick-box’ approach to security and understand that truly effective cyber security is a combination of having the right people, processes and technologies in place’. 

The Debate in Council also addressed ‘why a legislative, rather than a voluntary approach’ was being used, and the fact that Member States required further justification of this.

Another criticism levelled at the Directive is that mandatory reporting would penalise and disincentivise organisations with more advanced NIS systems, who by definition will detect, and therefore need to report, more attacks. 

Schwab also commented on this, stating that ‘potential sanctions should not disincentivise the notification of incidents and create adverse effects’, and therefore, where a market operator has failed to comply with the Directive, but not intentionally or by gross negligence, there should be no sanction.

When should notification be triggered? The meaning of ‘significant’ – a sectoral test?

Stakeholders identified the threshold at which the reporting obligation is triggered as another key measure in the Directive, and sought clarification of the meaning of ‘significant’. Without that clarification, stakeholders could not assess the impact the Directive would have on their businesses.

‘Significant’ is too broad a term; one stakeholder suggested narrowing the definition to ensure a breach would have to be ‘an incident that is not a routine or accidental breach of information technology compliance management policies but is anomalous and has the ability to create significant harm’. However, to exclude accidents would be to invalidate the aim of the proposed Directive given in the explanatory memorandum, which references the increase in the number and severity of incidents, including human mistakes. Schwab suggests adding a clear criterion for incidents which must be reported, which, if taken into account, and depending on the definition, may help resolve some concerns.

In addition, stakeholders thought that the definition of a ‘significant impact’ should be determined sector by sector, in order to ensure that ‘thresholds to trigger reporting of incidents are appropriate to the sector’.  

Yet more new regulatory bodies?

Particularly with regard to developing a Computer Emergency Response Team (CERT) and a National Competent Authority (NCA), stakeholders were concerned about the framework being too slow, especially considering that it took three years for the US CERT to have effect. There are also concerns that introducing another regulator could add more ‘confusion and complexity’ to the reporting process.

Stakeholders were also concerned that the NCA could publicise security incidents which had been reported, without the permission of the reporting organisation. Comments reflected concerns about loss of reputation, and the lack of an opportunity to remedy their systems. This may, again, act as a disincentive to voluntarily report breaches, alongside the ability of the NCA to impose sanctions.

What are the next steps?

The Call for Evidence has certainly given the UK Government plenty of food for thought as it prepares to negotiate in Brussels. BIS states that it may require further evidence from stakeholders in the future, in order to negotiate an instrument that ‘does not overburden business…; that encourages economic growth and innovation; and that fosters positive and sustainable behaviour change’. Therefore, businesses in the affected sectors should look out for further opportunities to inform and influence these proposals. The UK already has a number of voluntary initiatives up and running as part of its 2011 Cyber Security Strategy.

The first reading for the Directive is scheduled for 4th February 2014, according to the Procedure File found on the European Parliament website here. If the initial responses from businesses and Parliament and the Council (the institutions with power to determine the fate of the proposals) are anything to go by, the Directive has a long way to go before it is adopted.

There is no denying that cyber security is an issue. In the last few days alone, this Datonomist has been reading coverage of a cyber-plot to ‘steal millions of pounds by hijacking London high street bank’s computers’ (four men are appearing in court on 27th September as a result), and a report by the insurer Allianz about how hackers are accessing the computer systems of large corporations via access to their smaller suppliers.

Will the mandatory auditing and reporting requirements in the Directive ever become law, and if so who will they apply to? It is too early to say for sure. But in the meantime, security incidents which are getting ‘bigger, more frequent, and more complex’ will surely focus minds on improving information security throughout the supply chain – won’t they?


Carsten Kociok

Datonomy considers the German authorities’ reaction to the PRISM affair, and the wider practical consequences this could have for international transfers made under the auspices of U.S. Safe Harbor and the model contracts.

After the reports about extensive surveillance activities by foreign and European intelligence services, especially the American National Security Agency (NSA) and the UK Government Communications Headquarters (GCHQ), and possible transfers of personal data to them by American companies, European data protection authorities are raising their voices. In a letter dated 13 August 2013, the chairman of the Article 29 Working Party expressed his deep concern to the Vice-President of the European Commission, Viviane Reding, urging her to seek more clarification from the U.S. and announcing the intention of the European data protection authorities to conduct their own investigations into the compliance of foreign and European intelligence programmes with EU data protection principles. No concrete action has yet been taken at a European level, however.

Germany – USA transfer: no new authorisations under Safe Harbor

In Germany, the data protection authorities have already gone a step further, or at least announced their intention to do so. “Data protection supervisory authorities will not issue any new permission for data transfer to non-EU countries (including for the use of certain cloud services) and will examine whether such data transfers should be suspended.” With this statement, in a press release of 24 July 2013, the Conference of Federal and State Data Protection Commissioners in Germany attracted attention all over Europe. Rumours spread as to whether the German authorities even wanted to “suspend” the EU-U.S. Safe Harbor agreement, which serves as a vital basis for transatlantic flows of personal data.

In their press release, the Federal and State Data Protection Commissioners call on the German government to provide a plausible explanation of how the seemingly unlimited access of foreign intelligence services to the personal data of persons in Germany is effectively limited in accordance with the principles of Safe Harbor and the standard contractual clauses for data transferred to countries outside the European Union. They also address the European Commission, demanding a suspension of its decisions on Safe Harbor until further information is provided.

A recap on adequacy – transfer of personal data to the USA

Under Data Protection Directive 95/46/EC, the transfer of personal data by a European-based controller to a third country which does not ensure an adequate level of protection is prohibited. In order to ensure an adequate level of protection for personal data transferred to the U.S., in 2000 the U.S. government and the European Commission developed the Safe Harbor principles (Decision 2000/520/EC), which allow U.S.-based companies to take part in a self-certification program supervised by the Federal Trade Commission (FTC). Participating companies have to comply with several requirements regarding the processing of personal data; data transfers to these companies will then automatically be covered by an adequate level of protection. As an alternative to the Safe Harbor regime, the European data exporter and the U.S. data importer can agree standard contractual clauses (annexed to Decision 2010/87/EU) previously published by the European Commission. By using these clauses, an adequate level of protection will also be assumed. Permission from national data protection authorities is generally not required in these cases.

Suspension of data transfers by national authorities

According to their press release, the German authorities will not “issue new permissions” for data transfers to non-EU countries and will examine whether such data transfers should be suspended on the basis of the Safe Harbor framework and the standard contractual clauses. This announcement deserves some clarification:

Firstly, it has to be emphasized that the national data protection authorities may not suspend the Safe Harbor principles as a whole, or the underlying decision of the European Commission; this falls within the European Commission’s area of responsibility. Indeed, the Commission is currently undertaking an assessment of the Safe Harbor principles, and Ms Reding expects a result by the end of this year.

However, the competent authorities within the Member States may exercise their existing powers to suspend data flows to a particular organization that has self-certified its adherence to the principles, in order to protect individuals with regard to the processing of their personal data. In addition, further requirements have to be fulfilled, such as a determination that there is a substantial likelihood that the principles are being violated and that the continuing transfer would create an imminent risk of grave harm to data subjects. The same basically applies to the standard contractual clauses.

Several uncertainties

The German national authorities regard these requirements as fulfilled. Hence, from their point of view, they may use their existing powers to suspend data flows to the U.S. However, whether the principles of Safe Harbor are really being violated is highly questionable (the full and clear details of the surveillance activities still remain hidden) and would have to be examined closely, especially by the European Commission and a special committee formed of representatives of the Member States.

Other European data protection authorities do not share the view of their German counterparts. The Data Protection Commissioner of Ireland, for example, does not believe that there are grounds for an investigation. In the UK, the Information Commissioner’s Office (ICO) commented on the Article 29 Working Party’s letter to the European Commission, saying that it is “taking a keen interest” in the issue, but has so far taken no concrete action. In Belgium, the Commission de la Protection de la Vie Privée (CPVP) has not yet published any statement in this respect.

German authorities practise what they preach

According to a statement of 25 July 2013 by the data protection commissioner of Berlin, Alexander Dix, the Commissioner is currently not processing applications to authorize transfers to the U.S., and is requesting information from applicants as to the measures they take to prevent foreign intelligence services from accessing the information. The commissioner notes, however, that if a “U.S. provider offers encrypted means of storing data in a cloud, that would be a technical alternative to increase security”.

It has to be kept in mind that a suspension of data transfers to a U.S. company could result in commercial disadvantages, and perhaps economic damage, for German-based companies which rely on transatlantic transfers of personal data to the U.S. The statement from the data protection commissioner of Berlin shows that data transfers may still be legally possible, but companies will have to make more effort than before to convince the authorities of an adequate level of protection.

Outlook

The German data protection authorities are living up to their reputation as the European Union’s toughest privacy watchdogs. Nevertheless, uncertainty remains as to whether suspensions of data transfers will in fact be made, and whether they can legally be justified.

On another level, it will be interesting to observe if and how the German government and the European Commission respond to the demands of the European data protection authorities, and whether they will adapt or even suspend the existing rules of Safe Harbor or the standard contractual clauses. Finally, the ongoing examinations by the Art. 29 Working Party should be watched carefully, as their conclusions may well have an impact on international data transfers from the EU to the U.S. Whatever happens next, Datonomy readers should follow these developments closely, as the impact on international business must not be underestimated.

Rebecca Davis

On 3rd September, the new EU Directive 2013/40 on attacks against information systems came into force, requiring Member States to beef up national cybercrime laws and sentencing. The Directive updates and replaces the previous Framework Decision in this area and introduces new measures including criminal offences for attacks using malicious software, and increased sentencing of up to 5 years’ imprisonment for the most serious offences. The new measures are illustrative of the EU’s increasingly aggressive stance in tackling cyber-crime – but how different is the new legislation to that already in force? Datonomy explores.

Why the new Directive?

Last week on 3 September, the new EU Directive 2013/40 on attacks against information systems came into force. The Directive was proposed in 2010 as a replacement to the previous Framework Decision 2005/222/JHA, which criminalised various activities in relation to attacks on information systems, including illegal access to information systems, and illegal interference with systems and data. Following various high-profile cyber-attacks since the passage of the Decision, including a 2009 botnet attack that successfully infiltrated the computer systems of the UK, German and French defence forces, the EU was concerned that such existing legislation was inadequate to prevent cyber-crime and so considered further measures were required. 

What’s new?

The text of the new Directive is similar to the previous Decision, and contains almost identical offences in relation to illegal access to information systems and interference with systems or data. As in the Decision, there is again an offence of incitement, aiding, abetting or attempting such offences. "Information systems" is broadly defined to include any device or group of devices which automatically process computer data by means of a programme, as well as any data stored, processed, retrieved or transmitted by such device(s). The new Directive, however, introduces new offences of "illegal interception" of non-public transmissions of computer data (Article 6), and of the production, sale, procurement for use, import or distribution of tools intended to commit cyber-crime offences (Article 7). The latter is primarily targeted at the use of botnets and malicious software, which the European Parliament highlighted as a particular concern in the Directive's Preamble, citing the potential use of such tools to gain remote access to large numbers of computer systems and cause significant disruption and damage. To support this, new penalties of up to 5 years' imprisonment are introduced for the most serious systems or data interference offences, either where carried out within the framework of a criminal organisation, or where such attacks cause significant damage or affect key infrastructure. A new penalty of up to 3 years' imprisonment is also introduced for such offences where carried out through the use of tools specifically designed for that purpose.

In addition to penalising cyber-crime more harshly, the Directive also focuses on improving and strengthening police and judicial co-operation across the Union to counter such attacks. Citing both the frequently cross-border nature of cyber-crime and the "significant gaps and differences in Member States' laws and criminal procedures" in this area, the European Parliament has introduced a number of measures designed to facilitate wider-scale reporting and enforcement. In addition to the pre-existing requirement that Member States establish national contact points for cyber-security, Member States are now also required to use the existing G8 and Council of Europe network of 24/7 contact points to help combat cyber-crime, and must respond within 8 hours to any urgent request for assistance. They must further collect statistics and data on cyber-attacks, which will be transmitted to the European Commission for consolidated review and to help prevent such attacks in the future.

How will UK law need to change?

Whilst many of the new measures have already been implemented in the UK through the 2005 amendments to the Computer Misuse Act 1990 ("CMA"), it is likely that the new Directive will require further changes to UK legislation before its deadline for transposition on 4 September 2015. For example, although the CMA already contains an offence relating to the use of tools for the commission of computer misuse offences (under section 3A, inserted by the Police and Justice Act 2006), its current sentencing provisions provide for a maximum of 2 years' imprisonment and will likely need to be increased to take account of the new penalties. Similarly, although the offence of illegal interception of telecommunications data is already provided for under section 1 of the Regulation of Investigatory Powers Act 2000 ("RIPA"), this only concerns data transmitted over a public information network and may therefore need amending to cover transmissions over private networks. That said, the Directive is unlikely to require fundamental changes to existing UK legislation: its amendments to the previous Framework Decision are ultimately supplementary and enhancing in nature rather than a radical overhaul.
