Blanca Escribano

Datonomy takes a look at the recent recommendations in the Article 29 Working Party Opinion on the Internet of Things, and what these mean for players in the value chain.

Consumers’ fear of potentially intrusive new technologies is often cited as one of the main barriers to the adoption of the Internet of Things.

Regulators in the US and Europe are starting to get to grips with the issue. As Datonomy readers will be aware, the Article 29 Working Party recently issued an Opinion on the topic, with recommendations on how to embed privacy compliance at every stage of the IoT value chain.

In this paper, available on the Olswang website here, I consider the key privacy and security challenges posed by a connected world, and analyse the latest best practice for suppliers – from device manufacturers through to app developers and providers of operating systems.

Stakeholders who can demonstrate privacy compliance and ethical practices will be best placed to win consumers’ trust and gain competitive advantage in this brave new and connected world.

Olswang Cyber Alert October 2014 published

Ross McKean - October 27th, 2014

As Datonomy readers may know, October is Cybersecurity Month – a good time to read the second edition of Olswang’s Cyber Alert. There is no doubt that cyber security is rising up the international as well as the business agenda. NATO recently adopted an amendment to its charter to put cyber attacks on the same footing as armed attacks – see paragraph 72 of NATO’s Declaration.

In this edition:

  • In our lead article, EJ Hilbert, Managing Director, Cyber Investigations, Kroll EMEA, considers the true cost of cybercrime;
  • In our standards and benchmarks section we consider the new ISO standard for processing PII in the cloud and new standardisation guidelines for cloud computing SLAs, and look at the UK’s new certification scheme, Cyber Essentials;
  • On our regulatory radar we track the progress of EU legislation on data and cyber breach notification and of draft US legislation, and look in depth at new cyber security legislation in France and Germany and at proposals to strengthen criminal penalties in the UK. We also look at a first-of-its-kind ruling by the French data protection regulator, the CNIL, over supply chain security breaches, and at the impact UK fines are having on security compliance;
  • In our threat vectors section we highlight just some of the breaches and threats which have been in the headlines over the summer.

We hope Datonomy readers will enjoy the Cyber Alert. There is a printable PDF version of it here.

Threat Vectors

Tom Errington - October 22nd, 2014

A small selection of the cyber threats and statistics that have made recent headlines.

  • Sources including the censorship watchdog GreatFire have alleged that the Chinese authorities are staging a “man-in-the-middle” attack on Apple’s iCloud, just days after the iPhone went on sale in China. The attack is designed to intercept users’ iCloud account usernames and passwords, using a fake login site that looks exactly like the Apple iCloud login site. Read more from The WHIR and ITProPortal.
  • A new bug, which could be affecting hundreds of millions of computers, servers and devices using Linux and Apple’s Mac operating system, has been discovered. System administrators have been urged to apply patches to combat the bug, which has been dubbed “Shellshock”. Read more from the BBC.
  • US companies Home Depot, Supervalu and JPMorgan Chase & Co have all been hit by high profile cyber attacks.
  • Mark Boleat, head of policy for the City of London, has echoed comments made by New York’s financial regulator Benjamin Lawsky that an “Armageddon style” cyber attack will trigger the next global financial crisis by making a major bank “disappear”. Mr Boleat also said that the City of London police had uncovered “a huge underground economy, and a huge underground network” capable of conducting movie-style cyber attacks. Read more from The Telegraph.
  • As has been widely reported, there has been a highly targeted hack against celebrities, resulting in numerous nude photographs temporarily circulating in the public domain. In the fallout, cyber-thieves reportedly sent out fake notification messages to iCloud users to trick people into handing over their login details.
  • Similarly, 13 GB worth of photos from the popular mobile phone app Snapchat have been dumped online. The attack has been dubbed “The Snappening” and was carried out through the use of insecure third-party software designed to let users store “disappearing” snaps. Many are blaming Snapchat for the breach. Read more from The Independent.
  • Security firm Hold Security has announced the “largest data breach known to date”, after a Russian gang dubbed “CyberVor” stole over 2 billion credentials. More details here and here.
  • As ZDNet reports, new research published by FireEye claims that 68% of the most popular free Android apps could become a pathway for cybercriminals to lift sensitive data.
  • An interesting blog by CBR highlights six cyber security trends to watch out for during the rest of 2014, including more focus being placed on cyber education and an increase in infrastructure targeting by hackers.
  • The “very alarming” level of cyber threats organisations face is unlikely to fall for at least 10 years, says Suleyman Anil, head of cyber defence at the emerging security challenges division of NATO. Mr Anil asserted that there are three prime reasons for this: cyber crime is low risk with the promise of high profits; there has been an increase in opportunities to attack systems; and, most worryingly, there is growth in state-sponsored cyber attacks. Read more here.

 

Claire Walker

As reported in our first edition, there are two proposals making their way through the Brussels legislature which will change the legal landscape for the reporting of cyber attacks. These are the draft Network and Information Security Directive, which will impose reporting obligations on providers of critical infrastructure, and the draft General Data Protection Regulation, which will impose data breach reporting requirements on all data controllers. The summer has seen much institutional change in the EU, first with the European Parliament elections in May, then the start of Italy’s Council Presidency in July and now the reorganisation of the European Commission and the appointment of a new Commission President and Commissioners with effect from 1 November. There has been little procedural progress over the same period, although trilogue negotiations on the NISD have now begun, and on the GDPR the Council (representing the Member States) has, according to this Council press release, just reached a broad consensus on the security and breach provisions in Chapter IV of the draft Regulation (although the Council has not yet agreed its position on the whole proposal). We will continue to monitor progress in our Cyber Alert.

We summarise the state of play – as at 22 October 2014 – on both proposals in a table available here.

Thibault Soyer

With the text of the draft Network and Information Security Directive (“NISD”) still being negotiated between EU institutions, and the national transposition deadline for the Directive likely to be 18 – 24 months from the date of EU adoption, some Member States are pre-empting the new regime with national legislation of their own. France has already implemented the principles enshrined in the draft Directive via its Military Programming Act, which was published at the end of 2013.

 Overview

France has already implemented many of the principles enshrined in the Draft NISD into national law. The French Government published its strategy on Information systems and defence in February 2011. This included reviewing and where necessary strengthening cyber laws. As a result, the government passed Article 22 of Act n°2013-1168 dated 18 December 2013 (the “Military Programming Act”) which sets out several obligations applicable to vitally important operators (“VIOs”) which are comparable to those imposed by the Draft NISD on operators of critical infrastructures.

 It should be noted that Article 22 of the Military Programming Act has not yet come into force; various decrees and Ministerial orders which will spell out the detail of the regime have not yet been published – for example, those specifying security standards applicable to VIOs, the notification procedure, criteria defining an “incident” triggering the notification obligation and conditions and limits of “inspection” powers of the Prime Minister.

The French National Agency for the Security of Information Systems (“ANSSI”), the regulatory authority empowered to define implementing and enforcement measures for Article 22, is currently working with the French government as well as with public and private entities to define the conditions for applying this framework. ANSSI had announced that the implementing decree was due by autumn 2014. As of the date of publication of this article, however, no such decree has been published.

Once published, the decree will set out general principles; ministerial orders will then follow to define sector-specific rules (if any) and implementation deadlines. At a cyber security conference in September, the ANSSI director indicated that France was “the first to go down this road. Other countries have tried, without succeeding” and that implementation conditions remain “unclear”, even at the NATO level (which therefore does not provide a reference framework for the ANSSI).

 NISD vs Military Programming Act – how do they compare?

 Below we highlight the key similarities and differences between the French legislation and the proposed NISD. Note that there are significant differences between the Commission’s original draft of the NISD published in February 2013 and the amended text approved by the European Parliament in March 2014. It remains to be seen what the final compromise text of the NISD agreed by all three EU institutions will look like. As things stand, here’s how the new French regime compares to the proposed EU-wide regime.

Key similarities:

  • Breach notification deadlines: the Draft NISD (as amended by the European Parliament) requires breach notification “without undue delay” (Article 14 (2)) and the Military Programming Act requires notification “without delay”.
  • Audits: the broad obligation for VIOs to subject themselves to security audits under the NISD (as originally proposed by the European Commission, Article 15(2)) is similar to the “inspection” obligation under the Military Programming Act. However, the EP’s text has significantly watered down the audit requirement.

Key differences:

  • Scope: the notions of VIOs in the Military Programming Act and of “vitally important sectors” under the relevant French legislation are slightly broader than the scope of “critical infrastructure” (in the sense of Council Directive 2008/114/EC) and of “market operators” in the Draft NISD (see the table for more detail).
  • Inspection and audit: the extent of inspection/auditing powers of VIOs by the French Prime Minister is deeper than the equivalent proposals under the EP’s version of the Draft NISD.
  • Sanctions: the French law includes specific sanctions for a VIO’s failure to comply with any of the obligations specified in Article 22, following a formal notice (up to EUR 750,000 for corporate entities). However, such formal notice is not required prior to imposing a fine in case of failure by a VIO to notify the Prime Minister “without delay” of a cyber-breach.
  • Notification triggers: no materiality threshold for cyber security incidents triggering the notification requirement is yet provided by the Military Programming Act, compared to the “significant impact” threshold and criteria included in the European Parliament’s proposed version of Article 14(2) of the Draft NISD.
  • Notification to the public: whereas the Draft NISD (European Parliament’s version, Article 14 (4)) provides for precise criteria and conditions for notification to the public of cyber security incidents, the Military Programming Act remains silent on this possibility.

For further details on the similarities and differences between the Draft NISD and the Military Programming Act, please refer to the comparative table available here.

UK: impact of ICO fines on data security

Tom Errington - October 22nd, 2014

The ICO has published a review of the impact of its civil monetary penalties (CMPs), the vast majority of which have related to security breaches. The review canvassed the views of representatives from 14 organisations who had received a CMP and 85 peer organisations who had not. The findings suggest that, overall, CMPs are effective at improving data protection compliance. However, some respondents felt that there was a lack of transparency about how CMPs have been calculated, and some showed a lack of understanding of just what poor practices trigger the CMP threshold.

UK: Cyber security certification scheme launched

Katharine Alexander - October 22nd, 2014


Following the consultations on the requirements for a preferred standard for cyber security, which concluded in November 2013 (background information here), the Government has launched a new cyber security certification scheme. The scheme focuses on five main controls for basic cyber hygiene:

  • boundary firewalls and internet gateways;
  • secure configuration;
  • access control;
  • malware protection; and
  • patch management.

Businesses can apply for a “Cyber Essentials” certificate (based on independently verified self-assessment) or a “Cyber Essentials Plus” certificate (offering a higher level of assurance through external testing). The scheme is designed to be affordable and offers a snapshot of the organisation’s cyber security effectiveness on the day of assessment. Guidance on meeting the Cyber Essentials requirements can be downloaded from the government-approved cyberstreetwise website here, and a summary of the scheme can be found here. Vodafone has become the first telecoms company to gain the UK Cyber Essentials Plus accreditation.

Claire Walker

These new guidelines were published in June by the Cloud Select Industry Group.

Forming part of the European Commission’s wider Cloud Computing strategy, which was unveiled in 2012, the guidelines have been described as a first step towards standardised building blocks for terminology and metrics in cloud SLAs. They aim to improve the drafting clarity and customer understanding of cloud SLAs. European Commission Vice-President Viviane Reding said: “[the] new guidelines will help generate trust in innovative computing solutions and help EU citizens save money. More trust means more revenue for companies in Europe’s digital single market.” The 62-page guidelines – created by a drafting team which included participants from IBM, Amazon, Microsoft and T-Systems – deal with service levels relating to availability, reliability, security, support services and data management, and take into account the guidance of the Article 29 Working Party.

US: Controversial cyber security bill proposed

Tom Errington - October 22nd, 2014

In July the Senate Intelligence Committee approved a bill for the Cybersecurity Information Sharing Act (“CISA”) that would encourage companies to share information about threats with each other and the federal government. The bill has been controversial, especially in the wake of Edward Snowden’s revelations about access to US citizens’ data, as it would give the NSA wider powers to access, retain and use data for “a cybersecurity purpose”. This is rather broadly defined as “the purpose of protecting an information system or information that is stored on, processed by or transiting an information system from a cybersecurity threat or security vulnerability”. Indeed, an open letter from a number of privacy, civil liberties and open government groups has been published criticising the bill. Further coverage can be found here, here and here. The bill is expected to see a full vote in the Senate this year. The text of the bill is available on the Congress website here.

In August this year (with not a great deal of fanfare), ISO published a new security standard for cloud services: ISO/IEC 27018 – Information technology – Security techniques – Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors (“ISO 27018”). Datonomy reported in May this year that this new standard was on its way. Its publication is a welcome step towards ensuring compliance with the principles of privacy laws and further boosting customer confidence in cloud computing technologies.

 Here are Datonomy’s questions and answers on this new security standard.  

 What’s the aim of ISO 27018?

 The standard’s aim is to create a common set of security controls that can be implemented by a public cloud computing service provider that is processing personal data on behalf of another party.   

 How is ISO 27018 structured?

 The standard is based on (and follows a similar structure to) ISO/IEC 27002 – Information technology – Security techniques – Code of practice for information security controls (“ISO 27002”).  In short, ISO 27018 tailors ISO 27002 for use by a public cloud computing service provider.  The structure breaks down into three key parts:

  1. ISO 27018 provides a reference to ISO 27002 where the controls in ISO 27002 are applicable to cloud computing service providers processing personal data. 
  2. ISO 27018 sets out additional guidance and/or information for these controls, where necessary for cloud computing service providers processing personal data. 
  3. There are additional controls (and associated guidance) in the Annex to the standard which are not covered in ISO 27002.

 What’s in ISO 27018?

The main section of ISO 27018 covers the same areas as ISO 27002: Information security policies; Organization of information security; Human resource security; Asset management; Access control; Cryptography; Physical and environmental security; Operations security; Communications security; System acquisition, development and maintenance; Supplier relationships; Information security incident management; Information security aspects of business continuity management; and Compliance.

 The Annex to ISO 27018 covers additional areas: Consent and choice; Purpose legitimacy and specification; Data minimization; Use, retention and disclosure limitation; Openness, transparency and notice; Accountability; Information security; and Privacy compliance.

From a legal perspective, ISO 27018 can be seen as having elements of a controller-to-processor agreement and elements of technical and organizational security measures.

 Who can use ISO 27018?

ISO states that “ISO 27018 is applicable to all types and sizes of organizations, including public and private companies, government entities, and not-for-profit organizations, which provide information processing services as PII processors via cloud computing under contract to other organizations.”

 How can ISO 27018 be used?

The organizations listed above can use the standard to select applicable controls when implementing a cloud computing information security management system, and/or as a guidance document for implementing those controls. Like ISO 27002, ISO 27018 does not specify which controls are applicable to which organization. This is not surprising, as it would be near impossible to do so. To address this, ISO/IEC 27001 requires a risk assessment to be performed to identify which controls are required and to what extent they should be applied. A new standard, ISO 27017, which is still in the pipeline, might also help fill this gap.

Providers that comply with ISO 27018 will have a stronger selling point, as compliance demonstrates adherence to important data protection standards. There are also good arguments that a provider’s self-audit against ISO 27018 should be accepted as proof of compliance with technical and organizational measures (as required, for example, under EU law for data processing agreements).

 What are the limitations of ISO 27018?

 Most of the controls in the standard will also apply to a controller of personal data.  However, the controller will, in most cases, be subject to additional obligations, not included in this standard.

  1. Cloud beyond personal data.  ISO 27017, which has not been published yet, will deal with the application of ISO 27002 to the use of cloud services and to the provision of cloud services generally.  ISO 27018 is focused on cloud services that process personal data.
  2. Legal nuances. It remains to be seen whether this ISO standard will be widely adopted, although it is being heavily promoted by the major cloud providers. The standard broadly addresses the key obligations found in privacy laws around the world (which have much in common), but it does not capture every national nuance. Customers and providers alike will therefore still have to consider those nuances.
  3. Additional sector rules. Privacy laws are often supplemented by sector-specific rules that ISO 27018 does not deal with. Many readers will be familiar with the additional rules imposed in particular industry sectors, e.g. financial services, the public sector, health and education. Customers and providers in these sectors will still have to consider those additional rules.

 Conclusion

This is a helpful standard for the cloud industry. ISO 27018 is not a management standard (cf. ISO 27001) and is therefore unlikely to be certified against; the same is true of ISO 27017. However, it provides a useful reference guide for customers and suppliers alike – it is the first global standard of its kind and is a suitable means for globally operating providers to demonstrate their data protection/privacy compliance, instead of having to cope with different national standards in various jurisdictions. If this standard is adopted and accepted widely, then customers and providers can use it to evaluate what protections are in place and, more importantly, what’s missing!

Christian Leuthner

On August 19, 2014, more than one year after the first draft bill of an IT Security Act, the German Federal Ministry of the Interior published a new draft bill of the Act, aimed at boosting the security of information technology systems. The full title of the legislation is “Entwurf eines Gesetzes zur Erhöhung der Sicherheit informationstechnischer Systeme” (IT-Sicherheitsgesetz) (“IT Security Act”). The new rules are still subject to change but look likely to come into force in early 2015.

General overview

In fact, the IT Security Act will not be a standalone law, but will amend the Act on the German Federal Office for Information Security, the Telecommunications Act, the Telemedia Act and the Act on the Federal Criminal Police Office. The IT Security Act contains five central topics and provides for:

  • IT security in companies (see A. below)
  • Protection of individuals/citizens while using networks (see B. below)
  • Securing federal IT (see C. below)
  • Strengthening the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik “BSI”) (see C. below)
  • Extension of competences of the Federal Criminal Police Office (Bundeskriminalamt “BKA”). (see C. below)

The aim behind the IT Security Act is to turn German IT systems and critical infrastructures into the safest systems in the world.

A. IT security in companies

Scope: which organisations are caught by the new rules?

Under the new IT Security Act, providers of critical infrastructures (“CI Providers”) shall implement an acknowledged standard of technical measures to secure their IT systems and inform the authorities about certain attacks or IT incidents without undue delay. “Critical Infrastructures” will be defined by a separate regulation to be enacted once the IT Security Act is in place. According to the proposed amendment of the Act on the German Federal Office for Information Security, critical infrastructures shall, however, include establishments, plants and parts thereof in the sectors of energy, IT, transport and traffic, health, water, food, finance and insurance. Providers of critical infrastructures that qualify as micro, small or medium-sized enterprises within the meaning of Recommendation 2003/361/EC are excluded from the scope of the IT Security Act.

Expressly excluded from the term “critical infrastructures” are federal communication technologies that are used for internal communication between authorities and for communication with third parties. This exclusion has been criticised by industry associations because it means the IT Security Act does not apply to the biggest critical infrastructure in Germany.

There is also considerable criticism that the term “Critical Infrastructures” still requires concrete definition; to date, it is not fully clear which organisations are covered by the new rules.

  1. Security standards

CI Providers, except providers of public telecommunication networks or public telecommunication services, will be obliged to implement adequate organizational and technical precautions and other measures to protect the IT systems, components and processes that are essential to the provision of the critical infrastructure (“Security Measures”). Security Measures must be fully implemented within two years of the IT Security Act coming into effect. CI Providers will be obliged to demonstrate those Security Measures every two years by providing sufficient audit reports or certificates.

CI Providers in the same sector and their relevant sector associations may propose their own standards for Security Measures, giving concrete form to the general requirements.

  2. Notification duties

CI Providers must designate at least 15 individuals as contact points for warning and alerting (Warn- und Alarmierungskontakte). CI Providers are obliged to notify the BSI via those designated contacts, without undue delay, of any interference or impairment that could lead to a breakdown or impairment of the critical infrastructure. In such cases, providers may notify the BSI on an anonymous basis. The notification must describe the technical framework used by the CI Provider to provide the critical infrastructure, the Security Measures implemented and the provider’s sector. In cases where the interference or impairment has already led to a breakdown or impairment, an anonymous notification is not permitted.

  3. Further obligations

Companies in the same sector may designate one common contact person managing the communication between CI Providers and the BSI.

Third parties may request information on Security Measures and on security incidents from the BSI, unless CI Providers have a legitimate interest in non-disclosure or the disclosure of such information would impact material security interests of the general public. CI Providers have to consent to the disclosure of information with regard to actual breakdowns or impairments as well as to the results of the regular audits.

B. Protection of individuals/citizens while using networks

The protection of individuals and citizens is aimed mainly at telemedia and telecommunications providers, both of which shall implement an acknowledged standard of technical security measures.

Telecommunications providers are obliged to inform the Federal Network Agency (Bundesnetzagentur) in case of impairment of telecommunication networks that could lead to significant security incidents, e.g. unauthorized access to users’ systems. The Federal Network Agency may request a detailed report from the providers in case of an actual incident. In case of an incident, the Federal Network Agency may either inform the public itself or oblige the provider to do so.

Providers are obliged to inform their customers about incidents on the providers’ data processing systems (e.g. malware and cyber-attacks) and provide them with information and, if applicable, software or applications to remove or combat such malware or cyber attacks.

In addition, telemedia providers are permitted to process and use usage data, and telecommunications providers may process and use inventory and traffic data, in order to identify, limit or eliminate impairments.

C. Securing federal IT and strengthening federal authorities

The BSI is entitled and obliged to determine minimum security requirements for federal authorities in order to secure federal IT networks. The BSI is entitled to assess IT systems and services and to publish the results for the purpose of improving IT systems and services. The BSI is now entitled to inform the public not only about malware but also about the loss of data. The BKA is entitled to investigate more kinds of cybercrime, such as data espionage and computer fraud.

D. Olswang comment and outlook for the new act

The IT Security Act is a step in the right direction – creating high security standards where necessary, increasing protection for individuals and helping individuals to help themselves in the event of security incidents. The opportunity to notify the BSI anonymously of security incidents strikes a fair balance between the reputation of providers and the protection of individuals and the general public. However, as long as the term ‘critical infrastructures’ is not defined, it is not fully clear which companies fall within the scope of the IT Security Act. Moreover, the provider of the biggest critical infrastructure of all, the Federation, is excluded from this scope. The costs of implementation which providers will face are also currently not easy to quantify: industry associations estimate costs of more than one billion euros for the implementation and maintenance of the technical and organizational measures under the IT Security Act.

In addition, there are privacy concerns with regard to the processing and use of usage data by telemedia providers and of inventory and traffic data by telecommunications providers. These companies would be entitled to store a large amount of information under the cloak of IT security and share it with federal authorities – precisely the kind of retention the ECJ objected to when it invalidated the Data Retention Directive.

The IT Security Act is now subject to interdepartmental coordination and will then be discussed with industry and civil society stakeholders. A formal procedure for stakeholder participation does not exist; industry associations will likely ask their members to provide feedback and raise issues so that the IT Security Act can be amended. According to a statement made previously by the German Federal Ministry of the Interior, the IT Security Act is expected to be enacted in early 2015. The content of the final draft will be eagerly awaited.

It is interesting to see that Germany is introducing its own legislation on cyber security ahead of the formal adoption of the EU Network and Information Security Directive, which is still to be agreed by the EU institutions and Member States. However, the EU Network and Information Security Directive and the IT Security Act cover similar topics. The IT Security Act can be seen as Germany’s position in further discussions about the EU Network and Information Security Directive. Next steps, and the final text of the Act, will be monitored with interest by Datonomy.

 

 

Thibault Soyer

CNIL’s recent ruling against Orange has wider lessons for all data controllers who rely on processors and sub-processors to process personal data. Datonomy’s correspondent in Paris analyses the issues.

Facts

In its deliberation dated 7 August 2014 (but only published on 25 August), the CNIL issued, for the first time, a public warning against a telecoms operator on the basis of personal data breach requirements (pursuant to Article 34 bis of the French Data Protection Act 1978); no fine was imposed on Orange, but the sanction consists in the publication of the CNIL’s ruling on its website. On 25 April 2014, Orange notified the CNIL of a technical failure at one of its marketing sub-processors, resulting in the leak of personal data (name, surname, date of birth, email address and phone number) concerning 1.3 million subscribers. Following this notification, the CNIL inspected Orange’s and its processors’ premises and found that Orange had not fulfilled its obligation to ensure the security and confidentiality of personal data held with that sub-processor, despite the fact that the security breach had been adequately notified and dealt with by Orange.

Sanction grounds

The point of particular interest in this decision is that, although Orange was found to be compliant with the personal data breach requirements, notably by having notified the CNIL and data subjects of the breach “forthwith”, the notification drew the French privacy watchdog’s attention to the security and confidentiality measures imposed by Orange on its subcontracting chain. The key issues highlighted by the CNIL were as follows:

  • although its first (main) processor had complied with the security and confidentiality measures imposed on it contractually, Orange had not ensured that those security and confidentiality provisions were passed on back-to-back in the agreement between the processor and its sub-processors;
  • Orange had not conducted any security audit on the version of the marketing application specifically developed by its sub-processor, which would have allowed it to identify the security breach; and
  • Orange did not sufficiently protect customers’ personal data when updating it and sending it to its processors (by unencrypted email).

Lessons to be learned and security standards to be set to anticipate data breaches

This case stresses the utmost importance for electronic communications operators of being proactive and planning appropriately, notably by complying with the high preventive standards that regulators expect data controllers to adopt in order to demonstrate that they have implemented “appropriate” security measures under the current data breach rules, as indicated in the March 2014 Data Breach Opinion issued by the EU’s Article 29 Working Party. On top of that, this ruling shows how important it is for electronic communications operators to impose on their processors and sub-processors security obligations at least as stringent as those applicable to the operators themselves. For further information on this WP29 Opinion, please see the report by my fellow Datonomist Claire issued in April this year, and the coverage of the underlying Regulation by Carsten in July 2013.

 

Andreas Splittgerber

Our quarterly IT and data protection newsletter keeps you informed of current legal issues, decisions and events in the technology sector in Germany. We hope you enjoy reading.

This edition covers the following topics.

I. Canvas Fingerprinting – Tracking without Cookies

II. District Court of Berlin: WhatsApp must provide terms and conditions in German, and improve the legal notice

III. “No-Spy decree” of the German Federal Ministry of the Interior requires a guarantee in procurement procedures

IV. German Supreme Court: Collection of minors’ personal data for marketing purposes in the course of a competition is not permitted

V. ECJ: Copies on the user’s computer screen as well as in the ‘cache’ of a computer’s hard disk, created in the course of viewing a website, do not infringe copyright

This is the link to the full version.

Mel Shefford

With the awareness that future cyber-attacks could have very serious consequences, the Government has proposed amendments to the Computer Misuse Act 1990. In this post we look at the current offences under the Act as well as recent amendments proposed by the Serious Crime Bill.

In August 2013, the outgoing US Secretary of Homeland Security Janet Napolitano gave a farewell speech in which she warned: “Our country will, at some point, face a major cyber event that will have a serious effect on our lives, our economy and the everyday functioning of our society.”

Her message vocalised what governments, businesses and organisations around the world are well aware of: as we become increasingly reliant on technology, and as systems become even more interconnected and complex, the risk of a serious cyber-attack increases. And whilst we currently associate cyber-attacks with access to personal data and damage to commercial interests, in the future the impact could be even more serious. For example, future attacks could result in major damage to the economy, national security, the environment and/or human welfare.

With this in mind, the British Government has been ramping up efforts over the past few years to tackle cyber-crime. For example, in 2011 it launched the National Cyber Security Strategy; in 2013 the National Cyber Crime Unit started operations; and £860 million has been committed until 2016 to boost the UK’s cyber capabilities. More recently, BIS announced the Cyber Essentials scheme to help businesses protect themselves against cyber-attacks.

Most Datonomy readers will be well aware of how important it is for organisations to be proactive about preventing data breaches, and how devastating the consequences can be if a breach does occur. But what are the consequences for hackers who are caught?

Offences under the Computer Misuse Act 1990

In the UK, the hacker might be guilty of one or more of the following offences under the Computer Misuse Act 1990:

  • Obtaining unauthorised access to computer material (for example, using another person’s ID and password to log onto a computer and access data). The maximum penalty is a 2 year prison sentence and/or an uncapped fine (Section 1).
  • Obtaining such access in order to commit or facilitate the commission of another offence, such as theft of funds or data. The maximum penalty here is a 5 year prison sentence and/or an uncapped fine (Section 2).
  • Obtaining such access in order to intentionally or recklessly impair the operation of any computer, a program or the reliability of data held on a computer; prevent or hinder access to any program or such data; or enable such impairment, prevention or hindrance. This offence carries a maximum penalty of 10 years in prison and/or an uncapped fine (Section 3).
  • Making, supplying or obtaining articles for use in any of the above offences. This carries a maximum 2 year prison sentence and/or an uncapped fine (Section 3A).

The Serious Crime Bill

In June, the Queen announced the Serious Crime Bill which (among other aims) seeks to amend the Computer Misuse Act so that serious cyber-attacks are properly punished. In particular, there is a concern that the current custodial penalties – which have been described as “woefully inadequate” by a member of the House of Lords – are not sufficient for serious cyber-attacks. The two main changes proposed by the Bill are as follows:

(1)   The creation of a new offence to cover serious cyber-attacks

This new offence would be committed where a person knowingly, and intentionally or recklessly, commits any unauthorised act in relation to a computer which causes or creates a significant risk of serious damage to human welfare, the economy, the environment or national security in any country.

An act causing damage to “human welfare” would be something causing loss to human life; human illness or injury; disruption of a supply of money, food, water, energy or fuel; disruption of a system of communication; disruption of facilities for transport; or disruption of facilities relating to health.

Commission of this offence would be punishable by up to 14 years’ imprisonment and/or a fine, except where the act causes loss to human life, human illness or injury, or serious damage to national security, in which case the penalty is life imprisonment and/or a fine.

The Home Office has acknowledged that no cyber-attack has occurred to date which would engage this new offence. However, the idea is to ensure that there are substantive penalties if a serious attack were to occur in the future. Indeed, the Home Office anticipates – and no doubt hopes – that the number of prosecutions for this offence will be minimal.

(2)   Implementation of the EU Directive on Attacks Against Information Systems (2013/40/EU)

This Directive is designed to ensure that the EU has minimum rules on cyber offences and sanctions, and to ensure co-operation between EU member states in relation to cyber-attacks. The UK is already compliant with the Directive, except for the following two aspects:

  • Tools for the commission of an offence

The existing Section 3A offence of making, supplying or obtaining articles for use in another offence under the Act requires the prosecution to prove that the defendant obtained the tool with a view to it being supplied for use to commit or assist in the commission of the other offence. The Bill seeks to amend this offence so that it covers circumstances where an individual obtained a tool with the intention to use it themselves to commit or assist in the commission of a separate offence. Given the increasing ease with which individuals can now obtain malware, the Home Office hopes that this amendment will be instrumental in helping to avoid cyber-attacks in the first place.

  •  Extension of the extra-territorial jurisdiction of the Act

The Directive requires EU member states to establish their jurisdiction over cyber offences which are committed by their nationals. The Act currently requires the prosecution to demonstrate a “significant link” to the UK for the section 1 and 3 offences, essentially being that the defendant or computer was in the UK at the time of the offence. To conform to the Directive, the Bill extends the list of possible significant links to the UK to include the defendant’s nationality. This would mean that a UK national could be prosecuted for an offence where the only link to the UK is his or her nationality, provided that the offence is also an offence in the jurisdiction where it took place.

The legislative timetable and process

The Bill started in the House of Lords, and at the time of writing, the House of Lords report stage – where the Bill will be examined in more detail and the Lords will vote on proposed amendments – is scheduled to commence on 14 October. After a third reading at the House of Lords, it will then be considered by the House of Commons. The EU implementation aspects will need to be in force on or before 4 September 2015 in order to meet the EU transposition deadline, but the rest of the Bill will no doubt be subject to more scrutiny.

Handbook on European data protection law

Andreas Splittgerber - June 6th, 2014

The European Union Agency for Fundamental Rights has published a Handbook on European data protection law, to which I was a contributor.

This handbook is designed to familiarise legal practitioners who are not specialised in the field of data protection with this area of law. It provides an overview of the European Union’s and the Council of Europe’s applicable legal frameworks.

The Handbook can be found here.

Olswang publishes first edition of Cyber Alert

Jai Nathwani - June 6th, 2014

The first edition of Olswang’s Cyber Alert, a regular round-up of regulation, best practice and news from our international cyber breach and crisis management team, has been published.

Please click here for a printable PDF version.  In this first edition we cover:

In the last few months we have seen news headlines ranging from the international operation against the GameOver Zeus botnet to state-sponsored hacking, arrests over the BlackShades malware, and the release of the latest Information Security Breaches Survey, not to mention continued concern over the Heartbleed vulnerability, so there is much for businesses to consider. Click here for a summary of some of the latest headlines.

It is also worth mentioning the European Court of Justice’s Google Spain ruling in May, which is arguably the most profound internet case of this decade and which continues to send shockwaves through the tech sector. Whilst Google Spain does not relate to cybersecurity specifically, it does establish that in some circumstances a non-European company is answerable to the European courts and accountable under European data protection laws, including the requirement for appropriate technical and organisational measures to be in place to protect personal data. Read Olswang’s analysis of Google Spain here.

Claire Walker

Last week’s seismic decision in the Google Spain case continues to generate many column inches of comment and will no doubt continue to do so for some time. Datonomy’s colleagues in Olswang’s international privacy team have just published a paper considering the practical implications of this decision in the round. You can access it at this link. The paper considers:

  • Google’s practical options in terms of next steps
  • the implications for individuals’ rights
  • the implications for online publishers
  • what it means for the Right To Be Forgotten under the new EU Regulation
  • the impact on wider “data debates” over other technologies such as email scanning and Google Glass
  • what it tells us about the workings of Europe’s highest court, and tactical tips for bringing referrals on points of EU law.

The paper is also available in PDF here.

Google loses historic case on right to be forgotten

Marcos García-Gasco - May 14th, 2014

The Court of Justice of the European Union (“CJEU”) has made a historic ruling in the case of Google v Spain [Case C‑131/12]. The CJEU ruled that Google is responsible for the processing that it carries out of personal data which appear on web pages published by third parties.

The decision is something of a surprise given that it goes against the Advocate General’s Opinion delivered last year, and indeed is quite a bold statement by the CJEU on what it sees as the future of data protection in the internet age and the legal responsibilities of search engines.

Background

The case arose after a complaint that was brought against Google by a Spanish individual, Mario Costeja González, to the Spanish Data Protection Authority (AEPD). Mr González had been the subject of an auction notice for unpaid debts that was published in a widely-read newspaper in Spain around a decade ago.  Despite the time that had elapsed since this initial publication, this was still featured prominently in a Google search for Mr González’s name.  Mr González argued that this was in breach of the EU Data Protection Directive (the “DPD”) as the data was not current and that in such circumstances, there should essentially be a “right to be forgotten.” 

The AEPD agreed, and Google subsequently appealed to the Spanish National High Court which in turn referred questions on the meaning of the DPD to the CJEU.

The decision

Despite the Opinion of Advocate General Jääskinen, who considered last year that search engine service providers should not be held responsible under the DPD for personal data appearing on the third-party web pages they process, the Grand Chamber of the CJEU concluded as follows in the judgment published today:

  • The activities carried out by Google, namely ‘finding information published, indexing it automatically, storing it temporarily and making it available to the public’, must be classified as ‘processing of personal data’ for DPD purposes.  Furthermore, the operator of the search engine must be regarded as the ‘data controller’ regardless of the fact that they have no control over the underlying data itself. 

 

  • Google falls within the territorial scope of the DPD as its Spanish subsidiary is intended to promote and sell advertising space directed at the citizens of that Member State, which is sufficient to be considered ‘established’ in that Member State.

 

  • Google must remove links to third-party websites that are returned by a search on an individual’s name, where those websites contain personal data relating to the individual concerned. This is subject to certain exceptions, such as public figures, and to achieving a proper balance between the data subject’s fundamental rights and the right to information.

Conclusion

This decision has far-reaching consequences for Google in Europe. The bar for establishing a public interest in search engines processing data relating to individuals in the form of search returns seems very high, and where the data relates to an individual who is not a public figure, it is rather doubtful that this could ever be permitted.

There is no clear view of how Google will respond to the judgment, but there must be a significant possibility that it will have to establish an elaborate administrative system to deal with individuals who complain about its use of their data, together with sophisticated technical means to ensure that such data is blocked.

It is also unclear how the ruling will affect the ongoing negotiations of the General Data Protection Regulation. Early drafts of the Regulation included a broad “right to be forgotten”, though the latest draft has watered this down somewhat. Commissioner Viviane Reding, who is championing the draft Regulation, welcomed today’s ruling, saying it was a “strong tailwind” for the proposed data protection reforms in Europe. The reality is that the Regulation still faces a rocky road before it is passed, and some commentators are already questioning why we need new rules for a right to be forgotten at all given today’s ruling.

What is clearer is that the ruling is a sign of the CJEU’s reluctance to allow non-EU-based multinationals to evade European laws where they are clearly otherwise established here.

Jai Nathwani

The ISO is developing specific new security standards for cloud services, which are expected to be published in 2015. This is another welcome step towards ensuring compliance with the principles in the Data Protection Act and further boosting customer confidence in cloud computing technologies.

Why the new standard?

The development of the new standard is a direct response to one of the key goals announced in the 2012 European Cloud Computing Strategy (the “Strategy”). The Strategy was published by the European Commission with the aim of promoting the rapid adoption of cloud computing in all sectors of the economy in order to boost productivity. The Commission’s own Cloud Standards Roadmap notes that concerns over security are often cited as a barrier to migrating data to the cloud. Under current rules, liability for breach of data protection rules rests with the data controller; an auditable standard for cloud service providers who process personal data is therefore crucial to demonstrate the supplier’s resilience and hence enable a customer to meet its own regulatory obligations on data security. The need for a recognised benchmark was further endorsed by the Information Commissioner’s guidance on cloud computing, published in 2012. The guidance states that when selecting a cloud service provider, the data controller must choose a processor providing sufficient guarantees about the technical and organisational security measures governing the processing to be carried out, and must take reasonable steps to ensure compliance with those measures. Audited compliance with a standard would be an appropriate way for data controllers to ensure they comply with their data protection obligations, and could be written into the contract between a cloud services supplier and a customer.

The new ISO 27017 and 27018

In response to the need for a cloud computing security standard, the International Organisation for Standardisation (“ISO”), which is already responsible for benchmark standards for due diligence on data processors, is developing two cloud-specific standards, ISO 27017 and ISO 27018. The two standards are due for official release in 2015.

The new standards are based on the familiar standards of ISO 27001 and 27002. ISO 27001 provides a framework of security controls that can be adapted and applied to an organisation of any size to create a security standards framework. ISO 27002 provides for the practical implementation of the ISO 27001 framework in an organisation. The 27001 and 27002 standards apply generally to the operation of ICT systems. The two new standards under development apply 27002 specifically to cloud computing.

ISO 27017 deals with the application of the ISO 27002 specification to the use of cloud services and to the provision of cloud services. It will recommend cloud-specific information security controls to supplement those recommended by ISO 27002.

ISO 27018 deals with the application of 27002 to the handling of Personally Identifiable Information (“PII”) and will serve as a code of practice for PII protection in public clouds which act as PII processor.

For more detail see this link to the ISO’s website.

Claire Walker

With the Heartbleed web vulnerability in the tech headlines, the practical guidance issued recently by EU regulators on when to alert individuals to data breaches (and on preventive steps to reduce the risk of breaches occurring in the first place) is particularly timely. Datonomy highlights some of the key recommendations on when to make the difficult judgement call over notification.

Why the new guidance? Does it apply to your organisation?

The recent Opinion issued by the EU’s Article 29 Working Party (the body made up of national data protection regulators) concerns the ever-topical issue of personal data breach notification. Specifically, it sets out the regulators’ collective view on when data controllers should alert data subjects to a personal data breach which is likely to adversely affect those individuals’ personal data or privacy.

The guidance sets out good practice for “all controllers”. Strictly speaking, the obligation to report data breaches only applies to communications services providers under current rules; in practice, however, handling a data breach is a business-critical issue for all organisations. The illustrations in the guidance are drawn from a wide range of contexts. As well as analysing the triggers for notifying individuals that their data has been compromised, the guidance sets out practical steps to reduce the risk of breaches occurring and/or to mitigate their severity. It is therefore a must-read for all in-house counsel and their colleagues in the IT function – both in devising a data breach response plan, and in designing systems to reduce the risk of vulnerabilities in the first place.

A quick recap on breach notification obligations – current and future

As reported by Carsten on Datonomy last year, communications service providers (CSPs) are already subject to reporting obligations under EU Regulation 611/2013. CSPs’ obligations are twofold:

  • to report all data breaches to the regulator (within 24 hours); and
  • to notify the data subject “without undue delay” when the breach is “likely to adversely affect the personal data or privacy” of that individual.

Notification to the affected individual is not required if the CSP has implemented “appropriate technological protection measures” to render the data unintelligible to any person who is not authorised to access it. The Regulation defines what constitutes “unintelligible” by reference to encryption and hashing. It does not set out specific standards, but it authorises the Commission to publish a separate indicative list of technological protection measures that are sufficient for that purpose.
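
By way of illustration only – the Regulation does not prescribe any particular algorithm or library – the sketch below shows one common way of rendering a stored personal data record unintelligible to anyone who does not hold the key: authenticated encryption with AES-256-GCM via the Python cryptography package. Key management (keeping the key separate from the encrypted data and properly secured) is assumed and out of scope here; whether a given measure actually satisfies the exemption will still depend on the Commission’s indicative list.

    # Minimal sketch (not a compliance benchmark): encrypting a personal data
    # record with AES-256-GCM so that it is unintelligible without the key.
    # Assumes the third-party "cryptography" package is installed; key storage
    # and rotation are deliberately out of scope.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_record(key: bytes, plaintext: bytes, context: bytes) -> bytes:
        """Encrypt one record; 'context' is authenticated but not encrypted."""
        nonce = os.urandom(12)                     # unique nonce per message
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, context)
        return nonce + ciphertext                  # store the nonce alongside the ciphertext

    def decrypt_record(key: bytes, blob: bytes, context: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, context)

    if __name__ == "__main__":
        key = AESGCM.generate_key(bit_length=256)  # in practice, fetched from a key vault/HSM
        blob = encrypt_record(key, b"jane.doe@example.com", b"subscriber-record")
        assert decrypt_record(key, blob, b"subscriber-record") == b"jane.doe@example.com"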

As Datonomy readers will be aware, these notification obligations are likely to be formally extended to all data controllers, regardless of sector, under the draft EU Data Protection Regulation.

However, notification of data breaches, both to the regulator and to affected individuals, is already an important practical consideration for all organisations from a damage limitation point of view. While not risk-free, voluntary notification to the regulator and to individuals may help to mitigate the sanctions imposed by a regulator where a data controller has suffered a data breach as a result of falling short of its data security obligations under the UK Data Protection Act.

Wide-ranging illustrations of when a breach is “likely to adversely affect” a person’s privacy

The guidance sets out seven different and wide-ranging breach scenarios. These include: loss of laptops containing medical data and financial data; web vulnerabilities exposing life insurance and medical details; unauthorised access to an ISP’s customers’ details including payment details; disclosure of hard copy credit card slips; unauthorised access to subscribers’ account data both through unauthorised disclosure and through coding errors on a website. Whilst not exhaustive, these worked examples do provide useful analysis of the different types of harm which could trigger the obligation to notify individuals.

The guidance breaks the analysis down into three different categories of data breach, and gives illustrations of the adverse privacy effects of each type. These are:

Confidentiality breach: unauthorised disclosure of or access to personal data, which can lead to ID theft, phishing attacks, misuse of credit card details, compromise of other accounts or services which use the same login details, and a wide range of other detrimental effects on the individual’s family and private life and work prospects. Most of the examples focus on confidentiality breach.

Availability breach: accidental or unlawful destruction or loss, which can lead to disruption and delay and even financial loss. (The illustrations of financial loss, and the consideration of “secondary effects” in a data availability context, will also be of interest to those negotiating liability provisions in commercial deals which involve the handling of personal data.)

Integrity breach: the alteration of personal data – which can lead to serious consequences for medical and customer data.

The distinction is significant, particularly as the need to notify individuals about confidentiality breach can be mitigated or eliminated by the use of appropriate encryption – see below.

The guidance also stresses the need to consider likely “secondary effects” of a breach which may not appear in itself to adversely affect privacy. The example given here is of the hacking of a music service website. While the direct effect may be limited (leak of names, contact details and users’ musical preferences) it is the secondary effect – the fact that passwords have been compromised, and that users may use the same passwords across other accounts – which creates the need to notify individuals of the breach.

Prevention better than cure: practical steps to avoid the need to report breaches to individuals

In relation to each scenario, the guidance sets out examples of appropriate safeguards to reduce the risk of such breaches occurring in the first place and/or to mitigate the privacy impact. As noted above, notification to individuals is not required if a data controller can satisfy the regulator that the data has been rendered unintelligible. Common themes which run through these practical recommendations include:

  • Encryption: first and foremost, the guidance emphasises the need for appropriate, state-of-the-art encryption with a sufficiently strong and secret key.
  • Secure storage of passwords: salted and using a state-of-the-art cryptographic hash function – simply hashing passwords is unlikely to meet the “unintelligible” data exemption for notification (a minimal illustration follows this list).
  • Password policies: requiring stronger password choices for users, and requiring password resets whenever passwords are compromised.
  • Vulnerability scanning: to reduce the risk of hacks and other breaches.
  • Regular back-ups: to mitigate the effects of an availability breach.
  • Systems and process design: to reduce the risk of breach and/or mitigate its effects – the examples given include dissociating medical information from individuals’ names.
  • Access controls: limiting global access, and restricting access to databases on a “need to know” and “least privilege” basis – including minimising the access given to vendors for system maintenance.
  • Staff training: e.g. on how to delete data securely.
  • Incident management policies: the guidance also highlights the importance of good incident management policies in limiting the duration and effects of data breaches.
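
As a purely illustrative sketch of the salted-hash recommendation above – the Opinion does not mandate any particular algorithm, and PBKDF2 is used here simply because it ships with the Python standard library – password storage along these lines keeps neither plain nor merely hashed passwords:

    # Minimal sketch (illustrative only): salted password storage using PBKDF2
    # from the Python standard library. The work factor is an assumption and
    # should be tuned to current guidance and available hardware.
    import hashlib, hmac, secrets

    ITERATIONS = 200_000  # assumed work factor

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = secrets.token_bytes(16)            # unique random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest                       # store both; never the plain password

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, stored)   # constant-time comparison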

Be proactive and plan!

The new Opinion provides organisations with helpful guidance on making the difficult judgement call over when to notify customers and other individuals about breaches of their personal information. Perhaps even more importantly, it sets out some of the minimum preventive standards that regulators expect data controllers to adopt in order to demonstrate that they have implemented “appropriate” security measures under the current rules. The Opinion urges data controllers to “be proactive and plan appropriately”. The guidance will help organisations decide when they need to alert individuals – but it is having a crisis management team and a (rehearsed) action plan in place that will enable a calm and swift response, should a data breach arise.
