GDPR: A Brief Overview

Over a year ago, the GDPR (General Data Protection Regulation of April 27th, 2016) was approved, and it becomes mandatory for European Union member states on May 25, 2018.

That leaves a little less than a year to become compliant with the regulation, so I wanted to take the opportunity to write an overview about what this regulation is and what its main objectives are.

Let’s start by having a look at how this regulation defines personal data. “Personal data is any information relating to an individual, whether it relates to his or her private, professional or public life. It can be anything from a name, a home address, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer’s IP address,” according to the European Commission.

Here are the main principles the regulation lays out for collecting data:

  • Processed lawfully, fairly and in a transparent manner in relation to the data subject
  • Collected for specified, explicit and legitimate purposes
  • Adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed
  • Accurate and kept up to date
  • Kept in a form that permits identification of data subjects for no longer than is necessary
  • Processed in a manner that ensures appropriate security including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage

Let’s have a look at the scope of the regulation, that is, which organizations are obliged to adhere to it. The regulation defines two roles around data protection:

  1. The data controller (the organization that collects data from EU residents)
  2. The data processor (the organization that processes data on behalf of the data controller)

The regulation applies if either the controller or the processor is based in the EU, or if they collect or process personal data of EU residents.

Let’s review now some of the main changes that the GDPR will effect:

  • It expands the notice requirements to include the retention time and the contact information for the data protection officer
  • Valid consent must be explicit for the data collected and the purposes of said data. Data controllers must be able to prove “consent” (opt-in) and consent may be withdrawn
  • People will have the right to question and fight decisions affecting them that have been made automatically by using algorithms
  • Measures that meet the principles of data protection by design and data protection by default must be designed into the development of business processes for products and services
  • It will be the responsibility of the data controller to implement and demonstrate compliance, even when the processing is carried out by a third party

The new regulation also obliges organizations to appoint a Data Protection Officer for all public authorities or when the core activities of the data controller or processor consist of operations that require regular and systematic monitoring of data subjects on a large scale, as well as when they need to process personal data on a large scale.

Another significant aspect of the new regulation is the notification of a personal data breach to the data subject when the breach is likely to result in a high risk to their rights and freedoms. The notification will need to describe in clear and plain language the nature of the breach and the likely consequences of the breach as well as the measures taken or proposed to address it.

This notification can be avoided if the controller has implemented appropriate technical and organizational protection measures, in particular those that render the personal data unintelligible to any person who is not authorized to access it, such as encryption.
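
Alongside encryption, pseudonymization is another commonly cited measure for rendering personal data unintelligible to unauthorized parties. As an illustration only (this specific technique and the function name are my own, not prescribed by the regulation), a keyed hash can replace an identifier with a token that cannot be linked back without the secret key:

```python
import hmac
import hashlib

def pseudonymise(value: str, key: bytes) -> str:
    # Keyed hash (HMAC-SHA256): without the secret key, the token
    # cannot be linked back to the original identifier.
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()
```

The same value and key always yield the same token (so records can still be joined), while an attacker who obtains the tokens but not the key learns nothing about the underlying identities.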

Finally, let’s have a look at administrative fines, since they are also a major change. It’s important to know that infringements of the regulation can be subject to administrative fines of up to 20 million euros or up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher. In determining the amount, regulators will take into account the nature, gravity and duration of the infringement, as well as the nature, scope and purpose of the processing concerned, the number of data subjects affected and the level of damage they suffered.
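
The fine ceiling is simple arithmetic: the higher of a fixed 20 million euro floor and 4% of worldwide annual turnover. A minimal sketch (the function name is mine, not from the regulation):

```python
def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    """Upper bound for administrative fines: the higher of EUR 20 million
    or 4% of total worldwide annual turnover of the preceding year."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)
```

Note that the fixed floor dominates until turnover exceeds 500 million euros; beyond that, the 4% term takes over, which is why the cap scales with the size of the organization.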

Also considered are the intentional or negligent character of the infringement, the technical and organizational measures implemented, and any action taken by the controller or processor to mitigate the damage suffered by data subjects, along with previous infringements, the degree of cooperation in remedying the infringement and mitigating its possible adverse effects, and the manner in which the infringement became known to the supervisory authority.

In any case, 20 million euros, or up to 4% of total turnover, is a considerable amount that I’m sure will be good motivation for companies that manage sensitive personal data to invest in becoming compliant with the GDPR and to implement the technical and organizational controls needed to decrease the risk of a personal data breach.

What about your company? Is it already working on implementing those controls and moving forward to get compliant with the GDPR?

Link to the law: http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679#d1e6226-1-1

Join the Scrum – Retrospective as a Security Tool for Continuous Improvement

Continuous improvement is a fundamental part of any security standard or security management system, so during my career I have had the opportunity to implement, manage or audit different approaches to achieving it.

In recent years I have also been exposed to agile development methodologies, so I have equally had the opportunity to see at close quarters how continuous improvement can be managed using the retrospective, one of the scrum ceremonies.

A scrum retrospective is basically a meeting where the whole team reflects on how the last sprint went, identifying what worked well and what didn’t. The goal is to come away with a proposal for modifying the process that everybody on the team agrees with and commits to for the upcoming sprint.

Sounds simple, and it is. It is this simplicity that surprised me when I realized how powerful and efficient this continuous improvement tool is: by investing 30 minutes in a bi-weekly meeting, I saw tangible improvements in the morale and efficiency of the team, as well as in the process the team was using.

From my point of view, the main advantages of using retrospectives are:

  • Doesn’t require a big investment; 30-60 minutes at the end of each iteration is enough.
  • Being a bottom-up approach, it has the added advantage of motivating the team, since it empowers members to make decisions and self-organize.
  • Changes are tried in short cycles; if a proposed change does not work, it’s easy and quick to revert to the previous situation.

Essentially, the scrum retrospective was created to improve development efficiency and scrum team motivation, and admittedly it only works well with small teams of fewer than 8-10 members. However, I’m fully convinced this tool is also applicable to other fields within the security sector, both to improve the efficiency of any work team and to improve security management processes.

We can imagine, for example, security consultants, penetration testers, MSSP or security integration teams using this retrospective approach to drive improvement on the processes used to manage their services or projects as well as being a tool to improve the communication and the motivation of those teams’ members.

It can also be a really useful tool for a corporate security team, using retrospectives as a way to improve the efficiency of internal security teams. It helps identify aspects that didn’t work well in the last cycle and find initiatives to improve them in the near future. To do this, security professionals could use retrospectives on top of post-mortem analyses of security incidents to get a wider perspective on, for example, the security incidents that happened during the last cycle (week, month, year…). The two key questions to ask in these security retrospectives could be:

  • What went well in my security processes during the last iteration? What security incidents have been effectively detected, contained, eradicated or recovered from? Which security controls helped us succeed there?
  • What went wrong in my security processes during the last iteration? What security incidents were detected too late or were not properly contained, eradicated or recovered from?

By analyzing the answers to these two questions, the security team will be better placed to select which initiatives should be given high priority and implemented during the next cycle, improving system efficiency as the basis for a continuous improvement policy that provides tangible results.
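
As a sketch of how a team might capture and triage the answers to those two questions (the `RetroItem` structure and its fields are illustrative, not part of any scrum standard):

```python
from dataclasses import dataclass

@dataclass
class RetroItem:
    topic: str        # e.g. "incident triage", "IDS tuning"
    went_well: bool   # answer to the two retrospective questions
    action: str       # proposed improvement for the next cycle

def next_cycle_actions(items, limit=3):
    # Keep the next iteration focused: pick a handful of actions
    # addressing what went wrong, rather than trying to fix everything.
    problems = [i for i in items if not i.went_well]
    return [i.action for i in problems[:limit]]
```

Limiting the number of actions per cycle mirrors the retrospective principle described above: small changes, tried in short cycles, that are easy to revert if they don’t work.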

In conclusion, retrospectives can’t completely replace other traditional approaches to continuous improvement. But I’m convinced they are a really effective tool applicable to a very wide range of situations, so they should always be part of the toolbox of any team, service or project manager, not only in the development sphere but also in security.

Deception is the New Black

Concepts acquired from the military field are everywhere in cybersecurity – think defense in depth, situational awareness, intelligence, counter-intelligence… The list is long. In this post, I’m going to talk about one of them – deception – not because it’s new, but because I think it’s going to become really important in the upcoming years.

Deception as a concept applied to cybersecurity has been around for a while. It’s the idea behind honeypots, honeynets and honey tokens. What’s new is that these products are maturing, allowing simple but customized deployment and scalability, thus making them suitable for corporate environments. The generic name given to these new products is Distributed Deception Systems (DDS).

To better understand the benefits of these solutions, we need to put ourselves in the shoes of an attacker. We’ll choose one who has successfully compromised a computer, by spear phishing, for example, in a network where deception points have been deployed across a wide range of vectors such as user accounts, office documents, network services, mobile phones, servers and printers.

For an attacker, invisible, blanket deception can be a real nightmare. We can’t know which of the services and systems visible from the system we’ve compromised are authentic and which are deceptive. The same goes for the local accounts we can see and try to reuse on a different system.

In effect, we’re making decisions in the dark. We’re blind. The result is that any wrong step we make can trigger an alert that compromises all our attack efforts and puts our mission in jeopardy.

This is a real game changer.  Previously, target organizations could never gain the upper hand.  We were free to make all the attempts we wanted to and only needed one single success to win the battle. Defenders, however, needed to be successful all the time – one single mistake and they could lose the match.

Deception irons out some of the asymmetry in cyberwarfare. Thanks to deception technology, one single mistake will unmask us and enable the defenders to detect the attack before it’s too late.

Another big benefit of DDS solutions for organizations is that the false positive ratio is usually very low, since most of the time only a real attacker will fall for the deception. On top of this, the quality and amount of information the alerts provide is much richer than that obtained from traditional security solutions. Full control over what the attacker is allowed to do enables the organization to keep the attacker busy in order to obtain additional valuable information.

Because of this, these new deception solutions not only allow security teams to decrease the time to detection (TTD) after an asset has been compromised, but also increase the attacker’s time to compromise (TTC), since attackers will need more time to find their objective and to figure out how to reach it without being deceived. Defenders gain time to react to an attack and to learn from the attacker’s actions in order to be better prepared against future malicious activity, or even to find out who’s behind the attack. And this can definitely make the difference between a headline-hitting mega-breach and just another failed attack.

We need to keep in mind that this breed of deception technologies is not going to replace the traditional approaches of detection and prevention but needs to be seen as a way to complement them by providing an extra layer of security that comes into play when the attacker lands on their target network.

So, watch this space. For the reasons I’ve listed, I’m convinced that in the upcoming years we’ll see massive adoption of these DDS technologies in corporate environments, as well as a big increase in the presence and maturity of the deception solution market.