A Lack of Industrial Security


Almost a decade has gone by since I performed my first risk analysis of a nuclear plant and discovered a completely new world. Since then, security professionals have heard a lot more about the current state of OT security (or lack thereof). Operational Technology (OT) designates systems specially designed to monitor or make changes to physical processes; these systems are often called Industrial Control Systems (ICS).

It doesn’t matter whether we’re referring to Supervisory Control and Data Acquisition (SCADA) systems or Programmable Logic Controllers (PLCs): the fact is that security was never considered during the design of OT/ICS systems or the protocols they implement. These systems were not built to be interconnected with traditional IT networks. Their security relies on physical “air gaps” and physical access control to the plants or locations where these systems are deployed.
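
To make this concrete, consider Modbus/TCP, one of the most widely deployed ICS protocols. Below is a minimal sketch of a client write, assuming a hypothetical lab PLC at 192.0.2.50 (the frame layout follows the public Modbus specification). Note what is missing: there are no credentials, no session, and no encryption anywhere in the exchange.

```python
# A minimal sketch: writing a Modbus holding register over raw TCP.
# The target is a hypothetical lab device; never point this at real equipment.
import socket
import struct

PLC_ADDR = ("192.0.2.50", 502)  # 502 is the standard Modbus/TCP port

# MBAP header + PDU for function 0x06 (write single register):
frame = struct.pack(
    ">HHHBBHH",
    0x0001,  # transaction id
    0x0000,  # protocol id (0 = Modbus)
    0x0006,  # length of what follows: unit id + 5-byte PDU
    0x01,    # unit id
    0x06,    # function code: write single register
    0x0000,  # register address
    0x1234,  # value to write
)

with socket.create_connection(PLC_ADDR, timeout=5) as sock:
    sock.sendall(frame)
    reply = sock.recv(1024)  # a compliant device simply echoes the write back
    print(reply.hex())
```

Anything that can reach the device over the network can issue writes like this, which is exactly why physical isolation was the original security model.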

It’s clear that the risks impacting OT systems have grown exponentially during the last 10-15 years, with a corresponding increase in the attack surface and in the potential impact of an outage or catastrophic system failure. Risks in this area continue to grow as businesses interconnect IT and OT networks to provide remote access for engineering, operations, support, and monitoring activities.

OT networks often combine customized ICS/OT hardware with standard commercial off-the-shelf (COTS) technologies such as Microsoft Windows, SQL servers, and TCP/IP-based networks. Using these COTS solutions often exposes critical systems to the same security risks and issues that IT systems face. In fact, the situation is arguably worse, because patching is often not possible due to several operational constraints and availability requirements. These constraints include the potential loss of vendor support if the underlying COTS software or systems are upgraded, and the reality that many of these systems cannot be taken offline or rebooted to apply patches because they must keep running 24x7x365.

Running standard vulnerability scanning products against these systems is likewise often impossible, and even dangerous, due to their inherent fragility and the problems that unexpected traffic can cause them. To complicate matters further, the non-TCP/IP protocols used within these OT networks are often proprietary, with no authentication or encryption at all.

In short, these are technologies built on out-of-date operating systems with dozens (or hundreds) of well-known vulnerabilities, communicating over insecure network protocols. They must now be interconnected with corporate IT systems due to business requirements, yet they cannot be scanned, patched, or secured using traditional security solutions and methodologies. OT/SCADA systems are currently used to monitor and operate everything from factory production chains to the critical infrastructure required to deliver electricity to the masses. What could possibly go wrong here?

The risks highlighted above are not just theoretical. In the past few years we have seen a significant increase in the number of attacks specifically designed to target ICS/SCADA systems, such as:

  • In 2010, Stuxnet was uncovered. Stuxnet is worm-like malware that targets the PLCs controlling uranium-enrichment centrifuges. It looked for specific Siemens PLCs connected to very specific hardware and, if it found them, modified their configuration, causing the centrifuges to spin too fast. Stuxnet was a targeted attack on the Iranian nuclear program and famously became the first nation-state-backed cyberattack designed to cause physical damage to industrial control systems.
  • In December 2015, a Ukrainian regional electricity distribution company reported service outages that affected 225,000 customers and lasted several hours. The outages were discovered to be part of an attack on the company’s industrial control systems: attackers were able to remotely access and control the ICS to cause the outage and to delay restoration efforts.
  • In June 2017, Crashoverride was uncovered. This malware specifically targets ICS electric grid components. When Crashoverride infects Windows machines, it automatically maps out the control systems and records network traffic (to be replayed later by its operators). Crashoverride is an advanced modular malware framework that can adapt to many protocols and is designed to be stealthy, disruptive, and automatic.
  • In December 2017, Triton was discovered. Triton is a new malware strain designed to target ICS, found after it caused a shutdown of critical infrastructure in Saudi Arabia. It targets Schneider Electric Safety Instrumented System (SIS) controllers; by modifying these SIS controllers, attackers can increase the likelihood of system failures that result in physical damage to the ICS.

In addition to all these security challenges, we also need to look towards the future and prepare for the evolution of ICS and now “IoT” systems. I’m confident that, as we have seen in other industries like finance or telecoms in the past, ICS and SCADA vendors will move towards providing cloud-based offerings for some of their systems. I really think that in the near future we will be talking about Historian-, HMI-, PLC- or even Control-as-a-Service approaches.

With this risk landscape and the associated challenges, it’s easy to understand why the CISOs responsible for their organizations’ ICS security programs are having a tough time. They face challenges not only because OT security is an entirely new world for most security professionals, but also because the priorities and concerns of IT and OT teams have historically been quite different. The stringent operational and availability requirements placed on OT systems often create difficulties when traditional security teams need to work closely with OT engineers.

Furthermore, when we talk about risks and incidents in ICS, we need to keep in mind that the potential damage goes beyond financial losses or reputational damage. Attacks in this space could very likely result in physical losses, severe damage to the environment, or even the tragic cost of human lives.

Fortunately, it’s not all bad news: the industry is working diligently to design solutions to help mitigate these risks. New best practices and guidelines have been published, such as ISA/IEC 62443 (formerly ISA-99), a series of standards and guides on how to implement secure ICS.

Additionally, vendors have recently built technologies that passively monitor OT network traffic to identify anomalies and potential intrusions. It’s important to note that while machine learning approaches often struggle to become operational and effective in traditional IT networks, they work remarkably well on OT networks.

Machine learning works well in OT environments because the traffic and communications are very consistent and predictable. These tools not only give security professionals easily understandable alerts on potential threats; they also give OT teams a level of visibility into their operational technology networks and assets that they’ve never had before, which is a clear operational advantage. Organizations can therefore improve their detection capabilities while providing the OT engineering staff with tangible benefits. I believe that working closely with OT teams to show them the operational capabilities of these OT security solutions will lead to better communication and cooperation between OT and IT teams.
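
As a toy illustration of why this works, here is a sketch of the core idea, assuming the third-party scapy package and a read-only tap or SPAN port on the OT segment (the interface name and the simple flow model are my assumptions; commercial products do far more, such as dissecting ICS protocols and baselining commands):

```python
# A toy passive anomaly detector for an OT segment: it learns the set of
# observed flows, then alerts on anything new. It never transmits a packet,
# which is what makes passive monitoring safe for fragile OT equipment.
from scapy.all import IP, TCP, sniff

LEARNING_PACKETS = 10_000  # observe this many packets before alerting
baseline = set()           # (src, dst, dport) flows seen while learning
seen = 0

def inspect(pkt):
    global seen
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        return
    flow = (pkt[IP].src, pkt[IP].dst, pkt[TCP].dport)
    seen += 1
    if seen <= LEARNING_PACKETS:
        baseline.add(flow)          # OT traffic is highly repetitive...
    elif flow not in baseline:      # ...so a new flow is a strong signal
        print(f"ALERT: never-before-seen flow {flow}")

sniff(iface="eth1", prn=inspect, store=False)  # "eth1" is an assumed tap interface
```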

All in all, while protecting and hardening ICS networks is an incredibly difficult challenge for any CISO, there are still paths to success. I think the effort should be put into identifying the potential risks and into strong network segmentation, including limiting the potential paths of connectivity between OT and IT networks with one-way data diodes. Finally, building a smart security monitoring approach that not only enables the identification of security threats but also provides visibility and added value to the operational team will be a key success factor.


2017: The Rise of Ransomware Worms


2017 was a pretty “interesting” year from an information security perspective. We had plenty of big security events, such as Cloudbleed, the CIA Vault7 leaks, the Shadow Brokers’ publication of exploits and post-exploitation tools, the hacking of Macron’s campaign for the French presidency, and the Equifax, Uber, Deloitte, NiceHash, and even DoD AWS breaches.

But in this post I want to focus on the main ransomware cases we saw last year, because they were much more impactful than those of previous years.

Ransomware has evolved a lot since the AIDS Trojan was released in 1989, especially in the last few years, during which we have seen exponential growth in the complexity and worldwide impact of ransomware campaigns.

Legacy ransomware was quite basic and mainly relied on the victim’s lack of backups, fear, and haste to pay. But in the last few years we’ve seen rapidly evolving ransomware variants that continue to grow in complexity. To ensure the highest number of paying victims, ransomware authors have begun to adapt the ransom messages to the victim’s language. We’ve also seen ransomware-as-a-service, which allows criminals without the skills or knowledge to stand up successful ransomware campaigns, and we’ve even seen ransomware that lets victims avoid payment by infecting other victims.

At the same time, society has changed in a way that makes ransomware much more impactful. We rely far more on smartphones and computers, and the data these devices store has become more valuable to users and organizations. Additionally, the Internet of Things (IoT) is here to stay, so we’ll see more and more devices affected by ransomware in the future.

But if we look specifically at 2017, we find a big new trend in ransomware: the capability to spread automatically and laterally within the victim’s network. Ransomware authors have successfully automated lateral movement techniques that were previously used only by advanced adversaries.

On April 14th, 2017, the Shadow Brokers group published an exploitation framework developed by the Equation Group. This framework included the incredibly effective and advanced EternalBlue and EternalRomance exploits, which leveraged vulnerabilities in the Windows SMB protocol to gain administrative access to the targeted system. These exploits were a key reason for the success of the most impactful ransomware campaigns of 2017, as we will explore in this post.

On May 12th, 2017, the “WannaCry” (Wanna Cryptor) ransomware became a worldwide issue. It spread quickly and effectively, affecting more than 300,000 systems in at least 150 countries. This ransomware encrypted the victim’s files and spread laterally through an organization’s network by using the EternalBlue exploit. Even considering the huge economic impact that WannaCry caused, we were lucky, because the ransomware was only capable of propagating laterally on Windows 7 and Server 2008 systems, not on Windows XP or Windows 10.

Notably, WannaCry implemented a “kill switch” mechanism. During the infection phase, it queried DNS for a specific domain and only attempted to move laterally to new systems if the domain did not resolve. When Marcus Hutchins (AKA MalwareTech), a security researcher, registered and sinkholed the domain, the WannaCry ransomware stopped spreading as a worm.
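
The mechanism is trivial to illustrate. The sketch below reproduces only the decision logic, with a placeholder domain (not WannaCry’s real one): a single DNS lookup gated all further spreading, which is why registering the domain stopped the worm worldwide.

```python
# The WannaCry kill-switch logic in miniature: spread only while the probe
# domain does NOT resolve. The domain below is a placeholder, not the real one.
import socket

KILL_SWITCH_DOMAIN = "example-killswitch.invalid"  # placeholder

def kill_switch_tripped():
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True   # domain resolves: someone registered/sinkholed it
    except socket.gaierror:
        return False  # domain unregistered: the worm kept spreading

if kill_switch_tripped():
    print("Domain resolves: no further lateral movement.")
else:
    print("Domain does not resolve: the worm would keep spreading.")
```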

The fact that the WannaCry ransomware was buggy, didn’t use unique bitcoin wallet addresses per infection (a key “security” measure used by most ransomware variants today to make it difficult for researchers to track payments made to the authors), and had this “kill switch” mechanism led some security researchers to speculate that WannaCry was a test that was accidentally released into the wild. In any case, last December the U.S. assistant to the president for homeland security and counterterrorism attributed the ransomware to North Korea, which vehemently denied being responsible for the cyberattack.

A month and a half after WannaCry, we woke up to a new surprise: Petya/NotPetya. Petya was a ransomware variant in use since April 2016. It was unique because, rather than searching for and encrypting specific files (like most ransomware), it replaced the infected machine’s boot loader and encrypted the master file table, locking access to the computer and its data until the ransom was paid. The strain seen on June 26th, named NotPetya, whose original infection vector appears to have been a malicious update from a Ukrainian financial software firm, reused quite a bit of the Petya code with significant improvements and differences.

First of all, NotPetya is not truly functional ransomware, since even if you pay, access to the victim’s system cannot be restored. Because of this, it appears that the purpose of the malware was not to make money but rather to impact the availability of data and services. Second, much like the WannaCry campaign, NotPetya implemented mechanisms to spread itself automatically using the EternalBlue exploit. However, NotPetya was also effective against organizations that had already applied the patches that prevented the use of EternalBlue and other Equation Group exploits: it used common threat actor techniques to retrieve cached passwords from already-infected systems and moved laterally within the network to infect additional systems by abusing PsExec and WMI.

NotPetya appears to have been designed to cause damage rather than to generate revenue, and it was much more effective at this than WannaCry; its masquerading as a standard ransomware campaign points to the likelihood that it was developed by a very skilled and well-resourced group. The potential goal of the campaign becomes clearer when you examine its impact: most of the organizations hit by NotPetya were located in Ukraine, including airports, public transportation, banks, and Ukrainian government systems. The Security Service of Ukraine pointed to the involvement of the Russian Federation’s special services in the attack.

Finally, on October 24th, 2017, BadRabbit made its debut. This ransomware is a variant of NotPetya that leverages hard-coded and stolen credentials to spread across the local network. However, the fact that it didn’t use EternalBlue to spread laterally like WannaCry and NotPetya (it used another Equation Group exploit, EternalRomance, instead), together with the fact that a vaccine to prevent infection was available on the very day of the attack, mitigated much of the impact of this last big ransomware wave of 2017.

Looking at the impact these ransomware incidents have had, we can see how important it is for organizations to implement some basic security controls, such as:

  • An up-to-date inventory of computing assets. You can’t protect what you don’t know you have.
  • An effective Vulnerability Management Program to ensure systems are correctly patched for critical vulnerabilities.
  • Access control and proper network segmentation.
  • Do proper Windows hardening and take advantage of the new security controls Microsoft is including in its OS (a small hardening-check sketch follows this list).
  • Have an effective backup strategy so you can recover important data in case of disaster, but also in case of ransomware infection.
  • Limit user privileges on endpoints whenever possible. NotPetya would not have been as effective if users had not had local administrator privileges on their endpoints.
  • Limit the internet access from production servers whenever possible.
  • Implement and test an Incident Response Plan that includes ransomware scenarios, to avoid improvisation during a crisis.
  • Use effective endpoint security solutions able to identify indicators of attack/compromise rather than relying only on signature-based detection.
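
As promised above, here is a small hardening-check sketch tied to the worms discussed in this post. It reads the registry value that controls the SMBv1 server, the component abused by EternalBlue; this is a Windows-only sketch using Python’s standard winreg module, and the absence of the value is treated conservatively as "not disabled".

```python
# A minimal Windows hardening check: is the SMBv1 server (the component
# abused by EternalBlue) explicitly disabled? winreg is in the standard
# library on Windows builds of Python 3.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

def smb1_server_disabled():
    """Return True only if the SMB1 value is explicitly set to 0."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "SMB1")
            return value == 0
    except FileNotFoundError:
        # No explicit value: older Windows versions enable SMBv1 by default,
        # so treat "not set" as "not disabled".
        return False

if __name__ == "__main__":
    print("SMBv1 server disabled:", smb1_server_disabled())
```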

In conclusion, 2017 was the year of worm-style ransomware such as WannaCry and NotPetya, which affected organizations all over the world and used advanced lateral movement techniques to enable their spread. I think we should expect this trend to continue and evolve in the near future, and I believe it’s important for organizations to be as prepared as possible, both to prevent such threats and to react to them successfully.


If you’re in Switzerland this January, join us at the SIGS Kick Off in Zurich or the ICT Networkingparty 2018 in Bern. Our focus throughout the SIGS series 2018 will be MSS, and both these events promise to bring together the brightest minds in the IT security industry to share thinking on 2018 trends.

GDPR: A Brief Overview


Over a year ago, the GDPR (General Data Protection Regulation of April 27th, 2016) was approved; it becomes mandatory for European Union members on May 25th, 2018.

That leaves a little less than a year to become compliant with the regulation, so I wanted to take the opportunity to write an overview of what this regulation is and what its main objectives are.

Let’s start by having a look at how this regulation defines personal data. “Personal data is any information relating to an individual, whether it relates to his or her private, professional or public life. It can be anything from a name, a home address, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer’s IP address,” according to the European Commission.

Here are the main principles the regulation lays out: personal data must be

  • Processed lawfully, fairly and in a transparent manner in relation to the data subject
  • Collected for specified, explicit and legitimate purposes
  • Adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed
  • Accurate and kept up to date
  • Kept in a form that permits identification of data subjects for no longer than is necessary
  • Processed in a manner that ensures appropriate security including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage

Let’s now look at the scope of the regulation and at which organizations are obliged to adhere to it. The regulation defines two figures around data protection:

  1. The data controller (the organization that is collecting data from EU residents)
  2. The data processor (the organization that processes data on behalf of the data controller)

The regulation applies if either the controller or the processor are based in the EU or if they collect or process personal data of EU residents.

Let’s now review some of the main changes that the GDPR introduces:

  • It expands the notice requirements to include the retention time and the contact information for the data protection officer
  • Valid consent must be explicit for the data collected and the purposes of said data. Data controllers must be able to prove “consent” (opt-in) and consent may be withdrawn
  • People will have the right to question and fight decisions affecting them that have been made automatically by using algorithms
  • Measures that meet the principles of data protection by design and by default must be built into the development of business processes for products and services
  • It will be the responsibility of the data controller to implement and demonstrate compliance, even when the processing is carried out by a third party

The new regulation also makes appointing a Data Protection Officer mandatory for all public authorities, as well as for any data controller or processor whose core activities consist of processing operations that require regular and systematic monitoring of data subjects on a large scale, or of processing personal data on a large scale.

Another significant aspect of the new regulation is the obligation to notify the data subject of a personal data breach when the breach is likely to result in a high risk to their rights and freedoms. The notification will need to describe, in clear and plain language, the nature of the breach and its likely consequences, as well as the measures taken or proposed to address it.

This notification can be avoided if the controller has implemented appropriate technical and organizational protection measures, in particular those that render the personal data unintelligible to any person who is not authorized to access it, such as encryption.
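
As a hedged illustration of that last point, here is a minimal sketch of encrypting a personal data record at rest, using the third-party cryptography package (the record itself is made up). The essential design point is that the key must live separately from the data, so a breach of the data store alone yields only unintelligible ciphertext.

```python
# A minimal sketch of rendering personal data unintelligible at rest.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # must be stored separately from the data store
fernet = Fernet(key)

record = b"Jane Doe, jane.doe@example.com, 192.0.2.10"  # hypothetical personal data
token = fernet.encrypt(record)  # what the database would actually hold

print(token)                  # unintelligible without the key
print(fernet.decrypt(token))  # only the key holder recovers the record
```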

Finally, let’s have a look at administrative fines, since these are also a major change. Infringements of the regulation can be subject to administrative fines of up to 20 million euros or up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher. To determine the amount, the nature, gravity, and duration of the infringement will be taken into account. Regulators will also consider the nature, scope, and purpose of the processing concerned, as well as the number of data subjects affected and the level of damage suffered by them.

Also considered are the intentional or negligent character of the infringement, the technical and organizational measures implemented, any action taken by the controller or processor to mitigate the damage suffered by data subjects, previous infringements, the degree of cooperation in remedying the infringement and mitigating its possible adverse effects, and the manner in which the infringement became known to the supervisory authority.
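
A quick worked example of that ceiling, with hypothetical turnover figures, shows how the “whichever is higher” rule plays out:

```python
# The GDPR fine ceiling: the higher of EUR 20 million or 4% of the preceding
# year's total worldwide annual turnover. Turnover figures are hypothetical.
def max_gdpr_fine(annual_turnover_eur):
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(300_000_000))    # 20,000,000 -> the flat ceiling applies
print(max_gdpr_fine(2_000_000_000))  # 80,000,000.0 -> the 4% rule applies
```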

In any case, 20 million euros, or up to 4% of total turnover, is a really substantial amount that I’m sure will be good motivation for companies that manage sensitive personal data to invest in GDPR compliance and implement the technical and organizational controls needed to decrease the risk of a personal data breach.

What about your company? Is it already working on implementing those controls and moving forward to get compliant with the GDPR?

Link to the law: http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679#d1e6226-1-1

Join the Scrum – Retrospective as a Security Tool for Continuous Improvement


Continuous improvement is a fundamental part of any security standard or security management system, so during my career I have had the opportunity to implement, manage, or audit several different approaches to it.

In recent years I have also been exposed to agile development methodologies, so I have had the opportunity to see up close how continuous improvement can be managed using the retrospective ritual, one of the scrum ceremonies.

A scrum retrospective is basically a meeting where the whole team reflects on how the last sprint went, identifying what went well and what didn’t. The goal is to come away with a proposal for modifying the process that everybody on the team agrees with and commits to for the upcoming sprint.

It sounds simple, and it is. It is this very simplicity that surprised me when I realized how powerful and efficient this continuous improvement tool is: by investing 30 minutes in a bi-weekly meeting, I saw tangible improvements in the morale and efficiency of the team, as well as in the process the team was using.

From my point of view, the main advantages of using retrospectives are:

  • It doesn’t require a big investment; 30-60 minutes at the end of each iteration is enough.
  • Being a bottom-up approach, it has the extra advantage of motivating the team, since it empowers decision-making and self-organization.
  • Changes are tried in short cycles; if a proposed change does not work, it’s easy and quick to revert to the previous situation.

Admittedly, the scrum retrospective was created to improve development efficiency and scrum team motivation, and it’s true that it only works well with small teams of fewer than 8-10 members. However, I’m fully convinced this tool is also applicable to other fields within the security sector, to improve the efficiency of any work team and to improve security management processes.

We can imagine, for example, security consultants, penetration testers, MSSPs, or security integration teams using this retrospective approach to drive improvement in the processes used to manage their services or projects, as well as to improve the communication and motivation of those teams’ members.

Retrospectives can also be a really useful tool for a corporate security team as a way to improve the efficiency of internal security processes, helping to identify aspects that didn’t work well in the last cycle and to find initiatives to improve them in the near future. To do this, security professionals could run retrospectives on top of the post-mortem analysis of security incidents to get a wider perspective on, for example, the security incidents that happened during the last cycle (week, month, year…). The two key questions to ask in those security retrospectives could be:

  • What went well in my security processes during the last iteration? Which security incidents were effectively detected, contained, eradicated, or recovered from? Which security controls helped us succeed there?
  • What went wrong in my security processes during the last iteration? Which security incidents were detected too late or were not properly contained, eradicated, or recovered from?

By analyzing the answers to these two main questions, the security team will be better placed to select which initiatives should be given high priority and implemented during the next cycle, providing the basis for a continuous improvement practice that delivers tangible results.

In conclusion, it’s true that retrospectives can’t completely replace other traditional approaches to continuous improvement. But I’m convinced they are a really effective tool applicable to a very wide range of situations, so they should always be part of the toolbox of any team, service, or project manager, not only in the development sphere but also in the security one.

Deception is the New Black


Concepts acquired from the military field are everywhere in cybersecurity – think defense in depth, situational awareness, intelligence, counter-intelligence… The list is long. In this post, I’m going to talk about one of them – deception – not because it’s new, but because I think it’s going to become really important in the upcoming years.

Deception as a concept applied to cybersecurity has been around for a while. It’s the idea behind honeypots, honeynets, and honey tokens. What’s new is that these products are maturing, allowing simple but customized deployment and scalability, thus making them suitable for corporate environments. The generic name given to these new products is Distributed Deception Systems (DDS).

To better understand the benefits of these solutions, we need to put ourselves in the shoes of an attacker. We’ll choose one who has successfully compromised a computer (by spear phishing, for example) in a network where deception points have been deployed across a wide range of vectors: user accounts, office documents, network services, mobile phones, servers, printers, etc.

For an attacker, invisible, blanket deceptions are a real nightmare. We can’t know which of the services and systems visible from the system we’ve compromised are authentic and which are decoys. The same goes for the local accounts we can see and try to reuse on a different system.

In effect, we’re taking decisions in the dark. We’re blind. Any wrong step we take can trigger an alert that compromises all our attack efforts and puts our mission in jeopardy.

This is a real game changer. Previously, target organizations could never gain the upper hand: we attackers were free to make as many attempts as we wanted and needed only a single success to win the battle. Defenders, however, needed to be successful all the time; one single mistake and they could lose the match.

Deception irons out some of the asymmetry in cyberwarfare. Thanks to deception technology, one single mistake will unmask us and enable the defenders to detect the attack before it’s too late.

Another big benefit of DDS solutions for organizations is that the false positive ratio is usually really low, since most of the time only a real attacker will fall into the deception. On top of this, the quality and amount of information that the alerts provide is much richer than that obtained from traditional security solutions. Full control over what the attacker is allowed to do enables the organization to keep the attacker busy in order to obtain additional valuable information.
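
A toy example makes the signal-to-noise point concrete. The listener below is a bare-bones deception point (the port choice and log format are my assumptions; real DDS products add realistic banners, interaction, and central management): since no legitimate user or process has any reason to touch it, every single connection is worth an alert.

```python
# A bare-bones deception point: a decoy "service" with no legitimate users,
# so any connection at all is, by construction, a high-fidelity alert.
import datetime
import socket

LISTEN_PORT = 3389  # masquerading as RDP, a tempting lateral-movement target

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen()
    while True:
        conn, (addr, port) = srv.accept()
        with conn:
            stamp = datetime.datetime.now().isoformat()
            print(f"{stamp} ALERT: decoy port {LISTEN_PORT} touched by {addr}:{port}")
```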

Because of this, these new deception solutions not only allow security teams to decrease the time to detection (TTD) after an asset has been compromised, but also increase the attacker’s time to compromise (TTC), since attackers will need more time to find their objective and to figure out how to reach it without falling into a deception. Defenders gain more time to react to an attack and to learn from the attacker’s actions in order to be better prepared against future malicious activity, or even to find out who’s behind the attack. And this can definitely make the difference between a headline-hitting mega-breach and just another failed attack.

We need to keep in mind that this breed of deception technologies is not going to replace traditional approaches to detection and prevention; rather, it should be seen as complementing them by providing an extra layer of security that comes into play once the attacker lands on the target network.

So, watch this space. For the reasons I’ve listed, I’m convinced that in the upcoming years we’ll see massive adoption of DDS technologies in corporate environments, as well as a big increase in the presence and maturity of deception solutions on the market.