Given the high pace and complexity of enterprise transformation, and the ever-growing security threats to hybrid environments, IT and security teams want trusted security partners who can help increase visibility, reduce complexity and counter the skills shortage.
Olivier Spielmann, Director of EMEA Managed Security Services, Kudelski Security.
Millions of people are affected by large-scale data theft. Cybersecurity, the defense against these threats, is therefore increasingly moving into the spotlight everywhere. At the same time, the complexity of cyberthreats is rising, taking the risk to data and to the reputation of affected companies to a new level. In 2018 alone, cyberattacks cost businesses around 1.5 trillion US dollars worldwide. Meanwhile, organizations are continuously adapting their IT to consumers' rising expectations.
Network perimeters are being eroded further and further to enable this transformation. To keep critical company data secure, IT security teams need enterprise-wide visibility. For this they need trusted partners who support them in the complex management of cybersecurity programs across multi-technology environments and help maximize the value of their investments.
Always assume a breach!
The question is not whether or when a breach will occur, but how quickly a threat that is already inside the network can be detected. Executive management is getting more involved; it wants assurance that the company is protected against current threats. Yet most threats still go undetected for 101 days on average. A higher level of information is required: better insight into threats and adversaries, greater contextual relevance and a dynamic understanding of a changing threat landscape.
Traditional solutions from managed security services providers (MSSPs) lack the advanced capabilities required to fight advanced adversaries. An effective approach to threat detection must not be linear; it must generate visibility and mirror the ad hoc movements of an attacker through the system. This requires specific expertise and skills that must be continuously refreshed to stay a step ahead.
The threat hunting approach
No matter how good technology and processes are, threats can still go undetected. An advanced security organization requires specialized teams of threat hunters: analysts with the mindset of a hacker who analyze anomalous activities and files to uncover unknown threats. With a global shortfall of almost three million cybersecurity professionals, however, companies will struggle to recruit the talent they need.
With digitalization, the number of attack vectors is growing. There are also ever more platforms and applications that collect, store and analyze data. This makes it harder to reduce risk and maintain high visibility across the entire company. Critical infrastructures increasingly depend on the internet and on IT environments to operate effectively. The combination of these factors means a complex mission for security teams, new targets for attackers and an entirely new dimension for regulators.
Visibility and monitoring of cloud platforms: according to Gartner, 75 percent of companies will have implemented a multi-cloud or hybrid cloud model by 2020. A migration to the cloud may save time and money in the short term, but in terms of long-term visibility and data security, cloud adoption brings major challenges, especially in hybrid environments.
Visibility and security monitoring for operational technology (OT) and industrial control systems (ICS): OT and ICS networks represent a growing risk. Malicious activity is on the rise, as evidenced by the increasing number of threat activities by ICS attackers and the emergence of ICS-specific malware such as Triton (also known as Trisis). Notorious attacks on critical infrastructure, including water and energy utilities, have shown that better security is needed here. Nevertheless, many organizations still struggle to achieve the visibility needed to effectively monitor their industrial environments.
The line blurs in the cloud
The cloud makes many promises: faster, cheaper, simpler. «More secure» is rarely heard in that list. Small and medium-sized businesses in particular can find the cloud unsettling at times. Olivier Spielmann, Director of EMEA Managed Security Services at Kudelski Security, offers advice. Interview: Coen Kaat
Does it matter whether I store my data in Switzerland or in a foreign cloud?
Olivier Spielmann: No, as long as you do not violate the applicable regulations and have a good contract with your cloud provider. If you use cloud services to deliver business services, the responsibility remains with you. What changes when your data is stored in another country are the legal requirements that apply in the event of a data breach or when protecting your data from search requests. When storing data with a cloud provider, you should clarify which laws apply and whether they are sufficient.
The cloud is becoming ever more hybrid and diverse. How do you achieve the visibility required for a secure cloud environment?
In the cloud, the line between data processing and data storage blurs. Cloud services are valued for their flexibility, speed and ease of use, but they can, intentionally or not, become a wide-open portal for data exposure, and huge amounts of confidential information have already been disclosed this way. Risks can be minimized by training cloud user teams, properly architecting and configuring professional cloud environments, and monitoring corporate clouds for configuration errors. Alternatively, companies can draw on the capabilities of managed security services providers such as Kudelski Security. We monitor risks and configurations around the clock and have in many cases reduced the time to detect threats from an average of 78 days to a few hours.
What new challenges does the IIoT pose for IT security providers?
Protecting IIoT environments is not the same as protecting IT environments. Industrial systems are built differently, but through their connection to IT networks they are now exposed to similar threats. They bring new risks that cannot be addressed with conventional IT security measures. For example, scanning a production system with a vulnerability scanner can shut the system down and thus stop the manufacturing process. Moreover, IT security skills and solutions are not designed for IIoT environments. Vendors and service providers must deliver new solutions to cover these newly exposed environments of critical service providers, for example in the energy sector. Companies that want to protect their assets in an IIoT environment can turn to the Kudelski Security Cyber Fusion Center, which offers round-the-clock advisory, threat monitoring, threat hunting and incident response.
Who watches the watchmen? How do cybersecurity partners ensure their own security?
At Kudelski Security, clients regularly ask us to prove that we apply robust security controls and appropriate security governance processes. Cybersecurity partners should practice what they preach by implementing defense-in-depth security controls, efficient threat monitoring, threat hunting and incident response in their own environments as well.
“Military intelligence” is no oxymoron. I’m not a career intelligence professional, but I have worked with some of the best intel organizations and operations in the world, including cyber operations and U.S. military intelligence. So, when I need to assess cyber intelligence, I revert to the framework used in a military environment.
The essential basics of any intelligence operation, whatever the sector, cover requirements definition; collection; processing and exploitation; analysis and production; and dissemination. So, what particular insights do you examine within this framework used by the best cyber intelligence organizations?
A critical part of any intelligence operation is determining the need. Just saying ‘I need cyber intelligence’ or ‘I am going to create cyber intelligence’ will get you nowhere. A consumer or producer of intelligence needs to understand what is required in order not only to build a collection platform that meets those needs but also to execute the required collection. If you're a cyber intelligence organization, the value of your production not only depends on your analysis but is just as dependent, if not more, on your collection.
Another aspect of your needs may be strategic and not just tactical. Strategic intelligence can help when building network or security architectures, detection capabilities and hunting operations. There are knowledge bases of threat techniques, such as MITRE Adversarial Tactics, Techniques & Common Knowledge (ATT&CK™), which can be used to evaluate your defenses or detection capabilities. Some of the best organizations build their security operations and detection frameworks from these threat techniques. These organizations use strategic intelligence to protect against threats specific to their vertical, infrastructure or architecture.
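As a rough sketch of that kind of evaluation, the toy Python below maps a handful of hypothetical detection rules to ATT&CK technique IDs and reports coverage gaps. The rule set and the priority technique list are assumptions for illustration, not a real ruleset:

```python
# Sketch: evaluating detection coverage against a subset of MITRE ATT&CK
# technique IDs. The rules and the priority list below are hypothetical.
priority_techniques = {
    "T1059": "Command and Scripting Interpreter",
    "T1003": "OS Credential Dumping",
    "T1021": "Remote Services",
    "T1566": "Phishing",
}

detection_rules = [
    {"name": "powershell-anomaly", "techniques": ["T1059"]},
    {"name": "lsass-access", "techniques": ["T1003"]},
]

covered = {t for rule in detection_rules for t in rule["techniques"]}
gaps = sorted(set(priority_techniques) - covered)
coverage = len(covered & set(priority_techniques)) / len(priority_techniques)
print(f"coverage: {coverage:.0%}, gaps: {gaps}")
```

Even a simple gap list like this makes the strategic question concrete: which prioritized techniques have no detection behind them at all.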
Another part of strategic intelligence is actor and intent. Although intent may be evident in some situations, APTs have a very different intent from a simple ransomware attack. Intent and attribution can be a specific requirement for government and law enforcement to meet their needs, but intent also can be useful in other sectors like critical infrastructure. Understanding the long-term goal or intent of intellectual property theft, denial of service or physical destruction within your sector can go a long way toward understanding your risks, your specific strategic intelligence requirements and the real-time tactical intelligence you require to mitigate those risks.
The size and/or scope of your collection platform capability will determine the size of your output. Single intelligence sources or implementing single-function processes like scraping the web for malicious content or links are valuable but deliver limited intelligence with specific applications. If you only collect, process and analyze malware, it stands to reason that you will only produce malware intelligence. Collection capabilities really come from the ability to acquire unique data. Companies execute collection with various techniques, media and locations. Incident response collects data. Security products collect data. Web and darknet scraping collect data. Intrusion and Network analysis collects data. Hunting collects data. The best intelligence organizations are multi-faceted, so they can fuse together all the intelligence collected from different platforms.
Size and scope of collection are analogous to your own internal network collection and processing. Think about your network Security Information and Event Management (SIEM) system. Your SIEM scales in value with more data sources (collection platform) and better correlation (processing) within the platform. If you have one data source, firewalls for instance, you get collection and correlation from only firewalls. But if you have servers, endpoint detection capabilities and email gateway logs as well as firewalls providing data, you can correlate the information you receive from these multiple sources. When it comes to intelligence collection, companies with a large platform or multiple platforms provide different intelligence than a provider who scrapes the dark web for specific attributes. Both can be valuable, but again this goes back to your needs and requirements. The main point to remember: not all intelligence providers are created equal, and one big differentiator is the quality of their collection platforms.
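The scaling argument above can be pictured with a toy correlation step that pairs events from two sources. The event schema and the five-minute window are invented for illustration, not a real SIEM's data model:

```python
from datetime import datetime, timedelta

# Toy event streams from two sources; field names are illustrative only.
firewall_events = [
    {"host": "10.0.0.5", "time": datetime(2019, 6, 1, 10, 0), "action": "outbound-deny"},
]
endpoint_events = [
    {"host": "10.0.0.5", "time": datetime(2019, 6, 1, 10, 2), "process": "powershell.exe"},
]

def correlate(fw, ep, window=timedelta(minutes=5)):
    """Pair firewall and endpoint events on the same host within a time window."""
    hits = []
    for f in fw:
        for e in ep:
            if f["host"] == e["host"] and abs(f["time"] - e["time"]) <= window:
                hits.append((f, e))
    return hits

for f, e in correlate(firewall_events, endpoint_events):
    print(f"{f['host']}: {f['action']} near {e['process']}")
```

With only one source, neither event means much; joined, a denied outbound connection next to a PowerShell launch on the same host is a lead worth looking at.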
The ability to process raw data plays a significant role in an intelligence provider’s ability to produce real-time intelligence. The best intelligence organizations have developed two important capabilities: vast collection and big data analytics. Using, storing and executing complex analytics on large amounts of data is challenging. The future is now when it comes to using artificial intelligence such as machine learning to support operations. The key to success is figuring out which providers are just using “AI” as a buzzword. Data, without good analytics, only yields piles of data with no actionable outcome. The larger and more diverse the data types and structures, the better your data storage and your ability to perform analytics must be. If you understand your provider’s ability to conduct analytics on their collection, you are another step closer to ROI on intelligence.
The goal of intelligence analysis is to figure out what will happen next. Great providers understand they must assess what is happening now and why. Intelligence activities include trying to determine attacker tactics, techniques and procedures. Some attackers use botnets, malware or ransomware. Others use phishing, Metasploit or fileless attacks. All these techniques, and the tactics of code writing, timing, sequence, targeting and infrastructure used, need to be collected to find and attribute the most sophisticated threats.
The best nation-state actors develop techniques to look like other nation-states. Finding advanced persistent threats (APTs) takes an enormous amount of data, combed through by the best analytics fast enough to find the needle in a field of haystacks. Understanding your provider's analysis capabilities is very different from knowing their collection methods, analytics and production capabilities. Good analysis comes from years of experience working to get inside the minds of threat actors, to understand their motivation and the goals of those threats. When assessing analysis, look for experience and historic achievements as well as a good methodology for using what they collect to reach conclusions on your requirements.
In some ways, understanding how you will consume threat intelligence, or how it will be provided, determines your requirements. Understanding how intel is disseminated is key: Are there automated feeds? Do I get an email? Do I read it on a portal? Are indicators of compromise provided? Is it a list of exploits being used against the newest vulnerabilities? Is it structured for use by my security tools, for example via direct SIEM ingestion?
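As a minimal sketch of what machine-consumable dissemination can look like, the snippet below parses a made-up CSV indicator feed and matches it against log events. The feed format is an assumption for illustration, not a real standard such as STIX/TAXII, and the indicators are invented:

```python
# Sketch: turning a disseminated indicator feed into something tools can act
# on. The CSV format and the indicator values are invented for illustration.
feed = """\
ip,203.0.113.9
domain,bad-updates.example
hash,44d88612fea8a8f36de82e1278abb02f
"""

indicators = {}
for line in feed.splitlines():
    kind, value = line.split(",", 1)
    indicators.setdefault(kind, set()).add(value)

log_events = [
    {"src_ip": "10.1.1.4", "dst_ip": "203.0.113.9"},
    {"src_ip": "10.1.1.7", "dst_ip": "198.51.100.2"},
]

# Match destination IPs against the "ip" indicators from the feed.
matches = [e for e in log_events if e["dst_ip"] in indicators.get("ip", set())]
print(f"{len(matches)} event(s) matched the feed")
```

The point is not the parsing itself but the question it forces: if a feed cannot be reduced to checks your tools actually run, it is situational awareness, not actionable intelligence.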
In its simplest form, the intelligence needs to be actionable by security staff or security tools. In other words, have an actual effect on your defenses. Knowing the Chinese hacked the Office of Personnel Management (OPM), the Russians hacked the DNC, or the latest botnet is spreading across America may be good to know, but how does that help your security staff change your security posture?
What of that is actionable? Does your security team or provider get actionable intelligence, and how do they make it useful? Do they have a way to translate data, information and intelligence into a useful defense scheme, or to execute real-time targeted hunting in your unique environment based on your atmospherics, architectures, vulnerabilities and priorities? How many times have you seen the intel provider send you an email with links to other web articles? Having an intelligence feed because it's required by regulation may check the box, but you must figure out how to use that feed to the maximum extent possible. How does crawling the web help my situation? Situational awareness about threats is one thing, but actionable intelligence is what reduces risk, finds threats and stops breaches.
Even the best intelligence-producing organizations are producing for a specific need. Know what your needs are, so you can make sure you choose one that gives you actionable intelligence for your particular needs – tactical or strategic. The current landscape for cyber intelligence is vast and confusing. Providers will give you the intelligence they gain based on their own collection, processing, analysis and production capabilities.
Article originally appeared in SC Magazine. Read it here.
The newest buzzword around cybersecurity and managed services is managed hunt operations; the main nuance, which might be lost, is simple enough: hunting is not new! From platforms to people, everyone is touting the need to find the threats in your network, but security professionals have been looking for and finding threats in networks for 20 years. This "new" concept or theory of hunting has been executed by the best network defenders with the help of sensors, logs, AV, tools, and various scanners for a very long time.
The real trick is going from hunting to search and destroy. Finding historical evidence that attackers have been stealing your intellectual property for the last four months, and then remediating, may seem like a success for most threat hunting capabilities. The truth is, discovering threat actors executing commands and watching their techniques is the goal for any modern hunt team. Crushing your adversary in real time as they move laterally, looking to steal intellectual property (IP), personally identifiable information (PII) or payment card industry (PCI) data, is the dream scenario for any member of your enterprise hunt team.
How many times has your security analyst said, "I can see at this time, this process ran, which is an indication of possible blah, blah, blah." The goal needs to be, "I see the attacker dumping hashes from memory using Mimikatz… I see the active RDP session and the attacker's attempt to move laterally from host 10.X.X.X. I see PowerShell activity on X host not associated with our internal SCCM."
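The SCCM example above can be sketched as a simple hunt query: flag PowerShell launches whose parent process is not the expected management agent. The event fields are assumptions for illustration; `CcmExec.exe` is the SCCM client host process:

```python
# Sketch of the hunt described above: PowerShell not launched by SCCM.
# The event data and field names are illustrative, not a real EDR schema.
process_events = [
    {"host": "ws-14", "process": "powershell.exe", "parent": "CcmExec.exe"},
    {"host": "ws-07", "process": "powershell.exe", "parent": "winword.exe"},
    {"host": "ws-07", "process": "chrome.exe", "parent": "explorer.exe"},
]

EXPECTED_PARENTS = {"CcmExec.exe"}  # management tools allowed to spawn PowerShell

suspicious = [
    e for e in process_events
    if e["process"].lower() == "powershell.exe"
    and e["parent"] not in EXPECTED_PARENTS
]

for e in suspicious:
    print(f"hunt lead: {e['host']} ran PowerShell under {e['parent']}")
```

PowerShell spawned by an Office application, as in the second event, is exactly the kind of specific, explainable lead the quote above demands.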
Active real-time hunting reduces the "find" time from the most recent estimate of about 99 days down to near real time. This real-time hunting takes talent, training, and humans actively executing structured activities to find threat activity. In military terms, some would say it's a movement to contact. Movement to contact, as defined in FM 3-0, Operations, is a type of offensive operation designed to develop the situation and establish or regain contact. A cyber movement to contact requires not only some of the best behavior-based detection capabilities and the best internal collection capabilities but also real-time interactive operations within the networks, systems, and hosts.
Other types of hunts we can take from military tactics, techniques and procedures are:
Area Defense: A defensive task that concentrates on denying enemy forces access to designated terrain for a specific time rather than destroying the enemy outright. This type of hunting operation allows us to conserve resources or focus them on the "crown jewels." These tactics may include blocking and canalizing the attacker into an engagement area of the defender's choosing. Some newer deception technologies allow for a more advanced defense than the classic honeypot scenario.
Attack: An offensive task that destroys or defeats enemy forces, seizes and secures terrain, or both. Hunting operations within one's own network that can be categorized as an attack must focus on the threat's tools and capabilities, ensuring the threat does not own, hold or control infrastructure that is too valuable to be simply wiped and re-baselined.
Pursuit: An offensive task designed to catch or cut off a hostile force attempting to escape, with the aim of destroying it. In other words, making sure the threat knows it was caught and has no way back into the network. Shut the proverbial "backdoor."
All that being said, hunting needs planning and humans executing operations in real time. Using a military framework may help organize the plan, but either way, get eyes on the threat's actions in real time.
As opposed to attacking someone in their network, hunters can find and render any threat attempt useless by understanding the tactics and techniques an attacker would use. Once in contact, the hunters must clearly understand what actions to take. If your analysts see real-time activity, have you developed a real-time response to each of the interactive scenarios? It is critical to understand the requirements of not just finding and blocking bad stuff, but knowing what tools and actions to use if your hunter sees the active RDP session, finds PowerShell running, sees certain processes running or sees recon scanning activity.
Thoroughly thought-out plans, hunts, hunter actions, responses and activities upon finding the threat are sometimes referred to as a hunting maturity level. What level is your organization? Start by developing a plan for real interactive hunting, build hunting goals, train hunters, and understand the needed tools so you can create a contested environment.
A cursory glance at any MSSP listing shows that the focus of most mainstream network and security operations centers (SOCs) is generally health monitoring (fault, configuration, accounting, performance and security, or FCAPS), mean time to repair (MTTR), and security events as they arise.
It’s not a focus that is enjoying enormous success. According to Gartner, breach activity in 2017 was up by 43.8% year-over-year and the scale and severity of attacks as well as reporting requirements are increasing.
Speed of response is at the heart of the issue. Some of the largest recent breaches, such as OPM, Equifax and Target, may have had slow decision cycles. And this is where the idea of 'fusion' provides an interesting answer. Fusion seeks to make better decisions based on the best available information and to gain the advantage of a faster decision cycle than your enemy or threat.
Clearly, the decision maker who has the fastest process to gather the best, most up-to-date information possible is going to have the advantage. This is not a new concept. As retired general Stan McChrystal said “The answer is for leaders to have a process in place that helps them gather relevant information, adequately consider dissenting views from a mix of trusted sources, make a decision, communicate the decision, and act on it. Such a system does not eliminate risk entirely, as real decisions always involve uncertainty and risks, but it does help to ensure that the decision made is well-informed, timely, and the best course of action in an evolving and complex environment.”
The military has evolved in some part due to Gen. McChrystal’s vision for fusion. Put simply, fusing who has the information with who needs the information is critical for timely decision making and action.
In cyber, this is even faster and more important than in any other domain. Before the internet, the telephone, the telegraph, radio and the carrier pigeon, information traveled at the speed of humans. Think Paul Revere or Pheidippides. Now information travels at the speed of light, so decision cycles are faster. The need for fusion is more important because of technology, not less important because we have technology. Traditional fusion is intelligence with operations. The critical piece to figure out in any "fusioning" is what needs to be fused. In some organizations, fusing cyber intelligence and threat activity has led to an evolution in cyber defense, but this still falls short for two reasons.
First, using contextual information not only from IT operations but from business operations adds huge value to the speed of understanding cyber events. The old false-positive problem is significantly reduced by knowing, up front or in real time, the cause of an event in the context of operations. Think PowerShell: it may be legitimate if run by an admin, yet malicious if run over an external RDP connection.
Knowing whether SCCM is in use at the same time PowerShell launches is a huge win for fusing IT operations information with security event information. With IT and business context understood, event fatigue becomes minimal, and the one event that looks almost identical but lacks that explaining context does not get missed because your only analyst is drowning in useless alerts.
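One way to picture this fusion is a triage rule that consults operations context before deciding severity. The context sources and field names below are invented for illustration; a real deployment would pull them from CMDB, session, or deployment data:

```python
# Sketch of context fusion: downgrade alerts that IT-operations context
# explains, escalate those it does not. All context data is invented.
admin_sessions = {"srv-02"}            # hosts with an active admin session
sccm_push_hosts = {"srv-02", "ws-11"}  # hosts in the current SCCM deployment

def triage(event):
    """Assign a severity to a process event using operations context."""
    if event["process"] != "powershell.exe":
        return "ignore"
    if event["host"] in sccm_push_hosts or event["host"] in admin_sessions:
        return "low"       # explained by operations context
    if event.get("session") == "rdp-external":
        return "critical"  # PowerShell over an external RDP session
    return "investigate"

print(triage({"host": "ws-11", "process": "powershell.exe"}))
print(triage({"host": "ws-30", "process": "powershell.exe",
              "session": "rdp-external"}))
```

The same raw event lands in three different buckets depending on context, which is exactly how the false-positive load drops without losing the one event that matters.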
Second, get rid of the notion that intelligence feeds will solve all problems in real time. "If I could only automate those feeds I'd catch the crook in the act!" If you don't know and understand your threat through intelligence well before they break the window, you won't see them or catch them until it's too late. CrowdStrike estimates the average attacker takes 1 hour and 58 minutes to move laterally in your network. This means you need a decision cycle faster than two hours to stop that initial compromise from becoming much worse. Cyber intelligence means knowing the threat, building detection for those threats, and then spending your time hunting for them rather than relying solely on automated detection fed by real-time intelligence.
For cyber decision making, attackers fuse the latest vulnerabilities with techniques and capabilities to exploit those vulnerabilities. For the defender, the fusion comes from having the intelligence information, the network contextual information and the activities that are occurring in real time on the infrastructure. Only then can the defender reduce the decision cycle to an actionable timeframe, block the attacker decisively, contain the damage to critical assets – and hopefully – avoid becoming the next big cyber attack headline.
Automated detection will fail. This is not a FUD (fear, uncertainty, doubt) statement designed to strike fear into the hearts of CISOs; it's a fundamental problem that's unlikely to be solved in my lifetime. The problem is not limited to technology alone; sometimes it's a failure related to process or people, and sometimes it's a murky mixture. Add any sort of complexity to the mix and the odds become heavily stacked against us.
Regardless of the reason, these factors can result in a failure to notice something bad happening in our environment and put us in an awkward position. The investment we made to protect ourselves works as intended, but only most of the time.
As security professionals, is it time to admit that we can't spend our way out of being vulnerable to a breach? As security vendors and service providers, is it time to admit that we can't actually stop every breach?
IFTTT (If This Then That) or what?
This doesn't mean we shouldn't have great technology, people, and processes helping us make decisions about the activity going on around us. Air disasters have declined dramatically and steadily over the past couple of decades, mostly due to advances in pilot training, the design of the planes themselves, and the fly-by-wire automation technology most planes come equipped with today. However, accidents still happen: airspeed indicators freeze over, sending instruments into chaos and prompting pilots to chase down problems and react in ways that aren't necessary to resolve the actual problem, making the overall situation worse.
We are in a similar situation, great technology that keeps us safe, well-trained operators following a solid process, and automatic detection of most threats.
At this point our conversation could go in many directions; perhaps we'd talk about Risk Mitigation, Security Control Frameworks, the future of AI and Machine Learning, blockchain, next-gen, virtual reality, and so on, but you already hear enough about those. I want to talk about this problem from a Managed Security Services Provider perspective.
Does MSS drive value to its clients and are consumers of Managed Security Services expecting enough of their MSSP?
MSSPs, in general, are not delivering on their promises. "We are an extension of your team": hardly, as nearly every conversation with your MSSP involves explaining something you've already explained many times before. "You can take advantage of our wide visibility into a large client base to realize improvements in our detection capabilities for you": doubtful, as most MSSPs don't have the infrastructure or processes in place to ensure this actually happens. "We don't just throw alerts over the fence to our clients": no comment necessary here, I imagine.
Truth is that MSSPs struggle to provide value. The majority of MSSPs were created when a client opportunity came up to manage and monitor a technology, and due to this, most are only built to monitor security technology and the alerts it generates. This continues throughout the life of the provider. Got a new technology you need managed? MSS will take it on!
On the other hand, consumers of MSSP services have been conditioned to expect that the value of these services lies in extending their security device management and monitoring to 24×7 with a larger set of eyes. This is a great expectation, but what some may not realize is that an MSSP has the same struggle to contain technology sprawl as any enterprise. The more technology an MSS manages and monitors, the harder it is to be effective and efficient at doing so. The complexity becomes overwhelming and service delivery suffers as economies of scale disappear. MSSPs compete in the same job market as everyone else, so this complexity leads to stress and job dissatisfaction, which inevitably leads to analyst turnover, only exacerbating the problem. It might be interesting to note that clients tend to overlook blips in service over the duration of the contract because the value is in the coverage, not the actual outcome of the service. At renewal time, however, the realization that little value was delivered is exposed, and many organizations look elsewhere (or internally) for a SOC.
These are just some of the problems with legacy MSSPs (yes, there are more), and with over a decade of experience working for some of the biggest and best, I consider them lessons learned. When we came to Kudelski Security in 2016, we asked for and were granted the opportunity to stop selling our MSS and take a hard look at our service model and at the MSSP vertical in general. With the lessons learned in mind, we went about the process of rebuilding everything on top of our Cyber Fusion strategy. Sitting together in many (many!) meetings, a fundamental and critical objective bubbled up. We need to deliver value to our clients: not just the perceived value of extending the coverage of internal teams, but real value based on business outcomes that reduce overall risk. To do this we needed to understand how to contextualize the modern threat, detect a breach quickly, and limit the impact.
Automated detection will fail and we should assume breach; this is the genesis of our strategy for delivering those business outcomes. When we started to work on our infrastructure, our goal was to have the top Threat Monitoring Service in the world. We built in the capability to ingest business context just as easily as we could ingest curated threat intelligence. Luckily, Kudelski Security provided us with a team of 30 DevOps engineers dedicated to MSS.
If an organization is monitoring junk, sending that junk to an MSSP doesn't make it better, so we created a set of standard Use Cases that we could deploy regardless of technology, as well as the capability to customize Use Cases as needed, so our clients could consume alerting with consistency across their environment. We see the network perimeter as deteriorated, so we placed extra focus on the endpoint by developing Managed EDR and Attacker Deception Services, which landed us in the 2017 Gartner MDR Market Guide. By the way, we do have a select set of great technologies we manage as well. This list is kept intentionally small for the reasons covered above.
If we had stopped there, Kudelski Security would be a great MSSP; we wanted to be greater.
Challenge the MSSP vertical to change.
Fundamentally, I want to see all MSSPs better protect their clients. To induce this market change, we provide Threat Hunting as part of our Threat Monitoring Service at no extra cost.
We believe this is what every MSS, every SOC, and every security team should do regularly because automated detection will fail and we must assume breach.
Threat Hunting is an integral part of Threat Monitoring and as such should not be separated on a pricing sheet.
Our hunting is not just marketing lip service, either; it comes in three flavors, all included with our Threat Monitoring:
- We have a set of Threat Hunting use cases which we monitor for anomalies 24/7/365
- We meet Monday through Friday every week to identify noteworthy threats to hunt, based on input from our clients, on what we've seen in the intel community, or on fast-breaking threat events such as NotPetya and WannaCry
- We enable every analyst, regardless of level, to hunt at any time based on their hunches and intuition. If you see something interesting, hunt for it.
Our threat hunting is performed by our own MSS analysts, not by a separate professional services team that mostly does point-in-time projects. We are always hunting, searching for that clue, that breadcrumb, that sign something is amiss. We have found hidden threats that monitoring would otherwise have missed. Hunting also lets us continually improve: many of our hunts have resulted in new monitoring techniques. Allowing everyone to hunt has also increased the job satisfaction of our analysts, virtually eliminating turnover.
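To make the first flavor concrete, here is a minimal, hypothetical sketch of one classic hunting heuristic (not our actual use-case logic): least-frequency-of-occurrence analysis, where a process that appears on only one or two hosts across a fleet is worth a manual look.

```python
def rare_process_hunt(events, threshold=2):
    """Flag process names seen on fewer than `threshold` hosts.

    `events` is a list of (hostname, process_name) tuples, e.g. drawn
    from EDR telemetry. Rarity across the fleet is a common hunting
    lead: a binary running on a single machine deserves scrutiny.
    """
    hosts_per_process = {}
    for host, proc in events:
        hosts_per_process.setdefault(proc, set()).add(host)
    return sorted(
        proc for proc, hosts in hosts_per_process.items()
        if len(hosts) < threshold
    )

events = [
    ("ws01", "chrome.exe"), ("ws02", "chrome.exe"), ("ws03", "chrome.exe"),
    ("ws01", "svch0st.exe"),  # typosquatted name on a single host
]
print(rare_process_hunt(events))  # → ['svch0st.exe']
```

A result from a rule like this is only a lead, not a verdict; the point of hunting is that an analyst then pulls the thread.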
If it works for us, it can work for everyone and it should be a normal part of your threat monitoring program.
Francisco Donoso, our lead MSS Architect, is writing a follow-up to this post titled "SIEM is dead, long live SIEM". He has some great content that digs into the technical ideas behind what we are all about as an MSSP.
Automated detection will still fail, and breaches will still occur, but with our approach, we can contextualize the threat, reduce the time it takes to detect a breach and limit its impact.
MSSPs out in the marketplace, consider this a challenge. We hope you will accept.
Over a year ago, the GDPR (General Data Protection Regulation, adopted April 27, 2016) was approved, and it becomes mandatory for European Union members on May 25, 2018.
That leaves a little less than a year to become compliant with the regulation, so I want to take the opportunity to give an overview of what this regulation is and what its main objectives are.
Let’s start by having a look at how this regulation defines personal data. “Personal data is any information relating to an individual, whether it relates to his or her private, professional or public life. It can be anything from a name, a home address, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer’s IP address,” according to the European Commission.
Here are the main principles the regulation lays out: collected personal data must be
- Processed lawfully, fairly and in a transparent manner in relation to the data subject
- Collected for specified, explicit and legitimate purposes
- Adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed
- Accurate and kept up to date
- Kept in a form that permits identification of data subjects for no longer than is necessary
- Processed in a manner that ensures appropriate security including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage
Let's have a look at the scope of the regulation and which organizations are obliged to adhere to it. The regulation defines two roles around data protection:
- The data controller (the organization that is collecting data from EU residents)
- The data processor (the organization that processes data on behalf of the data controller)
The regulation applies if either the controller or the processor is based in the EU, or if they collect or process personal data of EU residents.
Let's now review some of the main changes the GDPR will bring:
- It expands the notice requirements to include the retention time and the contact information for the data protection officer
- Valid consent must be explicit for the data collected and the purposes of said data. Data controllers must be able to prove “consent” (opt-in) and consent may be withdrawn
- People will have the right to question and fight decisions affecting them that have been made automatically by using algorithms
- Implementing measures must be designed into the development of business processes for product and services which meet the principles of data protection by design and data protection by default
- It will be the responsibility of the data controller to implement and demonstrate compliance, even when the processing is carried out by a third party
The new regulation also obliges organizations to appoint a Data Protection Officer: all public authorities must do so, as must any data controller or processor whose core activities consist of operations that require regular and systematic monitoring of data subjects on a large scale, or that involve processing personal data on a large scale.
Another significant aspect of the new regulation is the notification of a personal data breach to the data subject when the breach is likely to result in a high risk to their rights and freedoms. The notification will need to describe in clear and plain language the nature of the breach and the likely consequences of the breach as well as the measures taken or proposed to address it.
This notification can be avoided if the controller has implemented appropriate technical and organizational protection measures, in particular those that render the personal data unintelligible to any person who is not authorized to access it, such as encryption.
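The regulation names encryption as one such measure. As a purely illustrative, stdlib-only sketch of the same idea of rendering data unintelligible to unauthorized parties, here is keyed pseudonymization with HMAC-SHA256: the stored token is useless without the secret key (key handling below is simplified for the example; in practice the key would live in a key management system).

```python
import hashlib
import hmac

def pseudonymize(value: str, key: bytes) -> str:
    """Replace a personal identifier with a keyed HMAC-SHA256 digest.

    The resulting token is deterministic (so records can still be
    joined on it) but cannot be linked back to the person by anyone
    who does not hold the key.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"example-key-do-not-hardcode-in-production"  # illustrative only
token = pseudonymize("jane.doe@example.com", key)
print(token)  # 64 hex characters, unreadable without the key
```

Whether such a measure is "appropriate" in a given case is a legal and risk assessment, not just a technical one.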
Finally, let's have a look at administrative fines, since these are also a major change. Infringements of the regulation can be subject to administrative fines of up to 20 million euros or up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher. To determine the amount, the nature, gravity and duration of the infringement will be taken into account. Regulators will also consider the nature, scope and purpose of the processing concerned, as well as the number of data subjects affected and the level of damage they suffered.
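The "whichever is higher" rule means the 20 million euro figure acts as a floor for large companies. A tiny worked example:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the top-tier GDPR administrative fine:
    EUR 20 million or 4% of worldwide annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# EUR 300 million turnover: 4% is EUR 12 million, so the floor applies.
print(max_gdpr_fine(300_000_000))    # 20000000
# EUR 2 billion turnover: 4% is EUR 80 million, which exceeds the floor.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```

Note these are maximums; the actual fine depends on the criteria above.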
Also considered are the intentional or negligent character of the infringement, the technical and organizational measures implemented, any action taken by the controller or processor to mitigate the damage suffered by data subjects, previous infringements, the degree of cooperation in remedying the infringement and mitigating its possible adverse effects, and the manner in which the infringement became known to the supervisory authority.
In any case, 20 million euros, or up to 4% of total turnover, is a substantial amount that I'm sure will be good motivation for companies that manage sensitive personal data to invest in becoming compliant with the GDPR and to implement the technical and organizational controls needed to decrease the risk of a personal data breach.
What about your company? Is it already working on implementing those controls and moving forward to get compliant with the GDPR?
Link to the law: http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679#d1e6226-1-1