6 Steps to Effective Data Security

In this blog post, we’ll identify where today’s data security programs often fail and look at six steps to effective data security. These cover everything from product definition, minimally viable discovery, and services to telemetry, metrics, and threat detection and response capabilities. If you’ve ever asked, ‘How can my company reduce insider threats?’ then read on.

You have probably heard something like this before: to implement any kind of meaningful data security, you must first:

  1. Discover your data
  2. Find out where it lives
  3. Catalog who uses it and who owns it
  4. Map its flows and lifecycle
  5. Determine which regulatory / compliance rules apply to it

These platitudes have existed for so long that they are accepted as truth. Be honest – how long would it take your organization to complete each step? Can you plausibly estimate this? Even if you did complete your data discovery effort, why would anyone in your organization care?

In this blog, we explore the shortfalls of discovery-first data security approaches and describe key principles to help organizations shift to value-centric data security.

The Limitations of Discovery-First Data Security

Imagine a manufacturing company that spent its first 6-12 months buying inventory and storing it. No concrete product plans or capital investment in manufacturing – those would simply work themselves out once the inventory had been bought, stored, and meticulously catalogued.

Sound like an appealing business plan?

This is the approach taken by discovery-first data security. Begin with a long (and comprehensive!) data discovery cycle. Once data is discovered and cataloged, then perform a risk analysis, and only then begin to implement controls to address data vulnerabilities.

In theory, discovery enables a targeted control approach that protects the most sensitive data and results in less business disruption. In practice, data discovery is complex, expensive, and slow. Common challenges include:

  1. Inaccurate milestone dates: there is no good way to estimate how much data exists to be discovered or how responsive the business will be. Further, milestones imply a definite “end date” for data discovery; in reality, as the business creates new data, more discovery is needed.
  2. Long duration: many organizations start building an inventory with a top-down interview process. They reach out to senior leaders from across the company, intending to discover what data their organization handles and who “owns” the data. They soon discover that most leaders ignore them. Leaders who engage are irritated by the ambiguity of the interview or unequipped to answer these questions, leading to unending delegation cycles.
  3. High costs: discovery tools can run hundreds of thousands of dollars, with costs increasing for additional scope (structured vs. unstructured, cloud vs. on-premises). Resources must be dedicated to the discovery team and business units. Finally, organizations need to allot resources to maintain their discovered body of knowledge as new data is created and business units change.

What’s the Alternative? Six Principles of Value-Centric Data Security

Prioritizing the discovery element of data security results in the misuse of time and resources. Instead, organizations should focus on the end goal – practical controls addressing data vulnerabilities and threats. Read on to learn the essential principles to start your journey to value-centric data security.

1. To Produce Value, First Define the Product

Agile and its cousins, lean/just-in-time manufacturing, were born out of the inefficiency of long planning processes and excessive inventory gathering. Both begin by identifying a goal or product, identifying how the product is delivered, and then optimizing the value chain to produce the product quickly and well.

In software development, the product is code that fixes a problem or provides a service. In manufacturing, the product is the widget produced on the factory floor. This realization subordinates specific elements of the value chain (planning, inventory gathering, testing) to the end goal of delivering a usable product.

Data security products are not:

  • A list of sensitive data and where it lives
  • A list of data owners
  • Data classification definitions
  • Data flow diagrams

These are all fine things, but by themselves do next to nothing to protect data. They only become valuable when mobilized through data security controls and user training. Therefore, data security controls and user training, which either directly protect data or help users do the same, are the product.

2. Practice Minimally Viable Discovery

Discovery data, while not bad, should not be the focus of a data security program since it does not create direct value.

Instead, start by addressing obvious security risks with broad controls suitable for all data. Examples include:

  • Alerting on or blocking data moving to personal cloud storage or email accounts
  • Removable media control
  • Automatic remediation of folders accessible to everyone in the organization
  • Quarantining or purging severely aged data (e.g., 2+ years since last viewed)
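
The aged-data condition above can be sketched as a short script. The paths, threshold, and quarantine layout here are hypothetical; in practice this control would typically live in a DLP or data access governance tool rather than a standalone script:

```python
import os
import shutil
import time
from pathlib import Path

# Illustrative threshold: roughly two years since last access.
AGE_THRESHOLD_SECONDS = 2 * 365 * 24 * 3600

def quarantine_aged_files(share_root, quarantine_root, now=None):
    """Move files not accessed within the threshold into a quarantine area,
    preserving their relative paths. Returns the quarantined file paths."""
    now = time.time() if now is None else now
    share = Path(share_root)
    quarantine = Path(quarantine_root)
    moved = []
    for path in share.rglob("*"):
        if not path.is_file():
            continue
        last_access = path.stat().st_atime
        if now - last_access > AGE_THRESHOLD_SECONDS:
            dest = quarantine / path.relative_to(share)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(dest))
            moved.append(str(dest))
    return moved
```

Quarantining (moving) rather than deleting keeps the control reversible, which fits the conservative posture described above.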

Organizations should start conservatively with conditions that are unlikely to disrupt legitimate business activity. Even a cautious approach will address glaring vulnerabilities and generate success stories to fuel further growth.

3. Build Services First and the Controls Will Follow

Successful data security controls are supported by layers of governance and infrastructure to ensure they align with business objectives. These layers comprise a service and include:

  • User experience considerations
  • Communications and knowledge articles
  • Exception processes
  • Metrics
  • Telemetry (e.g., ingress or egress APIs)

For example, a control to alert on uploads to personal webmail accounts should:

  • Provide a pop-up educating the user and linking them to secure collaboration guidance
  • Link to exception processes for legitimate use cases
  • Include metrics to signal user behavior improvements to leadership

Each service can create multiple, unique controls and serve as a landing place for data that is discovered.

4. Use Discovery to Enable Telemetry

Well-designed data security services (data access governance, insider risk management, etc.) can consume inputs from data discovery or classification efforts. While discovery on its own is of little value, the service can operationalize discovery-driven insights. These insights could stem from discussions with data owners or from tagging done with labeling technology like Microsoft Information Protection.

For instance, an existing control within a DLP service may alert on uploads to personal webmail. After discovering a trade secret and confirming with a data owner, the existing control could be copied, enhanced with a regex identifying the trade secret, and configured to trigger a complete block instead of a simple alert.
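
The alert-to-block progression might look like the following sketch. The rule structure, the “Project Aurora” codename, and the regex are all invented for illustration; real DLP platforms express rules in their own policy languages:

```python
import re
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DlpRule:
    name: str
    destination: str           # e.g., "personal_webmail"
    action: str                # "alert" or "block"
    content_pattern: str = ""  # optional regex; empty matches any content

    def evaluate(self, destination, content):
        """Return the rule's action if the event matches, else None."""
        if destination != self.destination:
            return None
        if self.content_pattern and not re.search(self.content_pattern, content):
            return None
        return self.action

# Baseline broad control: alert on any upload to personal webmail.
alert_rule = DlpRule(name="webmail-upload-alert",
                     destination="personal_webmail",
                     action="alert")

# After discovery confirms a trade secret, clone and tighten the rule:
# match a (hypothetical) project codename and escalate to a block.
block_rule = replace(alert_rule,
                     name="webmail-tradesecret-block",
                     action="block",
                     content_pattern=r"Project\s+Aurora")
```

The point is that discovery output slots into an already-operating service: the clone inherits the service’s telemetry, exception process, and metrics for free.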

5. Use Metrics Intentionally

Security organizations often struggle to demonstrate value from their controls. Metrics can be used not only to improve controls but also to demonstrate the value the products are creating. This is especially important for cyber board communications.

Each data security service should consider the following metric types:

Improve – internally facing metrics to ensure the service is producing intended results. Examples include:

  • Exception request growth (shows how precisely controls were configured)
  • Time to close (for detective controls)

Impress – upward metrics designed to show the success of your program and obtain more buy-in

  • Volume-based (amount of aged data purged, number of overly permissive ACLs remediated, number of unsanctioned cloud service uploads blocked)
  • Success stories (egregious incidents contained or organizational processes improved due to insights from the service)

Invoke – upward metrics showing service weakness to garner additional funding or support

  • % of environment visible (could be used to support buying additional software)
  • Escalation response time (may highlight unresponsiveness from leadership, requiring re-assignment of responsibilities or additional support from program sponsors)
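
As an illustration, the “Improve” metric of exception request growth reduces to simple period-over-period arithmetic. This is a sketch assuming per-period request counts are already available from the exception process:

```python
def exception_request_growth(counts):
    """Period-over-period growth (%) of exception requests for a control.
    `counts` is an ordered list of request totals, one per reporting period.
    Rapid growth suggests the control was configured too broadly."""
    growth = []
    for prev, curr in zip(counts, counts[1:]):
        if prev == 0:
            growth.append(None)  # growth is undefined from a zero baseline
        else:
            growth.append(round((curr - prev) / prev * 100, 1))
    return growth
```

A flat or declining trend supports an “Impress” narrative; a sharp spike is an “Invoke” signal that the control needs tuning or the service needs support.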


6. Enhance Insider Risk Management capabilities

Data detection and response capabilities (best manifested in Insider Risk Management) are quickly becoming the predominant data security service. There are a few reasons for this phenomenon:

a. Follow the leader: for close to a decade, the security industry has shifted from a prevent-centric to a detect-and-respond paradigm. This is evidenced by the growth of threat hunting and the literal inclusion of “detection and response” in new product and service names (EDR, MDR, etc.). While discovery and prevention have their place, they struggle to keep up with large, complex, and hybrid operating environments.

b. Boundary-spanning improvements: security services that demonstrate the broadest value statements get the most support. More than any other security service, Insider Risk Management (IRM) is holistic and seeks to understand why employees violate policy instead of just addressing incidents. Insights gleaned from asking “why” can improve not only security controls but also user training, employee retention and satisfaction, and the alignment of technology offerings with business needs (shadow IT).

c. Scalability: the core of IRM is people and process, meaning that technology is rarely a barrier to entry. No CASB, DLP, UEBA, or SIEM? No problem. Start by assigning responsibilities and building repeatable investigation and escalation processes. Stretch current technology to provide as much incident visibility as possible. As the IRM service matures and gains political capital, invest in technology to increase visibility and integrate it into existing processes.

Want to learn more about maturing your insider risk management program? Download our latest ModernCISO Guide, A Four-Step Framework for Managing Insider Risk, for a deeper dive into the topic. Or contact a member of Kudelski Security’s team of data security experts today at info@kudelskisecurity.com.

Attack Surface Reduction: Transforming Discovery and Vulnerability Management for a New Era

In this two-minute read, Zach outlines three simple things that CISOs and security leaders can do to reduce the modern enterprise attack surface: discovery, contextualization, response.

You can’t secure what you don’t know exists; you can’t hide what you don’t know is exposed.

John Binns, the self-professed perpetrator of this summer’s T-Mobile breach, reminded us of this when he shared the striking image of his entry point: a publicly exposed router. It was the first domino in a kill chain yielding millions of exfiltrated customer records.

[Image: the exposed router shared by Binns. Source: WSJ]

The Problem: A Story of the Old and the New

The problem is not new, and many organizations believe it is addressed by existing vulnerability management and red teaming efforts. However, our old methods have not kept pace with the growth and transformation of what constitutes an organization’s attack surface. Two drivers propel this new challenge: first, legacy/forgotten assets; and second, novel/unknown assets.

  • On the legacy front, organizations carry heaps of technical debt from decades-old domains and M&A activity. This means that vulnerability management activity may not include all exposed assets. The assets that are included produce overwhelming volumes of results, usually prioritized by CVSS scores and existing organizational knowledge (e.g., that’s our ERP system, we need to fix that vulnerability) rather than granular analysis. This leads to many assets – like overexposed routers – being overlooked.
  • The problem of the new may be even more pressing. SaaS makes shadow IT easy, which expands the perimeter to user identities and data movement across thousands of platforms. If we enumerate only our datacenter and known cloud locations, we miss every “as-a-service” entity our users have made their own.

The Solution: Dedicated Attack Surface Reduction and Data Leak Assessment

More than likely, the router at the root of T-Mobile’s breach was captured by at least one external vulnerability scan and in scope for multiple red team assessments. But in the face of competing priorities and limited scopes, no one made their way down the list to discover it. To address this challenge, organizations must dedicate time and resources to comprehensively discovering, contextualizing, and responding to their attack surface.

  • Discovery can no longer be limited to a set of known IP addresses and domains. This means non-intrusively querying external environments and augmenting vulnerability-centric with data-centric analysis to find your data outside of your known environment. Additionally, organizations must enrich discovery with business knowledge, like past M&A activity, to uncover forgotten assets and repositories.
  • Additionally, current methods of contextualization based on CVSS scores and known understanding of criticality need to become more comprehensive. Automation always helps, but at the end of the day, some manual analysis will be needed to vet newly discovered assets and potential data leaks.
  • Finally, organizations should design boundary-spanning response processes to address problems uncovered outside of their known perimeter. For instance, if security discovers a potential source code leak to a personal GitHub account or accidental data exposure from a partner, privacy or legal needs to be engaged for resolution.
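
The triage step implied above (separating externally discovered assets into known and unknown) can be sketched as follows; the hostnames and inventory format are hypothetical:

```python
def triage_discovered_assets(discovered, known_inventory):
    """Split externally discovered hostnames into known and unknown sets.
    Unknown assets (forgotten domains, shadow IT) are candidates for
    manual vetting before entering the vulnerability-management pipeline."""
    known = {h.lower().strip(".") for h in known_inventory}
    matched = sorted(h for h in discovered if h.lower().strip(".") in known)
    unknown = sorted(h for h in discovered if h.lower().strip(".") not in known)
    return matched, unknown
```

In practice, the discovered list would come from non-intrusive external enumeration and the inventory from the CMDB; the unknowns are exactly the forgotten routers and shadow SaaS this section is concerned with.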

In summary, a transformation of the technology landscape requires an equal transformation to secure it. Vulnerability management of known assets, the security industry’s current approach to attack surface management, is an important starting point, but it is incomplete.

To address decades of technical debt and the SaaS-powered reframing of “perimeter” to identity and data, organizations must augment current practices with non-intrusive, comprehensive, and often data-centric discovery approaches.

To truly understand and protect their digital footprint, organizations must reconsider – and discover – what comprises it.