Cybersecurity organizations should partner with business units to create a shared and flexible cloud governance model that better enables responsible cloud adoption.
Businesses cannot (and often will not) wait for security organizations to create inflexible governance frameworks for cloud adoption. After all, the cloud is supposed to be flexible and business-enabling. The high-speed transformation to remote work and the rapid increase in cloud workloads due to COVID-19 highlight that technical agility is key to both enabling business outcomes and surviving times of crisis. In recent months, software-as-a-service (SaaS) collaboration platforms, such as Zoom, have seen a spike in usage, and 59 percent of enterprise respondents to a recent survey by Flexera plan to accelerate cloud adoption because of the pandemic.
However, organizations must still protect their data, preserve user privacy, manage technology costs, and ensure business continuity. In some respects, these demands are even more important during uncertain times. A shaky financial climate increases pressure to control costs; a shift to remote work reshapes the cyber threat profile for the organization. Businesses need to control financial, operational, and security risks when facing the realities of a pandemic.
Accelerated Cloud Adoption in the Age of COVID-19: Why is Cloud Governance Important?
Stakeholders across the organization are concerned with a variety of governance topics, many of which focus on the cloud. Cloud governance is a multi-disciplinary approach that ensures cloud resources are designed, delivered, and consumed in a manner that adequately addresses organizational risk. However, organizations face a growing challenge: execution is usually siloed and often driven by different internal motivations.
Different groups use different tools, techniques, and taxonomy to address their respective needs. Without a coordinated effort, this creates redundant work for the business and undue overhead for cloud consumption. Isolated governance also creates spotty coverage of the various technical and non-technical risks that organizations should address.
Many business units within an organization care about governance:
- Cybersecurity teams care about security governance, ensuring that resources and cloud environments are compliant with regulatory or corporate policies and best practices
- Accounting cares about cost governance, ensuring that OPEX costs can be adequately controlled, tracked, and assigned to the right budgets
- IT and DevOps groups care about operational governance, ensuring that resources are deployed consistently and follow operational best practices
Cloud governance is key to realizing the benefits that cloud offers, including agility and elasticity, while minimizing unintended business or technical risks. However, risks are not limited to any one domain — cybersecurity, financial, or operational — and should be addressed collectively and holistically.
Cloud governance must be robust and adaptable to meet rapidly shifting business demands, not only in cybersecurity. As organizations continue to transform how they conduct business, there is a unique opportunity for cybersecurity stakeholders to take the lead in bringing together their peers from other parts of the business.
Security is often the primary driver for the governance of cloud adoption, so it is logical to have this group lead the charge to build consensus on the topic. A coordinated effort creates efficiency and consistency for these activities, reducing the redundant burden of siloed governance. A consolidated voice also lends credence to the notion that unfettered cloud usage in the name of business agility can ultimately be bad for business.
Guardrails vs. Handcuffs: Defining Flexibility in Cloud Governance
Building consensus for a multi-disciplinary cloud governance approach is necessary but not sufficient to enable business outcomes. A unified governance coalition could simply demand inflexible and prescriptive controls and processes that are incongruent with cloud agility. Just as important is how these teams choose to mitigate the various cloud risks for the organization.
Developing solutions “at the speed of business” requires a more flexible approach. It is important to squarely address the skepticism that additional oversight will create inefficiencies and delays for the business. Dictating strict controls inhibits the agility and creativity of teams. Inevitably, they will seek (and find) a way around the perceived roadblocks. Rather, putting up looser “guardrails” in the cloud enables teams to operate with more freedom, but still within parameters.
This model can provide a nice balance between constricted control and unfettered flexibility. These guardrails can take many forms but are often technical requirements implemented within a cloud platform (e.g., Azure Policy), manifesting well-written cloud security guidelines.
What Does Flexible Governance Look Like?
All major cloud platforms have the concept of tagging — metadata in the form of key-value pairs that can be associated with cloud resources. A cloud policy requires that all cloud compute resources (e.g., servers, containers) include tags for the internal cost center and the deployment environment (e.g., development, staging, production).
We can then configure the cloud platform to enforce this tagging policy for us, ensuring that the teams responsible for deployment will always provide the requisite information — a fairly low friction demand that provides value for different governance stakeholders. For example, accounting can accurately assign the costs to its department, operations can assign the requisite monitoring and service-level agreements for a production workload, and cybersecurity could have contextual information that can be useful in appropriately prioritizing a security incident or vulnerability remediation for different resources.
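As an illustrative sketch of how such a guardrail might be expressed, the following Python snippet validates the tagging policy described above before a deployment is accepted. The resource structure, tag keys, and allowed environment values are assumptions for illustration; in practice this logic would live in a platform-native policy engine such as Azure Policy.

```python
# Illustrative sketch of a tagging guardrail (hypothetical resource
# structure and tag keys; not any specific cloud provider's API).
REQUIRED_TAGS = {"cost-center", "environment"}
ALLOWED_ENVIRONMENTS = {"development", "staging", "production"}

def validate_tags(resource: dict) -> list[str]:
    """Return the policy violations for one resource (empty list = compliant)."""
    tags = resource.get("tags", {})
    violations = [
        f"missing required tag: {key}"
        for key in sorted(REQUIRED_TAGS - tags.keys())
    ]
    env = tags.get("environment")
    if env is not None and env not in ALLOWED_ENVIRONMENTS:
        violations.append(f"invalid environment tag: {env}")
    return violations

# A server deployed without a cost-center tag is flagged before it goes live.
server = {"name": "web-01", "tags": {"environment": "production"}}
print(validate_tags(server))  # ['missing required tag: cost-center']
```

Because the check runs automatically at deployment time, it is a guardrail rather than a handcuff: teams keep their freedom to deploy, within parameters.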
A Journey, Not a Destination
Following these ideas, organizations should be able to build a shared vision for cloud governance that effectively balances business flexibility with risk mitigation. They can continue to create more sophisticated tagging examples and implement governance-as-code, authoring and managing technical cloud controls like a software development project. They could even envision how this model could effectively govern the democratization of technology, through low-code and no-code development platforms.
The real test for this model is how it can evolve and adapt to change in the organization — as change has most likely already happened.
This article was originally published in Dataversity.
With the rapid pace and complexity of business transformation coupled with ever-increasing threat sophistication targeting hybrid environments, IT & Security teams are looking for trusted security partners who can help increase visibility, reduce complexity, and address critical talent shortages.
Large-scale breaches have impacted millions of people. The once-fringe subjects of ransomware, malware, denial-of-service attacks and phishing scams have captured public interest, impacted the bottom line, and earned the attention of leaders in public and private institutions around the globe. The increasing sophistication of threats has taken the risks of data and reputational loss to new heights – costing companies an estimated USD 1.5 trillion worldwide in 2018 alone. At the same time, organizations’ computing environments are rapidly transforming to deliver business outcomes for modern consumers in the modern world. Network perimeters continue to erode to enable this transformation and now include mobile devices, cloud applications and platforms, operational technologies (OT) such as sensors and controls, and industrial IoT (IIoT) devices.
In order to produce these business outcomes while protecting critical assets, data, and reputation, IT & IT security teams need visibility across the enterprise stack. They require trusted cybersecurity partners who can help them reduce the complexity of managing cybersecurity programs in multi-technology environments while maximizing the value of their investments.
Challenge the status quo: every organization should assume breach
The question is not if or when security will be breached – it is how quickly you can identify and mitigate a threat that’s already inside your organization. Executive boards are more involved and are looking for reassurance that the business is resilient against the latest threats. To deliver the expected results, threat detection, containment, and remediation must be rapid and effective, but currently most threats go undetected for an average of 101 days. A deeper level of intelligence is needed – superior visibility into threats and adversaries, greater contextual relevance, and a dynamic understanding of an evolving threat landscape.
Detect faster, respond efficiently
Traditional Managed Security Service Provider (MSSP) solutions lack the advanced capabilities required to combat advanced adversaries. An effective approach to threat detection needs to provide visibility and be non-linear, imitating the ad-hoc way an attacker moves through an environment. This requires specific skill sets and capabilities that should be continuously updated to stay ahead of the curve and detect and respond more rapidly to attacks. Such capabilities require a new way of monitoring and detection – a service that combines visibility, expert analysts, threat detection frameworks, and intelligence sharing.
Threat hunting approach
However good the technology and processes are, threats can still get through the net. The most advanced managed security requires dedicated teams of threat hunters – analysts with the mindset of a hacker who will investigate and research anomalous behavior, activity, and files to unearth unknown threats. With an international shortage of cybersecurity professionals close to 3 million worldwide, companies will have difficulty recruiting the talent directly.
Don’t stop at traditional IT security monitoring. Regardless of the environment – cloud, IT, or OT – it needs visibility and appropriate protection
Attack vectors are expanding with digital transformations, making it harder to reduce risk and maintain accurate visibility across the enterprise. The number of new platforms and applications collecting, storing and mining data is on the rise. Critical infrastructure is becoming more reliant on the Internet and IT environments to operate effectively. This combination provides security teams with a complex mission, attackers with new targets, and regulators with a new scope.
- Cloud Platform Visibility and Security Monitoring
According to Gartner, 75 percent of businesses will use a multi-cloud or hybrid cloud model for their businesses by 2020. While migrating to the cloud can save time and money in the short term, cloud adoption presents unique challenges when it comes to long-term data visibility and security, particularly in hybrid environments. Businesses need a way of monitoring, detecting and responding to threats regardless of where their data is stored.
- Visibility and Security Monitoring of Operational Technologies & Industrial Systems Controls
Operational Technology (OT) and Industrial Control Systems (ICS) networks represent a growing risk. Malicious activity is increasing, as evidenced by the growth in threat activity from ICS attack groups and the emergence of ICS-specific malware, such as Triton or Trisis. Prominent breaches in critical infrastructures, including water and energy utilities, have highlighted the need for better security. Nevertheless, many organizations still lack the visibility needed to monitor their industrial environments effectively.
Protecting businesses against sophisticated cyber attacks is an ongoing process for IT & IT security teams. Given the complex business drivers, threat landscape, and IT talent shortage, most organizations are working with trusted cybersecurity partners who can bring the critical visibility, solutions, resources, and intelligence to minimize these risks.
- Is my data safe in the cloud? Or would it be safer on premise?
Interview with Olivier Spielmann, Director of EMEA Managed Security Services, Kudelski Security
Information security relies on data confidentiality, integrity and availability. With proper security controls, all three aspects can be protected on-premise or in the cloud. Equally, all three can fail in the cloud or on-premise as well. Transition to the cloud means that solution responsibility is divided. Some parts are delegated to a third party while others remain the company’s responsibility (e.g. data accountability).
One key action is to adapt the security architecture design of your solution to the target environment (cloud vs on-premise) and support it with a solid contractual base. A cloud solution can’t be designed as an on-premise solution – it’s very different, for several reasons, e.g. data ubiquity and elasticity.
Today, data breaches of cloud environments are mainly due to human configuration errors, exposing unprotected data to the Internet.
The main risks of using cloud environments to store company data can be addressed by:
- Properly designing a secure cloud architecture that addresses confidentiality, availability and integrity aspects
- Performing due diligence on the cloud provider
- Putting in place a solid service contract
Whatever the stage of your cloud journey, Kudelski Security has services and solutions to support you – from cloud design, due diligence, security monitoring, to incident response in the cloud.
- Does it really make a difference whether I keep my data in Switzerland or in a foreign cloud?
No, as long as you don’t infringe the relevant regulations and you have a strong contract in place with your cloud provider. If you use cloud services to deliver business services, accountability remains your responsibility.
What does change when your data is stored in another country is the regulation enacted in case of a breach or to protect your data against a search. When storing the data at a cloud provider, the client should find out which governing laws apply and assess whether they are adequate.
- The cloud is becoming more hybrid and varied. How does one maintain the visibility needed for a secure environment?
The cloud is blurring the boundaries of data processing and storage. While appreciated for its flexibility, speed and ease of use, cloud services can become a freeway for voluntary or involuntary data exposure, and vast amounts of confidential data have been exposed as a result.
Risks can be addressed by training cloud user teams, properly architecting and configuring professional cloud environments, and monitoring company clouds for configuration errors.
Alternatively, companies can use the capabilities of Managed Security Service providers, like Kudelski Security. We monitor risks and configuration 24/7 and have reduced threat detection time from the average of 78 days to a few hours, in many cases.
- What new challenges does the IIoT create for IT-security providers?
Protecting IIoT environments is not the same as protecting IT environments. Industrial systems are built differently yet are now exposed to similar threats through their connection to IT networks. Industrial systems present new threats that can’t be handled by standard IT security measures. For example, scanning an industrial system with a vulnerability scanner may shut it down, stopping the manufacturing process.
In addition, IT security skills and solutions aren’t adapted to IIoT environments. Vendors and service providers need to offer new solutions to cover these newly exposed environments of critical service providers, e.g. energy. Companies looking to protect their assets in an IIoT environment can get support from Kudelski Security’s Cyber Fusion Center, which offers advisory, threat monitoring, hunting and incident response around the clock.
- Who watches the watchmen: How do these cybersecurity partners keep themselves safe?
At Kudelski Security, clients regularly challenge us to demonstrate we’re applying robust security controls and appropriate security governance processes. Cybersecurity partners should always practice what they preach by applying defense-in-depth security controls, threat monitoring and hunting and incident response to their own environments.
This article was originally featured in Netzwoche.
Maguire, M. (2018). https://www.bromium.com/press-release/hyper-connected-web-of-profit-emerges-as-global-cybercriminal-revenues-hit-1-5-trillion-annually/
(ISC)² Cybersecurity Workforce Study (2018). https://www.isc2.org/Research/Workforce-Study#
In the last, but certainly not least, installment of our cloud security series, we’ll be covering technologies. Under this umbrella, we cover both the security requirements and the cloud-native (or third-party) technologies needed to implement a “secure-to-be” public cloud.
There are plenty of ready-to-use security frameworks in the literature that give great insight into what is required to create a cloud security architecture. The CIS Controls and the NIST 800-53 publications are good examples, and ISO 27002 is also a useful document from which to draw security requirements. In our field experience, the security control domains below are a good starting point to cover most needs:
- CI/CD Pipeline
- Access Control
- Audit & Monitoring
- Key Management Solution (KMS)
- Run-time Security
- Data protection (Compliance)
- Incident and Response (I&R)
- Security Operation Center (SOC)
It’s beyond the scope of this article to go through these domains in detail and analyze all requirements, which would in any case be of limited use because they vary from company to company. Still, it is worth taking a more general look at the security technology trends that are popular in public clouds.
First, it is important to be aware that leading public cloud providers tend to offer not only managed security services (e.g., automated encryption of data at rest) but also fully managed development suites (like Kubernetes-as-a-Service), where security patching and scanning of the underlying operating systems are handled entirely by the provider. All these managed services are of great help to companies, especially those with small IT departments that need to focus on other project deliverables set by the business.
Some would say that traditional security tools from legacy environments could still be lifted and shifted into the cloud, but in reality they do not fit well with cloud-native apps designed for the public cloud, because those apps are built with completely different design criteria, as depicted in the table below:
Cloud-native security tools are not as sophisticated as legacy security tools, which have been developed and improved over the last 20 years. This is by design: cloud-native means that each security feature is broken down into atomic, decentralized tools.
The golden security rule of “defense-in-depth” ensures that protection remains as efficient as before by extending it across the full stack of ISO layers. For example, a deep packet inspection (DPI) next-generation firewall may be replaced by traditional layer-4 security groups, combined with distributed endpoint threat management solutions and anomaly detection logging. It’s not a one-for-one replacement, but the result of these measures can be the same or even better, because there is no bottleneck do-it-all product that claims to secure the entire IT environment by itself.
Each security tool used in the cloud is important, but the real added value comes from their cross-layer integration via common APIs and the ability to automate their actions based on common attack scenarios reproduced in pre-tested security playbooks. As we discussed in the Processes section, it is the whole team, not a single security officer, that assumes security responsibility in an organization. Likewise, here it is not a single product that will save your data from being stolen, but rather a collection of security products tightly integrated and automated.
This ends our series on Public Cloud Security, where we introduced and focused on key security challenges and pitfalls that arise when a company gets involved in resource-intensive and time-consuming projects to drive the refactoring and re-platforming of critical business workloads.
In the first part of our series, we discussed the myths that arise in organizations by having different groups of people with different perspectives on cloud risks. In part two, we’ll be tackling the Processes.
Not without reason, security has become synonymous with either saying “no” or missing project deadlines. Therefore, during key app-refactoring projects, security must adopt new processes and find the right balance between the compliance needs of security and the business needs of agile sprints.
An Automation Mindset
First, it is imperative for security to embrace automation and put as many security controls as possible directly into the pipelines. Automated security tools (accessible via APIs) can provide many security measures, such as compliance checks, static and dynamic security testing (SAST and DAST), vulnerability scanning, and more. Although they are prone to false positives, they act as the first line of defense.
Secondly, security officers need to be an integral part of the DevOps community by participating, to the extent of their capabilities, in the coding process and in the creation of “Hacker/Abuse stories” (potential hacker scenario simulations). This way, they have the opportunity to be heard by the rest of the team and considered a cooperative resource inside the organization.
Finally, infrastructure changes (also known as merge requests) are first validated under the four-eyes principle and then by automated compliance checks, defined and run in production (e.g., checking that a file bucket is not publicly accessible), which alert the support department immediately in case of a failed check.
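A minimal sketch of one such automated compliance check is shown below in Python. The bucket data structure, field names, and alert format are illustrative assumptions rather than any specific cloud provider's API.

```python
# Hypothetical compliance check: flag any storage bucket that is
# publicly accessible (field names are illustrative assumptions).

def check_bucket_not_public(bucket: dict) -> bool:
    """Return True if the bucket passes the compliance check."""
    acl_public = bucket.get("acl") == "public-read"
    policy_public = bucket.get("block_public_access") is False
    return not (acl_public or policy_public)

def run_compliance_checks(buckets: list[dict]) -> list[str]:
    """Run the check over all buckets and collect alerts for the support team."""
    return [
        f"ALERT: bucket '{b['name']}' is publicly accessible"
        for b in buckets
        if not check_bucket_not_public(b)
    ]

buckets = [
    {"name": "internal-logs", "acl": "private", "block_public_access": True},
    {"name": "exports", "acl": "public-read", "block_public_access": False},
]
print(run_compliance_checks(buckets))
# ["ALERT: bucket 'exports' is publicly accessible"]
```

In a real pipeline this kind of check would run on every merge request and on a schedule in production, so a misconfiguration is caught whether it was introduced in code or by manual change.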
The other takeaway implicit in all three points above is that there is a shift in responsibility from a single person overseeing security to the whole DevOps team, which truly becomes a DevSecOps team the moment it takes collective charge of the security stream.
Rapid Risk Assessment
We all know that security ultimately comes down to risk. Setting aside corner cases where the risk assessment procedure has become a mere box-ticking exercise, companies today frankly do not treat risks properly. This is due to several factors: security teams are often understaffed, overwhelmed, or simply missing the necessary technical depth and breadth for these projects. In these circumstances, it is sometimes easier to run an external pentest or risk assessment, effectively delegating responsibility outside the organization, which can carry risk implications in and of itself.
Instead, it is more advisable to create an even tighter connection between security and DevOps processes by integrating risk assessments into pipeline processes. In the literature and in the field, there are more and more examples of agile risk assessments (see, for example, Mozilla’s Rapid Risk Assessment (RRA) project), where individual DevOps stories are created for each risk after a short assessment (30-45 minutes at most) made by two to three people from technical and business backgrounds.
This way, not only is there collective ownership of risks by all members of the team but, most importantly, visibility and transparency of risks are finally achieved throughout the whole lifecycle of the project (which wasn’t necessarily the case before).
Even if all the processes described above are done by the book, this might still not be enough if security lacks proper management support. As discussed previously, security is indeed deeply embedded in DevOps and business processes, but it must also have a direct link and sponsorship at the Cx level throughout the whole project to highlight potential risks to the board.
Furthermore, management should incentivize security work with bonuses and rewards, for example by prioritizing critical bug fixes against other business and application tasks. This will improve engineering teams’ attitude toward security, because it is no mystery that many departments see security as a mere extra burden on top of what they already need to do. This way, engineers won’t see security solely as checkbox tasks dictated from above, but as real added value to the organization, protecting core assets, the business and, ultimately, its reputation.
This second article focused on a fundamental aspect of public cloud security: processes. It is in these processes that we see the incubation of many breakthrough ideas that – once established – will totally reshape the current security landscape and potentially lead the way for the DevOps community to transform other IT and business functions of organizations.
In the third and final article, after introducing the key requirements to make the public cloud secure, we will see how the landscape of security tools has changed to adhere to the new cloud-native design principles. Stay tuned for Technology.
In this three-part blog series on public cloud security, Kudelski Security’s Cybersecurity Architect Giulio Faini, covers the trinity of People, Process and Technology that comprise all good transformation recipes.
Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) are on the radar of every CISO. Not only are they the public cloud services with the fastest growth projected over the next few years, but they’re arguably the best-suited cloud migration approaches for major digital transformations.
Classic security practices designed for on-prem deployments are ill-adapted to the specifics of cloud-native services and thus require a deep rethink. Generally speaking, we all agree that security is about three main components, all equally important: People, Processes, and Technology.
Having been a techie for many years, I would be tempted to jump straight to Technology, hoping that the other two would follow consequentially. But the reality is that the People/Human factor plays a vital role for the success of these kinds of projects, and so my article series will begin with this and common misconceptions that need to be addressed in order to move forward with digital transformation.
People – Change your Mindset; Address the Myths
A radical change of mindset is in fact required from all stakeholders in the organization, but foremost from the security staff, who usually keep working across old legacy projects as well as new cloud-specific designs.
The main actors playing a role in cloud security streams (CISOs and security officers, DevOps teams, security and system architects, legal and compliance teams, Cx levels) need to agree on how best to debunk widely held myths about the cloud, like the ones listed below. This is important: failure to separate fact from fiction will impede innovation and lead to endless delays in your digital transformation.
- Myth #1: Visibility is Lost in the Cloud – There is a common belief among customers that they will lose sight of their precious data and resources. In reality, the public cloud resolves this issue for good. In the public cloud space, even turned-off devices are listed in automatic inventory tools, so there is no longer a risk of unknown devices hanging around the domain controller, as happened on-prem. Every major cloud vendor has an automatic, free inventory tool to ease this task. But what about data visibility from the geopolitical point of view? Assuming you trust your vendor (at least as much as you have trusted your computer manufacturer so far), then the legal department needs to do its due diligence in signing off contracts, preferably assisted by security teams to clarify all technical issues.
Finally, for the most security-conscious minds, cryptography is your friend and can keep your data safe, perhaps with private keys stored on your own premises. If cryptography has been used in the past for e-commerce use cases, then it can be reused to protect cloud data when it is located elsewhere.
- Myth #2: No Perimeter Means Weak Security – Legacy-minded people feel reassured when their data is behind a locked door, so they’ll try to transfer the perimeter onto the public cloud. But in reality, perimeter security is a red herring: we know that there is more than one entry point into a network, and internal attacks pose more of a threat because they can go undetected for a long time. The public cloud approaches the security challenge with the concept of Zero-Trust Architecture: rely on strong authentication (MFA), short-lived, least-privileged credentials, and cryptography (which goes just about everywhere).
- Myth #3: Cloud Impacts Availability – There is a widely held belief that availability is more complex in the public cloud because there is an additional dependency on somebody else (i.e., the cloud provider). In practice, this can be mitigated by adopting Infrastructure-as-Code (IaC), which makes it easier to mirror cloud workloads to disaster recovery locations, something that is rarely the case for legacy data centers. But what if the public cloud service provider itself fails? That’s not a problem if you choose a multi-cloud strategy from the start of the project. As developers are already aware, the container revolution has already started in the IT industry: critical apps are packaged in wrappers (i.e., containers) that include all the logic required for the app to run autonomously. Containers, standardized by the Cloud Native Computing Foundation (CNCF), can thus easily (i.e., without any modification) be moved to other public cloud providers. As a result, the availability risk is addressed. Plus, as a bonus, this security requirement can also drive savings by allowing you to choose the vendor you want and avoid vendor lock-in.
- Myth #4: It’s Got to Be Perfect Before We Migrate – Legacy-minded people often believe a high-level of security assurance is needed before the program can progress.
To avoid perpetual delays in cloud migration go-live dates, all stakeholders should agree on a baseline security architecture that covers MVP (minimum viable product) requirements. It goes without saying that security is, and always will be, improved constantly over time. But with at least a minimum level of security (e.g., initially limiting the project to a private-facing environment), it’s possible to allow the business to start using the cloud infrastructure for its projects.
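To make the short-lived-credentials idea from Myth #2 concrete, here is a toy Python sketch: tokens carry an expiry, and a zero-trust service re-checks validity on every request instead of trusting a perimeter. The token format and 15-minute lifetime are illustrative assumptions, not any real identity provider's API.

```python
# Toy illustration of short-lived, least-privilege credentials
# (hypothetical token format; real systems use signed tokens, e.g. JWTs).
import time

TOKEN_TTL_SECONDS = 15 * 60  # assumption: 15-minute lifetime

def issue_token(principal: str, now=None) -> dict:
    """Issue a token that is only valid for a short window."""
    issued = time.time() if now is None else now
    return {"principal": principal, "expires_at": issued + TOKEN_TTL_SECONDS}

def is_token_valid(token: dict, now=None) -> bool:
    """A zero-trust service re-checks expiry on every request."""
    current = time.time() if now is None else now
    return current < token["expires_at"]

token = issue_token("deploy-bot", now=0.0)
print(is_token_valid(token, now=600.0))   # True: within the 15-minute window
print(is_token_valid(token, now=3600.0))  # False: token has expired
```

The point is that a stolen credential loses its value within minutes, which is exactly the property a perimeter-only model cannot offer.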
To conclude, in this first article about public cloud security, we have looked at some common myths that still persist in 2019, especially in the initial phases of cloud migration projects.
Clarify the facts with your team as soon as possible to avoid project failure. For some corner cases, get help from a trusted external advisor who can untie knots and facilitate progress without never-ending discussions.
In Giulio’s next article, he will explain how DevOps movements, which are at the heart of most public cloud migration projects, have deeply changed security processes and unpack the risks for organizations who don’t follow these new IT trends.
Do you have full visibility into your cloud applications and platforms? Are all of your cloud assets securely configured and managed? Can you contain and analyze a cloud attack in an automated way?
Cloud security is top of mind for CIOs and CISOs, faced with a changing technology paradigm in which control and security responsibility has become a shared concern. Widespread adoption of software-as-a-service (SaaS) applications and infrastructure-as-a-service (IaaS) platforms as a means of improving business efficiency naturally leads to an increase in the number and frequency of cloud-based cyber-attacks.
Organizations are challenged to transition legacy systems (and the associated legacy IT management or security practices) to newer cloud paradigms, often inadvertently and unknowingly creating security risks in the process. In order to create an integrated, holistic and workable cloud security strategy, CISOs – particularly public-sector and larger enterprises – must reexamine policies and technology choices against an ever-changing and sophisticated threat landscape.
CISOs are faced with a changing paradigm whereby security responsibility in the cloud is a shared concern between the cloud service provider and customer. With shared responsibility, organizations can leverage the security foundation and, in many cases, cloud-native security tools offered by the providers to focus their efforts on securing operating systems, applications, and data. However, customers must clearly understand what their security responsibilities are and not incorrectly assume these activities are being performed by the cloud platform or application provider.
In this second paper of our Reference Architecture series, we consider cloud security and the relevant protection technologies from some of the industry’s leading vendors. We use the widely recognized National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) to identify these activities, and categorize them by their respective components from Secure Blueprint, our strategic approach to cybersecurity program management.
To fulfill these cloud security activities and address cloud risks, we highlight cloud protection technologies from leading vendors that work in concert with the native security services from leading IaaS and SaaS providers. We take a clean-sheet approach that presupposes no existing cloud security or management technologies. However, we recognize that most organizations do not start with a blank slate, and in some cases, alternative technologies to the ones that we have highlighted may make more sense based on current IT investments, business needs, regulatory considerations, etc. Organizations can also compare their incumbent risk management activities and technology solutions to identify gaps in their existing cloud protection.
Our aim is to help you make smart technology decisions in an ever-crowded and noisy cloud security market.
To better understand your cloud risk posture and identify gaps that may exist with your current cloud protection technologies, click here to read our Cloud Security Reference Architecture.