by Giulio Faini | Jun 15, 2021 | Cybersecurity
Security has evolved since the days when cybersecurity systems were evaluated by the number of incidents handled by the InfoSec team over a year. IT departments and organizational leadership adopted the attitude that no news (or no data breaches) meant no security problems, so all was well.
That approach wasn’t true then, and it certainly isn’t true now. Over time, the record has shown security to be a key business enabler in digital transformation (DX), most effectively protecting and managing the most valuable asset: data.
DX has been the force behind the rapid pace of innovation. Successful innovators must juggle the uncertainty of DX processes and security risks. One approach is based on the “fail fast, learn fast” rule. Cryptography is an example of this rule: instead of giving a mathematical proof that an algorithm is safe, the community accepts (and considers trusted) a cryptographic scheme because it is very unlikely to be broken in the foreseeable future.
Emphasizing Security in Digital Transformation (DX)
Threat actors, however, are diligent in their attempts to break into new technologies and find ways to get to the data. Security plays a vital role in any data-driven DX by staying ahead of such dangerous threat actors. Here are three examples of how companies in different industries are successfully managing DX, thanks to data security.
- Digital Transformation in the Medical Industry
CheckPoint Cardio invented a wearable device that constantly sends dozens of raw health measurements (ECG, pulse, blood pressure, and so on) to a remote center that correlates them into heart-related events and responds in real time. Medical professionals use this information to treat patients and respond quickly to health emergencies.
But what happens if this medical data ends up in the wrong hands? Not only does it hold business value for healthcare facilities, but threat actors could manipulate it in ways that directly impact patient health. And because health data is strictly regulated, compromised data could result in massive fines and penalties.
To protect the data, controls such as client-side encryption reduce the risks introduced by third-party providers. This greater emphasis on data security makes it easier for hesitant companies to adopt this innovative and useful technology.
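As a hedged illustration (a minimal sketch, not CheckPoint Cardio’s actual implementation), the snippet below shows client-side encryption in Python with the cryptography library’s Fernet recipe: the key is generated and kept on the client’s premises, so the third-party provider only ever stores ciphertext. The upload call is a hypothetical placeholder.

```python
# Minimal client-side encryption sketch (assumes: pip install cryptography).
# The symmetric key never leaves the client; the provider stores only ciphertext.
from cryptography.fernet import Fernet

# Generated once and kept on-premises (e.g., in an HSM or local key store).
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_reading(raw_reading: bytes) -> bytes:
    """Encrypt a raw health reading before it leaves the client."""
    return cipher.encrypt(raw_reading)

def decrypt_reading(ciphertext: bytes) -> bytes:
    """Decrypt on the client side only; the provider cannot do this."""
    return cipher.decrypt(ciphertext)

# Hypothetical usage: only the encrypted payload is sent to the third party.
payload = encrypt_reading(b'{"ecg": [0.12, 0.15], "pulse": 72}')
# upload_to_cloud(payload)  # placeholder for the provider's ingest API
assert decrypt_reading(payload).startswith(b'{"ecg"')
```

Because decryption requires a key that never leaves the client, a compromise on the provider’s side exposes only unreadable ciphertext.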
- Digital Transformation in the Media/Social Platforms Industry
Facebook has always sold information generated by its users to external advertisers who, in turn, often resell the same information. The data mining and sharing by Cambridge Analytica showed both the value of that data to outside entities and the security and privacy implications for Facebook users. Facebook’s reputation was seriously damaged by the scandal.
Since then, the company has changed its data security and privacy approach to achieve GDPR compliance (across all its technical and organizational measures), and security has become a compelling requirement for its ads-based business to flourish.
- Digital Transformation in the Financial Industry
Consumers increasingly want a fully personalized offering from their financial providers. The more contextualized data a company accumulates from its customers, the easier it is to improve and personalize its service. A traditional data-lake system that follows security and privacy best practices would do the job, but companies are also constantly researching new security tools to best protect consumer data and strengthen their market appeal. One such tool (even with its security and privacy shortcomings) is blockchain technology.
The Challenge of Legacy Systems
DX opens the doors for new revenue opportunities for companies, and data-driven security is designed to enable such DX by keeping the additional information safe. However, organizations that rely on legacy systems lack data-driven security awareness. A Cambridge University research survey reveals that “71 percent of respondents agreed that there are data quality and integrity issues that make it difficult or impossible to implement a data-driven business model, as users quickly abandon apps that provide incorrect information.”
John Chambers, CEO of Cisco at the time, commented that dynamic companies, i.e., the ones that adapt services to customer needs, will gain a competitive advantage. “Forty percent of businesses in this room, unfortunately, will not exist in a meaningful way in ten years,” he said in a keynote address. He added that while 70 percent of companies would attempt to go digital, only 30 percent would actually succeed.
At the heart of successful digitalization is data security. But the success of DX security is the responsibility of corporate leadership:
- The CEO is the main sponsor of the DX project and is the individual ultimately accountable for its success (or failure) in front of the steering board and investors.
- The CIO (or CTO) reports to the CEO and is accountable for the Business-as-Usual (BAU) IT Operations during all phases of the DX.
- The CISO reports to the CEO (and possibly the CIO) and is accountable for handling malicious threats that arise during or after the DX.
Accountability and Predictability in Secure Digital Transformation
To meet the challenge of a secure digital transformation, leadership needs to emphasize two areas: accountability and predictability.
- Accountability: Security should always be a shared responsibility that concerns everybody in the company. To promote that mindset, good security behavior should be incentivized with bonuses and rewards. This way, employees will not see security solely as checkbox tasks dictated from above, but as real added value to the organization, protecting core assets, business, and, ultimately, reputation.
- Predictability: This has to do with lateral thinking and will be treated in my next article, which is more oriented toward architects and technical folks.
Conclusion
Although security should be deemed mandatory by everyone, it is rarely seen as the main enabler of DX itself. As the examples from different industry verticals show, security must own this active role if businesses are to advance in digitalization.
A small change to your mindset can result in a big change to your bottom line. You will avoid a breach that could disrupt operations, damage your brand, or put you out of business. But with a security-by-default mindset, you will also find new markets and generate new income sources.
Whether you’d prefer to embrace a potential win or avoid a sure loss, it is definitely worth digging into the topic more.
by Giulio Faini | May 9, 2019 | Cloud Security
In this last, but certainly not least, installment of our cloud security series, we’ll be covering technologies. Under this umbrella, we cover both the security requirements and the cloud-native (or third-party) technologies that are needed to implement a “secure-to-be” public cloud.
In the literature, there are plenty of ready-to-use security frameworks that give great insight into what is required to create a cloud security architecture. The CIS Controls and the NIST 800-53 publications are good examples, and ISO 27002 is also a useful document from which to draw security requirements. In our field experience, the 12 domains of security controls below are a good starting point to cover most needs:
- Inventory
- Authentication
- Authorization
- CI/CD Pipeline
- Access Control
- Audit & Monitoring
- Confidentiality
- Key Management Solution (KMS)
- Run-time Security
- Data protection (Compliance)
- Incident Response (IR)
- Security Operation Center (SOC)
It’s beyond the scope of this article to go through these domains in detail and analyze all the requirements, which would in any case be pointless because they vary from company to company. Still, it is interesting to take a more general look at the security technology trends that are popular in public clouds.
First, it is important to be aware that the leading public cloud providers tend to offer not only managed security services (such as automated encryption of data at rest) but also fully-managed development suites (like Kubernetes-as-a-Service), where security patching and scanning of the underlying operating systems are handled entirely by the provider. All these managed services are of great help to companies, especially those with a small IT department that needs to focus on other project deliverables set by the business.
Some would say that traditional security tools from legacy environments could simply be lifted and shifted into the cloud, but in reality they do not fit well with cloud-native apps designed for the public cloud, because these apps are built with completely different design criteria, as depicted in the table below:

[Table: design criteria of legacy security tools vs. cloud-native security tools]
Cloud-native security tools are not as sophisticated as legacy security tools, which have been developed and improved over the last 20 years. This is by design: cloud-native means that each security feature is broken down into atomic, decentralized tools.
The golden security rule of “defense-in-depth” ensures that protection remains as effective as before by extending it across the full stack of OSI layers. For example, a deep-packet-inspection (DPI) next-generation firewall may be replaced by traditional layer-4 security groups, combined with distributed endpoint threat management solutions and anomaly-detection logging. It’s not a one-to-one replacement, but the combined result of these measures can be the same or even better, because there is no do-it-all bottleneck product that pretends to secure the entire IT environment by itself.
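As a hedged sketch of one such atomic, decentralized control, the snippet below uses Python’s boto3 to define a layer-4 security group that admits only HTTPS from a known address range; the VPC ID and CIDR are placeholder values, not drawn from any real environment.

```python
# Sketch: an atomic layer-4 control (an AWS security group) via boto3.
# Assumes AWS credentials are configured; the VPC ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="app-tier-l4",
    Description="Layer-4 allow-list: HTTPS from the corporate range only",
    VpcId="vpc-0abc1234def567890",  # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",  # placeholder corporate range
            "Description": "HQ egress range",
        }],
    }],
)
# Everything else is denied by default. Depth comes from pairing this rule
# with endpoint threat management and anomaly-detection logging, not from
# a single do-it-all box.
```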
Each security tool used in the cloud is important, but the real added value comes from their cross-layer integration via common APIs and the ability to automate their actions based on common attack scenarios reproduced in pre-tested security playbooks. As we discussed in the Processes article, it is the whole team, not a single security officer, that assumes security responsibility in an organization. Likewise, it is not a single product that will save your data from being stolen, but rather a collection of security products tightly integrated and automated.
This ends our series on public cloud security, in which we introduced and focused on the key security challenges and pitfalls that arise when a company embarks on resource-intensive and time-consuming projects to drive the refactoring and re-platforming of critical business workloads.
by Giulio Faini | Apr 23, 2019 | Cloud Security
In the first part of our series, we discussed the myths that arise in organizations when different groups of people hold different perspectives on cloud risks. In part two, we’ll be tackling Processes.
Not without reason, security has become synonymous with either saying no or missing project deadlines. Therefore, during key app-refactoring projects, security must adopt new processes and find the right balance between its compliance needs and the business needs of agile sprints.
An Automation Mindset
First, it is imperative for security to embrace automation and put as many security controls as possible directly into the pipelines. Automated security tools (accessible via APIs) can provide many security measures, such as compliance checks, static and dynamic security testing (SAST and DAST), vulnerability scanning, and more. Although they are prone to false positives, they act as the first line of defense.
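As an illustrative sketch (one possible gate among many, assuming a Python codebase under a src/ directory), the snippet below wraps the open-source Bandit SAST scanner and fails the CI stage on any high-severity finding:

```python
# Sketch of a pipeline security gate: run a SAST scan (Bandit) and fail the
# build on high-severity findings. Assumes: pip install bandit; "src/" is a
# placeholder for the repository's code directory.
import json
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "src/", "-f", "json", "-q"],
    capture_output=True, text=True,
)
report = json.loads(result.stdout)

high = [f for f in report.get("results", [])
        if f.get("issue_severity") == "HIGH"]

for finding in high:
    print(f'{finding["filename"]}:{finding["line_number"]}: {finding["issue_text"]}')

# A non-zero exit code makes the CI stage (and hence the pipeline) fail.
sys.exit(1 if high else 0)
```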
Secondly, security officers need to be an integral part of the DevOps community by participating, to the extent of their capabilities, in the coding process and in the creation of “hacker/abuse stories” (potential hacker scenario simulations). This way, they have the opportunity to be listened to by the rest of the team and to be seen as a cooperative resource inside the organization.
Finally, infrastructure changes (also known as merge requests) are first validated under the four-eyes principle, and then by automatic compliance checks that are defined and run in production (e.g., checking that a file bucket is not publicly accessible) and that alert the support department immediately in case of a compliance failure.
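For illustration, here is a minimal sketch of such a production compliance check using Python’s boto3 against AWS S3; the alert_support() hook is a hypothetical stand-in for whatever alerting channel the support department actually uses.

```python
# Sketch: continuous compliance check that no storage bucket is publicly
# accessible (AWS S3 flavor). alert_support() is a placeholder hook.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def alert_support(bucket: str, reason: str) -> None:
    print(f"ALERT: {bucket}: {reason}")  # placeholder: page the support team

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"]
    except ClientError:
        # No public-access block configured at all: treat as non-compliant.
        alert_support(name, "no public access block configured")
        continue
    if not all(cfg.values()):
        alert_support(name, f"public access block incomplete: {cfg}")
```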
The other takeaway implicit in all three points above is the shift in responsibility from a single person overseeing security to the whole DevOps team, which truly becomes a DevSecOps team the moment it takes collective charge of the security stream.
Rapid Risk Assessment
We all know that security ultimately comes down to risk. Setting aside the corner cases where the risk assessment procedure has become just “ticking a checkbox,” companies these days quite frankly do not treat risk properly. This is due to several factors, among them that security teams are always understaffed, overwhelmed, or simply missing the necessary technical depth and breadth for these projects. In these circumstances, it is sometimes easier to run an external pentest or risk assessment, effectively delegating responsibility outside the organization, which carries risk implications of its own.
Instead, it is more advisable to create an even tighter connection between security and DevOps processes by integrating risk assessments into the pipeline processes. In the literature and in the field, there are more and more examples of agile risk assessments (see, for example, Mozilla’s Rapid Risk Assessment (RRA) project), where a single DevOps story is created for each risk after a short assessment (30-45 minutes at most) made by two to three people from technical and business backgrounds.
This way, not only is there collective ownership of risks by all members of the team, but most importantly, visibility and transparency of risks are finally achieved throughout the whole lifecycle of the project (which wasn’t necessarily the case before).
Even if all the processes described above were done by the book, this might still not be enough if security lacks proper management support: as discussed previously, security is indeed deeply embedded in DevOps and business processes, but it must also have a direct link to, and sponsorship at, the Cx level throughout the whole project in order to highlight potential risks to the board.
Furthermore, security work should be strongly incentivized by management with bonuses and rewards, for example to prioritize critical bug fixes over other business/app processes. This will improve the engineering teams’ attitude toward security, because it is no mystery that many departments perceive security as a mere extra burden on top of what they already need to do. This way, engineers won’t see security solely as checkbox tasks dictated from above, but as real added value to the organization, protecting core assets, business, and, ultimately, reputation.
This second article focused on a fundamental aspect of public cloud security: processes. It is in these processes that we see the incubation of many breakthrough ideas that, once established, will totally reshape the current security landscape and potentially lead the way for the DevOps community to transform other IT and business functions of organizations.
In the third and final article, after introducing the key requirements to make the public cloud secure, we will see how the landscape of security tools has changed to adhere to the new cloud-native design principles. Stay tuned for Technology.
by Giulio Faini | Mar 19, 2019 | Cloud Security
In this three-part blog series on public cloud security, Kudelski Security’s Cybersecurity Architect, Giulio Faini, covers the trinity of People, Process, and Technology that comprises all good transformation recipes.
Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) are on the radar of every CISO. Not only are they the public cloud services with the fastest growth projected over the next few years, but they’re arguably the best-suited cloud migration approaches for major digital transformations.
Classic security practices designed for on-prem deployments are ill-adapted to the specifics of cloud-native services and thus require a deep rethink. Generally speaking, we all agree that security is about three main components, all of which are equally important: People, Processes, and Technology.
Having been a techie for many years, I would be tempted to jump straight to Technology, hoping that the other two would follow as a consequence. But the reality is that the people/human factor plays a vital role in the success of these kinds of projects, so my article series begins with it, and with the common misconceptions that need to be addressed in order to move forward with digital transformation.
People – Change your Mindset; Address the Myths
A radical change of mindset is in fact required from all stakeholders in the organization, but foremost from the security staff, who usually keep working across old legacy projects as well as new cloud-specific designs.
The main actors playing a role in cloud security streams (the CISO and security officers, DevOps teams, security and system architects, legal and compliance teams, and Cx levels) need to agree on how best to debunk widely held myths about the cloud, like the ones listed below. This is important: failure to separate fact from fiction will impede innovation and lead to endless delays in your digital transformation.
- Myth #1: Visibility is Lost in the Cloud – There is a common belief among customers that they will lose sight of their precious data and resources. In reality, the public cloud resolves this issue for good. In the public cloud, even turned-off devices are listed in automatic inventory tools, so there is no longer the risk of unknown devices hanging around the domain controller, as happened on-prem. Every major cloud vendor offers a free, automatic inventory tool to ease this task. But what about data visibility from the geopolitical point of view? Assuming you trust your vendor (at least as much as you have trusted your computer manufacturer so far), the legal department needs to do its due diligence in signing off contracts, preferably assisted by the security teams, to clarify all technical issues.
Finally, for the most security-conscious minds, cryptography is your friend and can keep your data safe, perhaps with private keys stored on your own premises. If cryptography has been trusted in the past for e-commerce use cases, then it can be reused to protect cloud data located elsewhere.
- Myth #2: No Perimeter Means Weak Security – Legacy-minded people feel reassured when their data is behind a locked door, and they’ll try to transfer that perimeter onto the public cloud. But in reality, perimeter security is a red herring: we know that there is more than one entry point into a network, and internal attacks pose more of a threat because they can go undetected for a long time. The public cloud approaches the security challenge with the concept of Zero-Trust Architecture: rely on strong authentication (MFA), short-lived, least-privilege credentials, and cryptography (which goes just about everywhere); see the sketch of short-lived credentials after this list.
- Myth #3: Cloud Impacts Availability – There is a widely held belief that availability is harder to achieve in the public cloud because there is an additional dependency on somebody else (i.e., the cloud provider). In practice, this can be mitigated by adopting Infrastructure-as-Code (IaC), which makes it easier to mirror cloud workloads to disaster-recovery locations, something that is rarely the case for legacy data centers. But what if the public cloud provider itself fails? That is not a problem if you choose a multi-cloud strategy from the start of the project. As developers are already aware, the container revolution has already started in the IT industry: critical apps are packaged in wrappers (i.e., containers) that include all the logic required for the app to run autonomously. Containers, standardized by the Cloud Native Computing Foundation (CNCF), can thus easily (i.e., without any modification) be moved to other public cloud providers. As a result, the availability risk is addressed. Plus, as a bonus, the security requirement itself can drive savings by allowing you to choose the vendor you want and avoid vendor lock-in.
- Myth #4: It’s Got to Be Perfect Before We Migrate – Legacy-minded people often believe a high level of security assurance is needed before the program can progress.
To avoid perpetual delays in cloud migration go-live dates, all stakeholders should agree on a baseline security architecture that covers MVP (minimum viable product) requirements. It goes without saying that security is, and always will be, improved over time. But with at least a minimum level of security (e.g., initially limiting the project to a private-facing environment), it is possible to let the business start using the cloud infrastructure for its projects.
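As promised under Myth #2, here is a minimal sketch of the short-lived, least-privilege credentials at the heart of zero trust, using AWS STS via Python’s boto3. The role ARN is a placeholder, and the role’s attached policy is assumed to grant only the minimum permissions the session needs.

```python
# Sketch: short-lived, least-privilege credentials via AWS STS (boto3).
# The role ARN is a placeholder; the role's policy should grant only what
# this session needs.
import boto3

sts = boto3.client("sts")

session = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/read-only-analytics",  # placeholder
    RoleSessionName="zero-trust-demo",
    DurationSeconds=900,  # credentials expire after 15 minutes
)
creds = session["Credentials"]

# Use the temporary credentials; they are useless to an attacker after expiry.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print("temporary credentials expire at:", creds["Expiration"])
```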
To conclude, in this first article about public cloud security, we have looked at some common myths that still persist in 2019, especially in the initial phases of cloud migration projects.
Clarify the facts with your team as soon as possible to avoid project failure. For the corner cases, get help from a trusted external advisor who can untie knots and facilitate progress without never-ending discussions.
In Giulio’s next article, he will explain how the DevOps movement, which is at the heart of most public cloud migration projects, has deeply changed security processes, and will unpack the risks for organizations that don’t follow these new IT trends.