Our top cybersecurity predictions for 2023

It’s the time of year when the industry begins making its top cybersecurity predictions for the year ahead. Gartner, among others, recently released their top 8 cybersecurity predictions for 2023, writing that supply chain and geopolitical issues will continue to dominate cybersecurity.

In this article, our team looks into the proverbial crystal ball to share their top cybersecurity predictions and what initiatives security leaders should prioritize for 2023.

What Cybersecurity Lessons Did We Learn in 2022?

The breaches, hacks, and cyber breakdowns in 2022 taught us many cybersecurity lessons that we can use to improve security in the new year. Lessons learned include:

  • You can’t rely on MFA.
  • Company stakeholders, including VCs and board members, must have insight into their company’s security stance.
  • Don’t sacrifice security for a 1% improvement of your product. Constant re-architecting creates numerous security holes.
  • Continuous security is mandatory for blockchain. Instead of one-time assessments at launch, teams should strive for continuous validation throughout the project lifecycle.

What Are the Top Cybersecurity Predictions for 2023?

The top cybersecurity predictions for 2023 identified by the team of experts at Kudelski Security are:

  1. Basic, human-targeted attacks will be the biggest risk to cyber defenses.
  2. Zero trust will replace VPN.
  3. Insider and third-party risk will rise.
  4. Reliance on passwords will decline.
  5. Skepticism around blockchain security and availability will continue.
  6. Quantum-interested companies will need to start assessing risks.

Prediction #1: Basic, human-targeted attacks, like ransomware, phishing, and email attacks, will be the biggest risk to cyber defenses.

In 2023, we will see the most basic security attacks — email compromise, active directory attacks, ransomware, phishing, and multi-factor authentication attacks — continue to be the most effective and lucrative for cybercriminals.

Whenever humans are introduced into the security equation, they immediately create holes in the corporate cyber defense system. Phishing and emerging MFA bombing schemes are more sophisticated than ever and will render cybersecurity training ineffective.

“Whenever humans are introduced into the security equation, they immediately create holes in the corporate cyber defense system.”

To combat these attacks, corporate security teams should not trust human factors. Instead, they should adopt an offensive security posture. Detection and response initiatives should focus on preventative features instead of reactive quick fixes.

Will your threat detection and response strategies stand up to advanced threats? Watch our webinar to learn how to improve program maturity.

Prediction #2: Zero trust will replace VPN to secure a distributed workforce.

In 2023, zero trust will replace virtual private networks completely as security teams adjust to a more dispersed workforce. With work-from-home here to stay, company network borders won’t look anything like they used to. Employees are accessing most work applications via SaaS, and IT teams are hesitant to inherit the risk of home networks. Mistrusting every device is the key to supporting and securing remote workforces.

Can zero trust be a business enabler? Read our take in this blog post from Vincent Whaart.

Prediction #3: Insider and third-party risk will rise as attackers take advantage of vulnerable parties in the economic downturn.

The threat of recession will loom even larger in 2023, and cybercriminals will take advantage of the dire economic situation to bribe their way into corporate systems. We predict that software hacking will decline in 2023 in favor of “insider risk.”

Attackers will set aside their hacking skills and instead single out vulnerable employees at third-party vendors, such as shipping authorities, supply chain companies, internet service providers, and software vendors.

Companies must remain vigilant to not only secure their own network perimeters but also build a strong vendor risk management program.

Prediction #4: Reliance on passwords will decline as the flimsiness of MFA is exposed.

While it’s unlikely that passwords will completely disappear in 2023, MFA fatigue could usher in a passwordless future in years to come. The recent Uber breach highlighted the flimsiness of MFA and left security teams searching for a better alternative. In 2023, we’ll see an emphasis on securing accounts with as many other safeguards as possible, including stronger passwords and password managers.

Prediction #5: Skepticism around blockchain security and availability will continue without more caution.

2023 will be another tumultuous year for blockchain technologies unless the industry shifts away from “point-in-time” security measures. Currently, too much trust is placed in code being perfect.

Blockchain security teams must layer in more robust controls, including detection and response capabilities, to deter threat actors. The billions of dollars of bridge hacks that occurred in 2022 put a huge dent in users’ confidence in blockchain security.

Luckily, blockchain enterprises and projects are aware that customers are just as concerned about their chosen blockchain’s security as its features. This will lead blockchains to apportion the appropriate resources to improve security in 2023.

In addition to cryptocurrency theft, blockchain availability and stability should be a priority in 2023. If outages and slowdowns continue, blockchains face user decline or even complete collapse.

Learn more about Kudelski Security’s portfolio of blockchain security services.

Prediction #6: Companies concerned about quantum computing should begin assessing risks now.

Controls to prepare for quantum computing are unlikely to see mass adoption in 2023, but keep an eye on them for 2024. The current risks of quantum computing don’t yet outweigh the considerable investment required. That said, companies that stand to lose the most from future quantum attacks (e.g., financial services, defense contractors, and companies that transmit extremely sensitive data) should begin assessing their risks now.

Are you ready for the era of quantum computing? Watch our webinar to learn how to be better prepared.

What Impact Will the Recession Have on Security Teams in 2023?

The recession should have relatively little impact on security teams in 2023. We predict security teams are going to remain mostly untouched even as companies across industries are forced to make cuts to their budgets and workforce in response to the upcoming recession.

American privacy laws will likely rise to meet current European standards, putting a renewed focus on security and compliance in boardrooms and C-suites.

Additionally, cybersecurity labeling for consumer products, especially on hardware, will further the importance of corporate security teams. Economic hardships will necessitate that security teams work smarter and consolidate to meet the evolving economic and tech landscape.

What Should Security Leaders Prioritize in 2023?

In response to these top cybersecurity predictions for 2023, security leaders should prioritize the following initiatives:

  • Adopting an offensive security posture rather than a defensive one.
  • Focusing detection and response initiatives on preventive features instead of reactive fixes.
  • Phasing out VPN in favor of zero trust strategies for the remote workforce.
  • Building out a strong vendor risk management program to protect against third-party risk.
  • Looking for alternatives to MFA while implementing stronger password requirements and account protections.
  • Working smarter and consolidating to meet the evolving economic and tech landscape.
  • Bolstering availability and security of blockchain-related services.
  • Assessing risks related to quantum computing, especially for those in financial services, defense, or other industries that deal with highly sensitive data.

Get in Touch

Kudelski Security can help you prepare for 2023 and beyond with a comprehensive suite of security advisory services. From MDR and zero trust to blockchain and quantum, our experts can assess, design, implement and manage a resilient cybersecurity strategy. Get in touch with our team here.

Quantum Computing And The New IT Revolution

Quantum Computing is going to be the next IT revolution, and it’s going to be disruptive. These futuristic machines are being built right now, and once mature they will be able to crack open the most secure cryptographic codes used today. Nothing is going to be the same: blockchain, e-commerce, cloud, military communications, even AI… everything is going to be redesigned. Time is ticking.

50 years ago it was 1969, the year man first set foot on the Moon. This incredible feat was made possible by the huge technological advancements of that period, a scientific golden age fueled partially by the rapid post-war economic growth of the Western world, but also partially by the unstable global political situation – which sounds quite familiar today. NASA, the agency that accomplished the legendary milestone, has historically been one of the organizations in most serious need of computational power. The most powerful supercomputer of that time, the IBM System/360 Model 91, was the cornerstone that made the whole Apollo program possible: it was a masterwork of technology, incredibly powerful, and so expensive that only a few institutions in the world could afford one (fewer than 20 units were built overall). It offered up to 6 MiB of central memory, drew on the order of hundreds of kilowatts of power, and it was probably smaller than your kitchen. Probably.


Today we carry in our pockets hand-held devices that are many orders of magnitude more powerful than such ancient computing behemoths. In our fast-moving times, 50 years can change many things, especially from the IT point of view. It is therefore natural to look back at history and compare that situation with today’s bleeding-edge technology: quantum computing. This is not just a tech buzzword: quantum computers already exist today. Granted, they are just prototypes. Just like their classical silicon ancestors, they are impressively large, expensive, complicated machines that are not yet very powerful; only a handful of them exist, developed and tested by a few large organizations and companies.

However, these machines can hardly be considered embryonic anymore: the last few years have witnessed a true explosion in quantum information processing R&D, and we all know where this leads if the historical trend is any guide. Things that at the beginning of this century were still considered science fiction are now at hand: today it is possible to send and process quantum information over distances of hundreds of km, to detect and correct many quantum transmission errors, and to build machines with almost a hundred quantum bits of memory – whereas it was formerly challenging just to build a single one. Given that, according to many researchers, 100 quantum bits is the boundary beyond which we should start observing “quantum supremacy” (that is, the ability to tackle problems practically impossible for the most powerful supercomputers), it feels like we probably won’t need to wait another 50 years to witness what is arguably going to be the next big IT revolution.

It is, therefore, time to start looking at the future and understand what the potential consequences of all this could be. In this article, we are going to have a look at what a quantum computer is, why it is important, and what it means for a future-oriented CISO strategy.

What Is A Quantum Computer?

The idea of a quantum computer (QC) was first theorized by the famous physicist Richard Feynman in 1981. He was looking at the problem of predicting and explaining the behavior of certain physical systems, a task that classical computers are usually very good at: think, for example, of computing an artillery shell’s trajectory, which was (sadly) one of the first uses of computing machines. Now, this works pretty well for large systems, which are ruled by the Newtonian laws of physics, but as the scale of the system shrinks down to microscopic levels, these laws stop working. The phenomenon, known to physicists since the end of the 19th century, arises because at such scales Newtonian physics can no longer correctly approximate the behavior of Nature, which is better described by the (far more complex) laws of Quantum Mechanics (QM).

QM is a scientific theory, like Einstein’s General Relativity; however, whereas the latter describes systems that are astronomically large, QM describes systems at the nanometer scale (one-millionth of a millimeter and smaller). This is the scale where, e.g., chemical reactions happen, and QM has therefore proved to be an indispensable tool for the development of the materials and engineering technologies that we use today. We are talking here about very, very concrete problems such as “what is the best chemical catalyst for improving the industrial production of ammonia?” or “what is the best material for improving a smartphone’s battery life?”. Unfortunately, even with the help of QM, the computation necessary to solve some of these problems is so complicated that all the supercomputers in the world combined would take far too long (scientists talk about “many times the age of the universe”). This is where Feynman had his original idea: since Nature at that scale behaves quantum mechanically, why not build a computing machine that inherently follows the same quantum laws in order to solve these problems?


This original idea was later developed, and soon a theoretical model of the QC was born. In its most general form, a QC is a machine able to perform computation (that is: entering an input, performing some transformation, and extracting an output) on quantum data. Quantum data is stored on some (usually microscopic) physical system and manipulated through physical transformations controlled by the user. Quantum data (usually represented as the quantum state of elementary systems called “quantum bits”, or “qubits”) encodes quantum information. This can encompass classical information (bits), but it also includes “exotic” types of information which behave very differently from the intuitive way we think about computer data. For example, certain quantum information is physically impossible to copy (a very interesting consequence of the laws of QM known as the “No-Cloning Theorem”). Whereas a bit can be in either the 0 or the 1 state, a single qubit can be in a state which is a continuum between the two (this is called “superposition”, and it is a bit like saying that a QC can represent the full spectrum of colors, whereas a classical computer can only represent the two colors black and white). Moreover, certain quantum systems appear to influence each other instantaneously, even at astronomical distances (this phenomenon is a consequence of so-called “entanglement”, but contrary to common misconception it does not allow information to be communicated faster than light under any circumstances… another counterintuitive fact of QM!).

In the most common conception, a QC works like this: first, start with a (classical) input to a certain problem that you want to solve (for example, a number you want to compute the square root of). Then, encode your classical input into a quantum memory register (that is, a physical system composed of qubits). The physical nature of the register and the exact procedure for encoding classical information into it vary depending on the specific engineering technology used – there are currently several competing alternatives. Usually, it involves starting from a “blank state” for the system (think of an empty tape) and bringing it to the desired input state. At this point, the QC performs the computation proper: a transformation on the memory register, usually further decomposed into a sequence of elementary quantum operations called quantum gates, implemented by different means according to the technology used (laser beams, modulated microwave pulses, etc.). At the end of the sequence, whose length depends on the problem and the size of the input, a final state is obtained on the quantum register. The last step involves extracting a classical output (in our example, the square root of the input) from the register, a procedure called “measurement”. This last part is the most interesting one because a QC is inherently probabilistic: because of the superposition principle, at the end of the computation the quantum register will be in a “mix” of different possible classical answers to the problem. Usually, just one of these answers is the right one (in our case, the square root of the number) while the others are… garbage! The trick here (and the real challenge for the scientists who design quantum computer programs, or “quantum algorithms”) is to make sure that, during the computation, wrong answers to the problem “cancel each other out” (another QM phenomenon called “interference”, similar to the canceling effect that two sea waves have on each other when they cross in a certain way), while the right answer is reinforced. In this way, the final state of the register will encode the right answer to the problem “with high probability”, and the measurement will return that answer instead of the garbage.
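
To make the encode / transform / measure cycle above concrete, here is a minimal toy sketch: a single-qubit state-vector simulation in plain Python with numpy (our own illustration, no quantum hardware or vendor SDK involved). It shows a superposition created by a Hadamard gate, a probabilistic measurement, and the interference effect that quantum algorithms exploit to make wrong answers cancel out.

```python
# A toy state-vector simulation of the encode -> transform -> measure cycle.
# Plain numpy only; this is an illustration, not a real quantum device.
import numpy as np

# Step 1: encode a classical input into a one-qubit register, here |0>.
state = np.array([1.0, 0.0], dtype=complex)      # amplitudes of |0> and |1>

# Step 2: apply a quantum gate. The Hadamard gate puts the qubit into an
# equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

# Step 3: "measure". The outcome is probabilistic: the chance of reading 0
# or 1 is the squared magnitude of the corresponding amplitude.
probabilities = np.abs(state) ** 2
outcome = np.random.choice([0, 1], p=probabilities)
print("probabilities:", probabilities)           # [0.5 0.5]
print("measured:", outcome)                      # 0 or 1, each 50% of the time

# Interference: applying H twice brings the qubit back to |0> with certainty,
# because the two computational paths leading to |1> cancel each other out.
# This cancellation is the same mechanism quantum algorithms use to suppress
# the "garbage" answers mentioned above.
state2 = H @ (H @ np.array([1.0, 0.0], dtype=complex))
print(np.abs(state2) ** 2)                       # [1. 0.]
```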

Using this approach it is possible to perform any type of computation that a classical computer can do, albeit in a more complicated way. For this reason, experts envision that QCs will not be “standalone machines” in the foreseeable future, but rather special hardware add-ons, or coprocessors, integrated side-by-side with classical computers. Unfortunately, whereas the theory behind QC works flawlessly, in practice these machines are extremely hard to build because of the huge engineering effort required. Since the physical systems required to realize qubits are so tiny and delicate, even minimal environmental noise or perturbation tends to interfere with the computation and cause irrecoverable errors. This phenomenon, known as “decoherence”, is the main reason why QCs are still not fully usable in practice today. To avoid decoherence, engineers have to devise special containment methods able to shield the quantum memory from heat, vibrations, electromagnetic interference, and so on. This usually involves enclosing the quantum computing medium inside a vacuum chamber, which is also an electromagnetic insulator, which is also cooled down to nearly minus 273 degrees Celsius using liquid helium and extremely expensive, bulky cryostat machines. And even all this only allows resistance to decoherence for a few milliseconds at best!

Courtesy of IBM: https://www.research.ibm.com/ibm-q/


Sounds familiar, doesn’t it?


As for the physical technologies used to realize qubits today, there are too many to discuss here. One of the most promising so far is called “superconducting qubits”, where quantum data is encoded in tiny junctions between elements of a silicon microchip and manipulated through the careful use of microwaves. This is the approach currently used by some of the biggest players in the race to build a fully-fledged QC, but even with this technology we can today only realize a few tens of qubits and compute on them for a few milliseconds. Not really useful yet.

Then Why Is All This Interesting At All?

So far building a QC sounds like a lot of pain for no gain. That’s because we haven’t discussed the computational speedup aspect yet. Before doing that, though, a couple of necessary clarifications.

First of all, a QC is not an all-powerful machine: technically speaking, a QC is not believed to be able to efficiently solve NP-complete problems, any more than a classical computer can. This means, informally, that there exist “natural” problems which are probably too difficult to solve even for a QC. Second, it is not known whether a QC can achieve more than a polynomial-time speedup over classical computers. Informally speaking, this means that there is currently no proof that a QC can achieve more than a “modest” (in theoretical terms) speedup over classical computers for any real problem at all. In fact, there are only a handful of candidate problems which are suspected (but not proven) to be intrinsically hard for a classical computer yet easily solvable by a QC. So, in theory, the alleged quantum superiority over classical computing is rather thin.

Despite all this, a QC can do amazing feats that are just unthinkable with a classical computer. The reason is twofold. On one hand, “modest in theoretical terms” translates into a huge impact in practice for certain applications, and many of them are very, very important. Think of Grover’s algorithm, one of the most well-known quantum algorithms, and its consequences. Suppose you have a deck of 100 cards, where one is an ace and the other 99 are non-aces. The deck is shuffled and then handed over to you, and your goal is to find the ace with as few draws as possible. It is easy to calculate the odds classically: one can show that it will take roughly 50 draws on average to find the ace. However, if you can represent the deck on a QC and run Grover’s algorithm, one can show that only roughly 10 “quantum draws” are sufficient to find the ace – a quadratic speedup over the classical case. This might sound completely silly, counterintuitive, and ultimately not very useful at all. But what if instead of 100 cards you have several billion of them? What if, instead of cards, you are searching for a particular protein-encoding gene sequence in a DNA strand? What if you are searching for the best catalyst for an industrial chemical reaction, or the best logistic scheduling for a fleet of delivery trucks over an infinity of possible routes? These kinds of problems are called “optimization problems”. They are very general, found in everyday life, and very, very difficult to solve when the size of the problem grows too much. In these cases, even “just” a quadratic speedup makes the difference between hundreds of years and a handful of seconds of computation time needed to find a solution. Arguably, the most important civilian application of quantum computers will be efficiently finding solutions to such problems, with a tremendous impact on society: new drug treatments, better energy savings, smoother traffic flow, and much more. Notable areas of interest (which are driving a substantial part of the investment in the realization of a QC) are better and faster AI, genome sequencing, and financial market predictions. The applications are countless, and for this reason alone it is not surprising how much quantum computing R&D has grown in the last few years.
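
To put rough numbers on the card-search example, the sketch below compares the expected number of classical draws (about N/2) with the approximate number of Grover iterations, (pi/4) * sqrt(N), as the search space grows. For N = 100 it lands near the “roughly 10 quantum draws” figure quoted above (the exact value is about 8). This is back-of-the-envelope arithmetic, not a quantum simulation.

```python
import math

# Rough comparison: average classical draws (~N/2) versus the number of
# Grover iterations a quantum computer would need (~(pi/4) * sqrt(N)).
for n in (100, 10**6, 10**12):
    classical = n / 2
    grover = math.pi / 4 * math.sqrt(n)
    print(f"N = {n:>15,}: ~{classical:>16,.0f} classical draws "
          f"vs ~{grover:>12,.0f} Grover iterations")
```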

But there is another interesting reason, especially for a CISO. We mentioned before that there are only a handful of problems currently known to be easily solvable on a QC but believed to be really hard (even from a theoretical point of view) for classical computers. Well, it turns out that, coincidentally, these few, particular problems are extremely important: they are at the core of basically every cryptographic security solution in use today! Cryptosystems such as RSA and elliptic curves are ubiquitous in our digital society, used in every commercial and non-commercial security solution, and the whole Internet (and any communication network younger than the telegraph) would literally be unthinkable without these mathematical constructs. The point is, these algorithms are only secure because no single genius or research group, even after hundreds of years of research, has found an efficient way to solve the underlying mathematical problems: the cryptosystems work because breaking them would require an adversary to solve one of these monstrously hard problems, which is impossible today even with the fastest classical supercomputer. And this is where quantum computers come into play! All these cryptosystems were developed many years ago, at a time when QCs were still mostly unheard of. Since 1994, with the discovery of Shor’s quantum algorithm, we have known that the vast majority of the cryptographic protocols in use today would be vulnerable as soon as a working QC is built. And this is the reason why, after 1994, research in quantum computing has always had military and intelligence organizations as generous sponsors. There is an ongoing arms race to be the first superpower able to use quantum computing to break the security of adversarial actors’ communication networks. This would have potentially devastating consequences, as the history of the Enigma cryptographic machines during WW2 teaches us.
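
To give a flavor of why Shor’s algorithm threatens RSA: its quantum core finds the period (multiplicative order) of a number modulo N exponentially faster than any known classical method, and a purely classical post-processing step then turns that period into factors of N. The toy sketch below is our own illustration, not the actual quantum algorithm; it brute-forces the period classically on a tiny number, standing in for the step a QC would accelerate, and the function names are ours.

```python
import math

def order(a, n):
    """Brute-force the multiplicative order of a modulo n.
    (This is the step a quantum computer would perform exponentially faster.)"""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Classical post-processing of Shor's algorithm: turn the period of
    a mod n into a non-trivial factor of n (when the period cooperates)."""
    if math.gcd(a, n) != 1:
        return math.gcd(a, n)          # lucky guess: a already shares a factor with n
    r = order(a, n)
    if r % 2 != 0:
        return None                    # odd period: try another a
    candidate = pow(a, r // 2, n)
    if candidate == n - 1:
        return None                    # trivial case: try another a
    return math.gcd(candidate + 1, n)

# Toy example: factor 15 using the period of 7 mod 15 (the period is 4).
print(shor_classical_part(15, 7))      # prints 5, so 15 = 5 * 3
```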

The truth is, the world is not yet ready for quantum computers. If tomorrow someone were to suddenly succeed in building one, the consequences would probably be devastating. Admittedly, such a feat is unlikely to happen literally tomorrow. However, given the impressive acceleration of technological progress toward a working QC, it is probably time to start worrying. And, in fact, people really are.

How Near Is The Quantum Apocalypse?

The short answer is: nobody knows. Quantum computers used to be the typical technology said to be “perpetually 20 years away”, but recent breakthroughs and the accelerating pace of technological progress suggest that this may no longer be the case. According to some of the major experts in the field:

  • Michele Mosca [Oxford, 1996]: “20 qubits in 20 years” <- this really happened
  • Microsoft Research [October 2015]: “Recent improvements in control of quantum systems make it seem feasible to finally build a quantum computer within a decade”
  • Michele Mosca ([NIST, April 2015], [ISACA, September 2015]): “1/7 chance of breaking RSA-2048 by 2026, 1/2 chance by 2031”
  • Michele Mosca [London, September 2017]: (updating a previous prediction) “1/6 chance within 10 years”
  • Simon Benjamin [London, September 2017]: speculates that if someone is willing to “go Manhattan project” then “maybe 6-12 years”

On the other hand, some experts are skeptical that a fully working QC can ever be built at all. The reason, they say, is that the big technological challenges met so far might eventually prove impossible to overcome, because there might be some inherent physical reason, or yet unknown fundamental law, that makes it impossible to scale the engineering beyond a small number of qubits. It is worth pointing out that nothing in the current scientific consensus supports this theory, which is a minority point of view among physicists. For example, the world-renowned expert Scott Aaronson offers a 100k USD prize to whoever can scientifically prove that building a QC is impossible. Nobody has claimed the prize so far.

Furthermore, the problem of decoherence might not be impossible to overcome. Quantum error-correcting codes, or QECCs, are a special way to encode quantum information by employing many physical qubits “grouped together” to represent a single “logical” one, in such a way that the whole system is “self-correcting”. That is, it constantly checks and repairs itself over time, thus overcoming the problem of decoherence. The number of physical qubits needed to represent a single logical qubit is currently on the order of thousands – so, not too far from the current state of the art already. Moreover, this number will probably shrink in the near future, as better QECCs are discovered and “less noisy” physical qubits are realized. As soon as a QECC is implemented, it might substantially boost the scaling capability of a QC, making it possible to run quantum algorithms with thousands, or even millions, of logical qubits. This would really be a tipping point in QC technology, and we might not be that far away.
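
The intuition of “many noisy physical units encode one reliable logical unit” can be illustrated with a classical analogy: a 3-bit repetition code decoded by majority vote. Real quantum codes are considerably more subtle (qubits cannot simply be copied, and both bit-flip and phase-flip errors must be handled), so treat the sketch below purely as an intuition pump of our own making.

```python
import random

def encode(bit):
    # One logical bit becomes three physical bits.
    return [bit, bit, bit]

def noisy_channel(bits, flip_prob=0.1):
    # Each physical bit is flipped independently with probability flip_prob.
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    # Majority vote recovers the logical bit unless 2+ physical bits flipped.
    return 1 if sum(bits) >= 2 else 0

trials = 100_000
errors = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
# With a 10% physical error rate, the logical error rate drops to roughly
# 3 * 0.1^2 * 0.9 + 0.1^3, i.e. about 2.8%: redundancy buys reliability.
print(f"logical error rate: {errors / trials:.3%}")
```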

What Is The State Of The Art?

Currently, entities in both the private and public sectors are engaged in the race to build a QC. Next to startups like Rigetti, we find big players like Intel and Microsoft, each with their own competing R&D programs. Last year Google announced its Bristlecone architecture, potentially able to scale up to 72 qubits. Also last year, IBM announced a stable 50-qubit prototype, and it recently started offering (through the IBM Q cloud infrastructure) commercial access to a 20-qubit QC and free access to a 16-qubit QC for testing and training purposes. Other companies like D-Wave use a different approach: they provide quantum-like machines with thousands of qubits but weaker quantum computing capabilities, interesting only for very specific applications. Universities and research institutes across the globe are more focused on advancing the efficiency of the underlying technologies and quantum algorithms than on providing a commercially working machine. We do not know what military and intelligence organizations are up to at the moment, but we know for sure (also from famous leaked material now publicly accessible online) that the big superpowers started their own quantum computing programs several years ago, in both the Western and the Eastern world.

In short, things are moving fast, and, as we previously discussed, this is very bad from the point of view of IT security. Luckily, there are solutions which do not require waiting for the “Crypto Doomsday” before taking action. Quantum-resistant, or quantum-safe, cryptography is classical cryptography (it runs on today’s computers) that is specifically designed to resist future quantum computers. It is sometimes also called “post-quantum cryptography”, but this naming is misleading – we will cover the difference in a future blog post. We have already discussed the existence of mathematical problems so hard that not even a QC is believed to be able to solve them efficiently. One can use those problems and other security techniques to build cryptographic schemes (public-key encryption, digital signatures, zero-knowledge proofs, etc.) which will likely remain unbreakable even for a QC. These schemes have been studied only relatively recently, but they have now reached a point of maturity and are ready to be standardized and deployed. And this is happening right now. In 2016, NIST launched its Post-Quantum Cryptography Standardization process. In a similar fashion to the standardization of cryptographic tools such as AES and SHA-3, NIST solicited researchers and scientific institutions to submit proposals for quantum-safe cryptography that could replace schemes such as RSA, DSA, and elliptic curves. These new algorithms are currently being tested and evaluated, and one or more of them will be selected as new cryptographic standards. One of the proposed schemes, Gravity-SPHINCS, was submitted to NIST by a team of researchers including Kudelski Security’s VP of Technology, J.P. Aumasson.
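
Many quantum-safe signature candidates, including the SPHINCS family to which Gravity-SPHINCS belongs, are built from hash functions, which quantum computers are believed to weaken only modestly. As a purely illustrative sketch of that idea, here is a minimal Lamport one-time signature, the classic hash-based construction these schemes conceptually build on. It is not the Gravity-SPHINCS algorithm itself, and each key pair must only ever sign a single message.

```python
import hashlib, os

# Minimal Lamport one-time signature: security rests only on the hash function.
H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(digest):
    # The 256 bits of a SHA-256 digest, one per key pair.
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(message, sk):
    # Reveal one secret from each pair, selected by the message-digest bits.
    return [sk[i][b] for i, b in enumerate(bits(H(message)))]

def verify(message, signature, pk):
    return all(H(s) == pk[i][b]
               for i, (s, b) in enumerate(zip(signature, bits(H(message)))))

sk, pk = keygen()
sig = sign(b"quantum-safe hello", sk)
print(verify(b"quantum-safe hello", sig, pk))   # True
print(verify(b"tampered message", sig, pk))     # False
```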

Unfortunately, the process of standardizing cryptography is very, very slow. NIST is aiming for a final draft by the mid-2020s, and we know from previous crypto standardization experience that building trust in, adopting, and deploying a cryptographic standard is a lengthy process as well. This means that, even in the mildly optimistic scenario that nobody builds a QC for the next 20 years, it is time to start addressing the problem from a security perspective now, and for certain applications it might even be too late already. In fact, certain data is so sensitive that it needs to retain its confidentiality or integrity for decades, or even hundreds of years. Medical records, genetic data, and financial records need to be safeguarded for a period of time that often exceeds the owner’s lifetime – and sometimes this is even mandated by law. Currently, essentially all civilian, military, and diplomatic communication is unprotected against a QC. We know that powerful actors have already spotted the opportunity: years ago they started to intercept and harvest encrypted data into large “cold storage” data banks. They cannot break into those secrets right now, but one day, if quantum computers become available, they might; they just need to wait. This means, for example, that if you submit a genetic test online today, even if it is protected with the most secure cryptographic solutions currently in use, its confidentiality cannot be ensured for your children or grandchildren.

What Should A Modern CISO Do About Quantum Computing?

Start planning the transition to a quantum world now. This is especially true if you are responsible for the integrity and confidentiality of long-term valuable data. You should keep advancements in quantum technology on your radar and start preparing now for the migration to quantum-safe cryptography and quantum-resistant security solutions.

The NIST competition is a good place to start: the first-round evaluation has just ended, and the algorithms selected for the second round will already offer strong quantum-security guarantees. We are going to cover quantum-resistant cryptography in detail in a future blog post, but one word of advice we can already give is: whenever possible, rely on symmetric-key cryptography rather than public-key cryptography, because the former is generally considered more resistant to a QC than the latter. This will change as trust in the quantum-safe cryptographic standards currently under evaluation is built over time, so it’s worth keeping an eye on that. Unfortunately, quantum-safe cryptography is far more complicated than the traditional public-key schemes we are used to, like RSA and elliptic curves. The usual advice not to roll your own cryptographic schemes is more relevant here than ever. It is not important for now to understand how these schemes work, but be prepared for the possibility that the schemes deployed in today’s security solutions might soon become insecure and need a drop-in replacement, which might have some drawbacks, e.g., in terms of performance.
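
As a concrete illustration of the symmetric-key advice above: against Grover-style attacks, a 256-bit key still provides roughly 128 bits of effective security, which is why AES-256 is the usual recommendation for long-lived data. Below is a minimal sketch assuming the third-party Python cryptography package is installed; it illustrates the principle only and is not a complete key-management solution.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# AES-256-GCM: a symmetric, authenticated cipher whose 256-bit key keeps a
# comfortable security margin even against a quantum adversary running Grover.
key = AESGCM.generate_key(bit_length=256)   # keep this key secret and backed up
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # never reuse a nonce with the same key
ciphertext = aesgcm.encrypt(nonce, b"record to keep confidential for decades", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
print(plaintext)
```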

In any case, it is crucial for a farsighted CISO to start building quantum technology know-how inside their organization. After cryptography engineers, machine learning experts, and data scientists, quantum IT specialists will probably be the next big shortage on the talent market, and if the trend continues it will be a severe one, because these competencies are really hard to acquire. It is important to understand the risks and the challenges, and for this it is important to understand the technology. It is a good idea to invest in education and training, both in the area of quantum-safe security and in the area of quantum algorithms, both of which might prove crucial assets in the future.

Finally, remember that quantum technology is definitely disruptive, but also opens up a lot of new opportunities, especially in IT security. It feels like we are again at the beginning of a new exciting era of computing revolution, and the positive applications will undoubtedly surpass the negative ones. Being able to harness the power of quantum computing would prove to be an invaluable tool for threat detection, analysis, and countermeasures, and the possibility of using quantum computers in a “defensive” way (for example by encrypting quantum data, or performing quantum key distribution) has the potential to be a real game-changer in the field of cybersecurity.

In short: “Nature is quantum, dammit”, and cybersecurity is going to be, too. Get ready for that.