Quantum Computing is going to be the next IT revolution, and it is going to be disruptive. At Kudelski Security, our focus is on helping clients prepare for the new era of quantum computing. These futuristic machines are being built right now, and once mature they will be able to crack open the most secure cryptographic codes used today. Nothing is going to be the same: blockchain, e-commerce, cloud, military communications, even AI… everything is going to be redesigned. The clock is ticking.

How we got here

In 1969, more than 50 years ago, man first set foot on the moon. This incredible feat was made possible by the huge technological advancements witnessed in that period. It was a scientific golden age, fueled partially by the rapid post-war economic growth of the Western world but also partially by the unstable global political situation – which sounds quite familiar today.

NASA, the space agency that accomplished this legendary milestone, has historically been one of the organizations most in need of computational power. One of the most powerful supercomputers of its time, the IBM System/360 Model 91, was a cornerstone that made the whole Apollo program possible.

It was a masterwork of technology, incredibly powerful, and so expensive that only a few institutions in the world could afford one (fewer than 20 units were ever built). It could boast up to 6 MiB of central memory, drew on the order of hundreds of kW of power, and could maybe fit in your kitchen. Maybe.

Today we carry in our pockets hand-held devices which are many orders of magnitude more powerful than such ancient computing behemoths. In our fast times, 50 years can change many things, especially from the IT point of view. It is therefore natural to look back at history and compare the situation with today’s bleeding edge technology: quantum computing.

This is not just a tech buzzword. Quantum computers (QC) already exist today. Granted, they are just prototypes. Just like their classical silicon ancestors, they are impressively large, expensive, complicated machines that are not yet very powerful; only a handful of them exist, developed and tested by a few large organizations and companies.

However, these machines can hardly be considered at an embryonic stage anymore. The last few years have witnessed a true explosion of R&D in quantum information processing, and we all know where this leads if the historical trend is any indication. Things that at the beginning of this century were still considered science fiction are now at hand.

Today it is possible to send and process quantum information over distances of hundreds of km, to detect and correct many quantum transmission errors, and to build machines with almost a hundred quantum bits of memory – whereas it was formerly challenging just to build a single one.

Given that 100 quantum bits is roughly the threshold beyond which we should start observing “quantum supremacy” (that is, the ability to tackle problems that are practically impossible for the most powerful classical supercomputers), it feels like we probably won’t need to wait another 50 years to witness what is arguably going to be the next big IT revolution.

It is, therefore, time to start looking at the future and understand what the potential consequences of all this could be.

What is quantum computing?

The idea of quantum computing was first theorized by famous physicist Richard Feynman in 1981. He was looking at the problem of predicting and explaining the behavior of certain physical systems, a task which classical computers are usually very good at. Think for example of the problem of computing an artillery shell trajectory, which was (sadly) one of the first uses of computing machines.

Now, this works pretty well for large systems, which are ruled by the Newtonian laws of physics, but the issue is that as the scale of the system shrinks down to microscopic levels, these laws stop working. This phenomenon, known to physicists since the end of the 19th century, is due to the fact that at such scale Newtonian physics is unable to correctly approximate the behavior of nature, which is better described by the (far more complex) laws of quantum mechanics (QM).

Quantum mechanics is a scientific theory, like Einstein’s General Relativity. However, whereas the latter is focused on describing systems that are astronomically large, quantum mechanics intends to describe systems at the nanometer scale (one-millionth of a millimeter and smaller).

This is the scale where, e.g., chemical reactions happen, and therefore quantum mechanics proved to be an indispensable tool for the development of material and engineering technologies that we use today. We are talking here about very, very concrete problems such as “what is the best chemical catalyst for improving the industrial production of ammonia?” or “what is the best material for improving a smartphone’s battery life?”.

Unfortunately, even with the help of quantum mechanics, the computation necessary to solve some of these problems is so complicated that even all the supercomputers in the world combined together would take too long (scientists talk about “many times the age of the universe”). This is where Feynman had the original idea. Since nature at that scale behaves quantumly, what about building a computing machine that inherently follows the same quantum laws in order to solve these problems?

This original idea was later developed, and soon a theoretical model of quantum computing was born. In the most general form, a quantum computer is a machine able to perform computation (i.e., entering an input, performing some transformation, and extracting an output) on quantum data.

Quantum data is stored on some (usually microscopic) physical system and manipulated through physical transformations controlled by the user. Quantum data (usually represented as the quantum state of elementary systems called “quantum bits”, or “qubits”) encodes quantum information. This can encompass classical information (bits), but it also includes “exotic” types of information that behave very differently from the intuitive way we think about computer data. For example, certain quantum information is physically impossible to copy.

This impossibility is a consequence of a very interesting law of quantum mechanics called the “No-Cloning Theorem”. Another example: whereas a bit can only be in the 0 or 1 state, a single qubit can be in a state that is a continuum between the two. This is called superposition, and it is a bit like saying that a quantum computer can represent the full spectrum of colors, whereas a classical computer can only represent black and white.
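
To make superposition slightly more concrete, here is a minimal, purely illustrative sketch in Python with numpy (not an actual quantum device): a single qubit's state is described by a pair of complex amplitudes, and the squared magnitudes give the probabilities of reading 0 or 1 upon measurement.

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit's state is a pair of complex
# "amplitudes" (a, b) with |a|^2 + |b|^2 = 1: |a|^2 is the probability of
# reading 0 when the qubit is measured, |b|^2 the probability of reading 1.

ket0 = np.array([1, 0], dtype=complex)   # the classical-like state |0>
ket1 = np.array([0, 1], dtype=complex)   # the classical-like state |1>
plus = (ket0 + ket1) / np.sqrt(2)        # an equal superposition of both

def measurement_probabilities(state):
    """Born rule: the probability of each outcome is the squared magnitude."""
    return np.abs(state) ** 2

print(measurement_probabilities(ket0))   # [1. 0.]   -> always reads 0
print(measurement_probabilities(plus))   # [0.5 0.5] -> reads 0 or 1 with equal odds
```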

Moreover, certain quantum systems can influence each other with random, correlated perturbations, even at astronomical distances and seemingly faster than light. This phenomenon is a consequence of so-called entanglement but, contrary to common misconception, it does not under any circumstances allow information to be communicated faster than light… another counterintuitive fact of quantum mechanics.

How does quantum computing work?

In the most common conception, a quantum computer works like this:

  1. Start with a classical input to a certain problem that you want to solve. For example, a number you want to compute the square root of.
  2. Then, encode your classical input into a quantum memory register—that is, a physical system composed of qubits. The physical nature of the register and the exact procedure for encoding classical information into it vary depending on the specific engineering technology used. There are currently different competing alternatives. Usually, it involves starting from a “blank state” for the system (think of an empty tape) and bringing it to the desired input state.
  3. At this point, the quantum computer performs the computation proper: a transformation on the memory register, usually decomposed into a sequence of elementary quantum operations called quantum gates, which are implemented by different means according to the technology used (laser beams, modulated microwave pulses, etc.).
  4. At the end of the sequence, which can be more or less long depending on the problem and the size of the input, a final state is obtained on the quantum register.
  5. The last step involves extracting a classical output (in our example, the square root of the input) from the register, a procedure called “measurement”.

This last part is the most interesting one because a quantum computer is inherently probabilistic. Because of the superposition principle, at the end of the computation, the quantum register will be in a “mix” of different possible classical answers to the problem. Usually, just one of these answers is the right one (in our case, the square root of the number) while the others are… garbage!

The trick here (and the real challenge for the scientists who design quantum computer programs, or “quantum algorithms”) is to make sure that during the computation the wrong answers to the problem “cancel each other out”, while the right answer is “strengthened”. This is another quantum mechanical phenomenon, called interference.

It is similar to the canceling (or reinforcing) effect that two sea waves have on each other when they cross in a certain way. In this way, the final state of the register will encode the right answer to the problem with high probability, and the measurement will return that answer instead of the garbage.
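
For readers who like to see things in code, here is a minimal, illustrative sketch (plain Python and numpy, simulating the math rather than running on real quantum hardware) of interference at work: applying the Hadamard gate once puts a qubit into an equal superposition, and applying it a second time makes the amplitudes of the "wrong" outcome cancel out, leaving the "right" outcome with certainty.

```python
import numpy as np

# The Hadamard gate H puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = np.array([1, 0], dtype=complex)   # start in |0>

after_one = H @ state       # equal superposition: a measurement gives 0 or 1 at random
after_two = H @ after_one   # the two paths leading to |1> cancel out (destructive
                            # interference) while those leading to |0> add up

print(np.abs(after_one) ** 2)   # [0.5 0.5]
print(np.abs(after_two) ** 2)   # [1. 0.]  -> the "right" answer is measured with certainty
```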

By using this approach, it is possible to perform any type of computation that a classical computer can do, albeit in a more complicated way. For this reason, experts envision that quantum computers will not be standalone machines in the foreseeable future, but rather something akin to specialized hardware add-ons, or coprocessors, integrated side-by-side with classical computers.

Unfortunately, whereas the theory behind quantum computing works flawlessly, in practice these machines are extremely hard to build because of the huge engineering effort required. Since the physical systems required to realize qubits are so tiny and delicate, any minimal environmental noise or perturbation tends to interfere with the computation and cause irrecoverable errors. This phenomenon, known as decoherence, is the main reason why quantum computers are still not fully usable in practice today.

To avoid decoherence, engineers have to devise special containment methods to shield the quantum memory from heat, vibrations, electromagnetic interference, and so on. This usually involves enclosing the quantum computing medium inside a vacuum chamber that also acts as an electromagnetic insulator and is cooled down to nearly minus 273 degrees Celsius using liquid helium and super expensive, bulky cryostat machines. And even all this only buys resistance to decoherence for a few milliseconds at best! Sounds familiar, doesn’t it?

Courtesy of IBM: https://www.research.ibm.com/ibm-q/

 

Regarding the realization technologies existing today, there are really too many of them to discuss here. One of the most promising so far is called superconducting qubits, where quantum data is encoded in tiny superconducting junctions on a microchip and manipulated through the careful use of microwave pulses.

This is the approach currently used by some of the biggest players in the race toward a fully-fledged quantum computer, but even with this technology we can only realize a few tens of qubits today and compute on them for a few milliseconds. Not really useful yet.

Why quantum computing matters

So far building a quantum computer sounds like a lot of pain for no gain. That’s because we haven’t discussed the computational speedup aspect yet. Before doing that, though, a couple of necessary clarifications.

First of all, a quantum computer is not an all-powerful machine. Technically speaking, it is not believed that a quantum computer can efficiently solve NP-complete problems, any more than a classical computer can. This means, informally, that there exist “natural” problems which are probably too difficult to solve even for a quantum computer.

Second, it has not been proven that a QC can achieve more than a polynomial speedup over classical computers. Informally speaking, this means that there is currently no proof that a quantum computer can achieve more than a modest (in theoretical terms) speedup over classical computers for any real problem at all.

In fact, there are only a handful of candidate problems which are suspected (but not proven) to be intrinsically hard for a classical computer yet easily solvable by a quantum computer. So, in theory, the alleged quantum superiority over classical computing is very thin.

Despite all this, a quantum computer can do amazing feats that are just unthinkable with a classical computer. The reason is twofold. On one hand, “modest in theoretical terms” translates to “huge, enormous impact in practice” for certain applications, and many of them are very, very important.

Think of Grover’s algorithm, one of the most well-known quantum algorithms, and its consequences. Suppose you have a deck of 100 cards, where one is an ace and the other 99 are non-aces. The deck is shuffled and handed over to you, and your goal is to find the ace with the minimum number of draws possible. It is easy to calculate the odds classically: on average, it will take roughly 50 draws before you find the ace.

However, if you can represent the deck on a quantum computer and operate Grover’s algorithm, one can show that only roughly 10 “quantum draws” are sufficient to find the ace—a quadratic speedup over the classical case. This might sound completely silly, counterintuitive, and ultimately not very useful at all.

But what if instead of 100 cards you have several billions of them? What if, instead of cards, you are searching for a particular protein-encoding gene sequence in a DNA strand? What if you are searching for the best catalyst for an industrial chemical reaction, or the best logistic scheduling for a fleet of delivery trucks over an infinity of possible routes?

These kinds of problems are called optimization problems. They are very general, found in everyday life, and very, very difficult to solve when the size of the problem grows too much. In these cases, even “just” a quadratic speedup makes the difference between hundreds of years and a bunch of seconds of computation time necessary to find a solution.
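
As a rough, purely illustrative back-of-the-envelope comparison (the exact constants depend on the problem and the implementation), here is how the number of classical draws compares to the number of Grover iterations as the search space grows:

```python
import math

# Illustrative only: classical search needs about N/2 draws on average to find
# one marked item among N, while Grover's algorithm needs roughly (pi/4) * sqrt(N)
# quantum queries.

for N in (100, 10**9, 10**12, 10**15):
    classical = N / 2
    grover = (math.pi / 4) * math.sqrt(N)
    print(f"N = {N:>16,}: ~{classical:>18,.0f} classical draws  vs  ~{grover:>12,.0f} Grover queries")
```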

Arguably, the most important civilian application of quantum computers will be efficiently finding the solutions to such problems, with a tremendous impact on society:

  • new drug treatments
  • better energy efficiency
  • smoother traffic flow
  • and much more

Notable areas of interest, which are in substantial part driving the investments toward the realization of a quantum computer, include:

  • better and faster AI
  • genome sequencing
  • financial market predictions

The applications are countless, and for this reason alone it is not surprising how much quantum computing R&D has been growing in the last few years.

But there is another interesting reason why quantum computing is important, especially for a CISO. We mentioned before that there are only a handful of problems currently known to be easily solvable on a quantum computer but believed to be really hard (even from a theoretical point of view) for classical computers. Well, it turns out that these few, particular problems are extremely important: they are at the core of basically every cryptographic security solution in use today!

Cryptosystems such as RSA and elliptic-curve cryptography are ubiquitous in our digital society. They are used in virtually every commercial and non-commercial security solution, and the whole Internet (and any communication network younger than the telegraph) would literally be unthinkable without these mathematical constructs.

The point is, these algorithms are only secure because nobody, even after hundreds of years of research, has found an efficient way to solve the underlying mathematical problems. These cryptosystems work because, in order to break the security of the scheme, an adversary would need to solve one of these monumentally hard problems, which is impossible today even with the fastest classical supercomputer.

And this is where quantum computers come into play. All these cryptosystems were developed quite a few years ago, at a time when quantum computers were still mostly unheard of. Since 1994, with the discovery of Shor’s quantum algorithm, we know that the vast majority of the cryptographic protocols in use today would be vulnerable as soon as a working quantum computer is built.
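
To see concretely why efficient factoring spells the end of RSA, here is a toy sketch with absurdly small numbers, purely for illustration: once the secret prime factors of the public modulus are known (which is exactly what Shor’s algorithm would deliver), the private key can be reconstructed with elementary arithmetic.

```python
# Toy illustration of why efficient factoring breaks RSA. With real key sizes the
# prime factors p and q cannot be found classically in any reasonable time; Shor's
# algorithm would recover them efficiently on a large enough quantum computer.

p, q = 61, 53                # the secret primes (what Shor's algorithm would recover)
n = p * q                    # 3233: the public RSA modulus
e = 17                       # the public exponent
phi = (p - 1) * (q - 1)      # knowing p and q makes phi easy to compute...
d = pow(e, -1, phi)          # ...and hence the private exponent (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # "textbook RSA" encryption with the public key
recovered = pow(ciphertext, d, n)  # decryption with the reconstructed private key
assert recovered == message
print(f"modulus n = {n}, recovered private exponent d = {d}, decrypted message = {recovered}")
```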

This is the reason why, after 1994, research in quantum computing has always had military and intelligence organizations as generous sponsors. There is currently an arms race to become the first superpower able to use quantum computing to break the security of adversaries’ communication networks. This would have potentially devastating consequences, as the history of the Enigma cryptographic machines during WW2 teaches us.

The truth is, the world is not yet ready for quantum computers. If tomorrow someone were to suddenly succeed in building one, the consequences would probably be devastating. It is true that such a feat is probably not really going to happen tomorrow.

However, given the impressive acceleration of the pace in technological achievements toward the realization of a working quantum computer, it is probably time to start worrying. And, in fact, people really are.

How near is quantum supremacy?

The short answer is: nobody knows. Quantum computers used to be the technology that was considered “constantly 20 years away”, but recent breakthroughs and the accelerated pace of technological advancement hint that this might no longer be the case. According to some of the major experts in the field:

  • Michele Mosca [Oxford, 1996]: “20 qubits in 20 years” – and this really happened
  • Microsoft Research [October 2015]: “Recent improvements in control of quantum systems make it seem feasible to finally build a quantum computer within a decade”
  • Michele Mosca ([NIST, April 2015], [ISACA, September 2015]): “1/7 chance of breaking RSA-2048 by 2026, 1/2 chance by 2031”
  • Michele Mosca [London, September 2017]: (updating a previous prediction) “1/6 chance within 10 years”
  • Simon Benjamin [London, September 2017]: speculates that if someone is willing to “go Manhattan project” then “maybe 6-12 years”

On the other hand, some experts are skeptical that a fully working quantum computer can ever be built at all. The reason, they say, is that the big technological challenges met so far might eventually prove impossible to overcome: there might be some inherent physical reason, or yet-unknown fundamental law, that simply makes it impossible to scale the engineering effort beyond a small number of qubits.

It is worth pointing out that nothing in the current scientific consensus supports this theory, which is a minority point of view among physicists. For example, world-renowned expert Scott Aaronson offers a 100,000 USD prize to whoever can scientifically prove that building a quantum computer is impossible. Nobody has claimed the prize so far.

Furthermore, the problem of decoherence might not be impossible to overcome. Quantum error correcting codes, or QECC, are a special way to encode quantum information by employing many physical qubits grouped together in order to represent a single logical one, in such a way that the whole system is self-correcting. That is, it constantly checks and repairs itself over time, thus overcoming the problem of decoherence.

The number of physical qubits necessary to represent a single logical qubit is currently on the order of thousands, so not too far from the current state of the art. Moreover, this number will probably shrink in the near future, as better QECC are discovered and “less noisy” physical qubits are realized.
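
As a purely illustrative back-of-the-envelope calculation (both figures below are assumptions for the sake of the example; published estimates of the error-correction overhead and of the logical qubits needed for cryptographically relevant tasks vary widely):

```python
# Rough, assumed figures only: how many physical qubits might a cryptographically
# relevant, error-corrected quantum computer need?

physical_per_logical = 1_000    # assumed error-correction overhead per logical qubit
logical_qubits_needed = 4_000   # assumed logical qubits for Shor on a 2048-bit modulus

total_physical = physical_per_logical * logical_qubits_needed
print(f"~{total_physical:,} physical qubits under these assumptions")   # ~4,000,000
```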

As soon as a QECC is implemented, this might substantially boost the scaling capability of a quantum computer, making it possible to run quantum algorithms with thousands, or even millions, of logical qubits. This would really be a tipping point in quantum computing technology, and we might not be that far away.

Who is leading the quantum revolution?

Currently, entities in both the private and public sectors are engaged in the race to build a quantum computer. In addition to startups like Rigetti, we find big players like Intel and Microsoft, each with their own competing R&D programs. Last year Google announced their Bristlecone architecture, potentially able to scale up to 72 qubits. Also last year, IBM announced a stable 50-qubit prototype, and they recently started offering (through the IBM Q cloud infrastructure) commercial access to a 20-qubit QC and free access to a 16-qubit QC, for testing and training purposes.

Other companies like D-Wave use a different approach. They provide quantum-like machines with thousands of qubits, but weaker quantum computing capabilities, only interesting for very specific applications.

Universities and research institutes across the globe are more focused on advancing the efficiency of the underlying technologies and quantum algorithms than on providing a commercially working machine. We do not know what military and intelligence organizations are up to at the moment, but we know for sure (also from famous leaked material now publicly accessible online) that the big superpowers, in both the Western and the Eastern world, started their own quantum computing programs several years ago.

In short, things are moving fast, and this is very bad from the point of view of IT security as we previously discussed. Luckily, there are solutions which do not require waiting for the “Crypto Doomsday” before taking action.

Quantum-resistant, or quantum-safe, cryptography is classical cryptography (it runs on today’s computers, for instance) that is specifically designed to be resistant against future quantum computers. Sometimes it is also called “post-quantum cryptography”, but this naming is misleading; we will cover the difference in a future blog post.

We have already discussed the existence of mathematical problems so hard that not even a quantum computer is believed to be able to solve them efficiently. One can use those problems and other security techniques in order to build cryptographic schemes (public-key encryption, digital signatures, zero-knowledge proofs, etc.) which will likely remain unbreakable even for a QC.

These schemes have only been studied relatively recently, but they have now reached a point of maturity and are ready to be standardized and deployed. And this is happening right now.

In 2016, NIST launched its Post-Quantum Cryptography Standardization process. In a similar fashion to what happened with the standardization of cryptographic tools such as AES and SHA-3, NIST solicited researchers and scientific institutions to submit proposals for quantum-safe cryptography that could replace schemes such as RSA, DSA, and elliptic curves.

These new algorithms are currently being tested and evaluated, and one or more of them will be selected as new cryptographic standards. One of the proposed schemes, Gravity-SPHINCS, was submitted to NIST by a team of researchers including Kudelski Security’s VP of Technology, J.P. Aumasson.

Unfortunately, the process of standardizing cryptography is very, very slow. NIST is aiming at a final draft by the mid-2020s, and we know from previous crypto standardization experience that the process of building trust into, adopting, and deploying a cryptographic standard is a lengthy process as well.

This means that, even in the mildly optimistic scenario that nobody will build a quantum computer for the next 20 years, it is time now to start addressing the problem from a security perspective, and for certain applications, it might even be too late already. In fact, certain data is so sensitive that it needs to retain its confidentiality or integrity for decades or even hundreds of years.

Things such as medical records, genetic data, and financial records need to be safeguarded for a period of time that often exceeds the owner’s lifetime—and sometimes this is even mandated by law.

Currently, virtually all civilian, military, and diplomatic communication is unprotected against a quantum computer. We know that powerful actors have already spotted the opportunity: they have started to intercept and harvest encrypted data into large cold-storage data banks. They cannot break into those secrets right now, but one day, if quantum computers become available, they might. They just need to wait.

This means, for example, that if you submit a genetic test online today, even if it is protected with the most secure cryptographic solutions currently used, its confidentiality cannot be ensured for your children or grandchildren.

What should CISOs do about quantum computing?

CISOs should start planning the transition to a quantum world now. This is especially true if you are responsible for the integrity and confidentiality of long-term valuable data. You should keep advancements in quantum technology on your radar, and start preparing now for the migration to quantum-safe cryptography and quantum-resistant security solutions.

The NIST competition is a good place to start. The first-round evaluation process has just ended, and the algorithms selected for the second round will undoubtedly already offer strong quantum security guarantees.

We are going to cover quantum-resistant cryptography in detail in a future blog post, but one word of advice that we can already give now is: whenever possible, it is better to rely on symmetric-key cryptography rather than public-key cryptography.

Symmetric-key cryptography is generally considered to be more resistant to a QC than public-key cryptography. This will change in the near future, as trust in the quantum-safe cryptography standards currently under evaluation is built over time, so it is better to keep an eye on that.
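
As a minimal illustration of that advice, here is a sketch of authenticated encryption with a 256-bit symmetric key using AES-GCM from the widely used Python cryptography package (the package and the sample data are just an example, not a prescription for your stack). Since Grover’s algorithm at best halves the effective key length, a 256-bit key is generally expected to retain a comfortable security margin against a QC.

```python
# Minimal sketch: authenticated encryption with a 256-bit symmetric key, using
# AES-GCM from the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key for a post-quantum margin
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce; must never repeat for a given key
plaintext = b"a long-lived medical record"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # None = no associated data

assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```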

Unfortunately, quantum-safe cryptography is far more complicated than the traditional public-key schemes we are used to, like RSA and elliptic curves. The usual advice against cooking up your own cryptographic schemes applies here more than ever. It is not important for now to understand how these schemes work, but be prepared for the possibility that the schemes deployed in today’s security solutions might soon become insecure and require a drop-in replacement, which might come with some drawbacks, e.g. in terms of performance.

In any case, it is crucial for a farsighted CISO to start building quantum-technology know-how inside their organization. After cryptography engineers, machine learning specialists, and data scientists, quantum IT specialists will probably be the next big shortage in the talent market, and if the trend continues the shortage will be a severe one, because these competencies are really hard to acquire.

It is important to understand the risks and the challenges, and for this, it is important to understand the technology. It is a good idea to invest in education and training, both in the area of quantum-safe security and in the area of quantum algorithms, both of which might prove crucial assets in the future.

Finally, remember that quantum technology is definitely disruptive, but also opens up a lot of new opportunities, especially in IT security. It feels like we are again at the beginning of a new exciting era of computing revolution, and the positive applications will undoubtedly surpass the negative ones.

Being able to harness the power of quantum computing would prove to be an invaluable tool for threat detection, analysis, and countermeasures, and the possibility of using quantum computers in a “defensive” way (for example by encrypting quantum data, or performing quantum key distribution) has the potential to be a real game-changer in the field of cybersecurity.

In short: “Nature is quantum, dammit”, and cybersecurity is going to be, too. Get ready for that.

Tommaso Gagliardoni is a senior cryptography expert at Kudelski Security, currently focused on leading the innovation efforts in redefining cybersecurity in the new Quantum Age. A cryptographer, privacy hacktivist, and expert in blockchain and quantum security, Tommaso has an extensive academic background and has published in many industry journals. He is known for solving the longstanding problem of adaptive quantum authentication (EUROCRYPT 2018, TQC 2019) and breaking the security of the ISO-standard smart card protocol PLAID (Real World Crypto 2015, SSR 2015). As a subject expert on quantum security, he collaborates with national bodies within the Wassenaar Arrangement framework and with the World Economic Forum.
